Contrast enhancement in emission tomography by way of synergistic PET/CT image combination

https://doi.org/10.1016/j.cmpb.2007.12.009

Abstract

Fused image display is well accepted as a powerful tool for visual image analysis and comparison. In clinical practice it is a mandatory step when studying images from a dual PET/CT scanner. However, the display methods implemented on most workstations simply show both images side by side, in separate, synchronized windows. Sometimes the images are presented superimposed in a single window, which prevents the user from performing quantitative analysis. In this article a new image fusion scheme is presented that allows quantitative analysis to be performed directly on the fused images.

Methods

The objective is to preserve the functional information provided by PET while incorporating higher-resolution details from the CT image. The process relies on discrete wavelet-based image merging: both images are decomposed into successive detail layers using the “à trous” transform. This algorithm performs a wavelet decomposition of the images and provides versions of them at progressively coarser spatial resolution. The high spatial frequencies of the CT, i.e. its details, can easily be obtained at any resolution level. A simple model is then inferred to compute the missing details of the PET scan from the high-frequency detail layers of the CT. These details are incorporated into the PET image on a voxel-to-voxel basis, giving the fused PET/CT image.

Results

Aside from the expected visual enhancement, a quantitative comparison of the initial PET and CT images with the fused images was performed in 12 patients. The results were in accordance with the objectives of the study, in the sense that the mean PET intensity of the organs was preserved in the fused image.

Conclusion

This alternative approach to PET/CT fusion display should be of interest to users seeking a more quantitative use of image fusion. The proposed method is complementary to more classical visualization tools.

Introduction

Multimodality imaging has become a mandatory examination in many clinical applications. PET/CT hybrid scanners are today an essential tool in the diagnosis, staging and treatment of cancer [1], [2], [3]. The complementary information provided by this kind of dual imaging device reveals the physiological state of malignant tumours through PET while, at the same time, the CT image offers anatomical accuracy through its high spatial resolution. One specific use of PET/CT currently attracting increasing interest is intensity-modulated radiotherapy (IMRT). Recent work indeed suggests that PET/CT-guided IMRT improves treatment planning while reducing tissue doses, for example in head and neck cancer [4].

Aside from the problem of spatial co-registration of PET and CT images, which permits their superposition on a voxel-to-voxel basis, the effective management of images in day-to-day clinical use centres on their visualization. This step is of great importance since it allows the images to be compared and an accurate judgement to be made from their functional and morphological complementarity. Two main types of visualization techniques exist [5]. In the first, the two images are displayed side by side in two separate windows, with synchronized commands and cursors. This display method has the advantage of preserving information, but efficient visual comparison of structures remains difficult. The second visualization approach is the overlay of both images in a single window. Several approaches can be chosen, but most require two look-up tables, generally a grey-level one for the CT image and a colour one for the PET image. An easy way to proceed is to display voxels from each image alternately, like a mosaic. Another approach is to blend the images using a single look-up table, but then the intensity in a voxel is a weighted sum of the PET and CT intensities in that voxel (a minimal sketch of this blending is given below). Recent work in this domain includes multi-image voxel compositing [6], in which the CT image is decomposed into several layers with different contrast-adjustment ranges, each corresponding to a particular tissue (bone, lungs, soft tissues, etc.). These layers are then weighted, mixed together and finally blended with the PET image.
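As a concrete illustration of the blended overlay just described, the following is a minimal sketch (the 50/50 default weight and the normalization are arbitrary illustration choices, not taken from the paper):

```python
import numpy as np

def _norm(im):
    """Rescale an image to [0, 1] (guarding against constant images)."""
    span = np.ptp(im)
    return (im - im.min()) / span if span > 0 else np.zeros_like(im, dtype=float)

def blend_overlay(pet, ct, alpha=0.5):
    """Classical single-window fusion display: each fused voxel is a
    weighted sum of the normalized PET and CT intensities."""
    return alpha * _norm(pet) + (1.0 - alpha) * _norm(ct)
```

This makes the limitation discussed next explicit: once the voxel value is a weighted mixture, the original PET intensity can no longer be read off the fused image.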

In oncology staging and treatment planning it is of decisive importance to follow the evolution of both tumour activity and tumour size. However, such quantitative measurements cannot be derived directly from the fused images, since the intensity in a voxel is a mixture of the corresponding PET and CT intensities. Even if the anatomical information is preserved to a certain extent, it is impossible to measure the PET intensity in a given region of interest. Moreover, the complementarity offered by fusion display loses much of its value when it is limited to visual inspection.

Looking beyond medical imaging, one may notice that image fusion concerns many fields, such as geosciences [7], food safety [8], fingerprint analysis [9], biometric imaging [10] and forensic investigations [11]. Nevertheless, the objective of all these studies remains largely within the scope of visual enhancement, without considering the quantitative aspect that is of utmost importance in medical imaging. Some general surveys have also been carried out, but mostly without specific attention to the medical aspects of image fusion algorithms [12].

In this article, we introduce a new fusion display scheme that preserves the quantitative functional information provided by PET while, at the same time, maintaining the morphological details of the CT. The algorithm is based on a multi-resolution analysis of the PET and CT images using wavelets. After presenting the theory and implementation of the method, we apply it to a number of clinical whole-body image datasets and perform a quantitative analysis to demonstrate its potential for preserving relevant information.


Basic theory of the continuous wavelet transform (CWT)

For the sake of clarity, definitions are given for a 1D function f; a more general theory can be found in [14]. The wavelet transform W of a 1D, real, square-integrable function f is defined by

$$W(a,b) = \int_{-\infty}^{+\infty} f(x)\,\psi^{*}\!\left(\frac{x-b}{a}\right)\mathrm{d}x,$$

where a is the scale of the analysis, b is the translation parameter corresponding to the position of the wavelet ψ, and ψ* stands for the complex conjugate of ψ. W(a, b) is the inner product of f with the scaled and translated versions ψ_{a,b}(x) = ψ((x − b)/a) of ψ:

$$W(a,b) = \int_{-\infty}^{+\infty} f(x)\,\psi_{a,b}^{*}(x)\,\mathrm{d}x = \langle f, \psi_{a,b} \rangle.$$
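In practice the scale parameter is discretized. A standard choice, consistent with the dyadic “à trous” algorithm used below (this discretization is not spelled out in the excerpt; it keeps the unnormalized convention of the definition above), is a = 2^j with translations b = k·2^j:

$$\psi_{j,k}(x) = \psi\!\left(\frac{x - k\,2^{j}}{2^{j}}\right), \qquad W\!\left(2^{j},\, k\,2^{j}\right) = \langle f, \psi_{j,k} \rangle, \qquad j, k \in \mathbb{Z}.$$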

Design considerations

In this section we present the iterative “à trous” algorithm, which can easily be applied to a given image I. This discrete wavelet transform algorithm was introduced by Dutilleux [16], developed by Holschneider [17] and detailed by Starck et al. [18]. The process yields an image sequence of progressively coarser spatial resolution by performing successive convolutions with a low-pass filter h. At each iteration j, the spatial resolution of the approximation image app_{j−1} is degraded to give the coarser approximation app_j.
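As a minimal sketch of this decomposition (assuming the B3-spline filter h = (1/16)[1 4 6 4 1] classically associated with the “à trous” transform; the excerpt does not specify the paper's filter), each level smooths the previous approximation with an increasingly dilated kernel and stores the difference as a detail layer:

```python
import numpy as np
from scipy.ndimage import convolve1d

# B3-spline scaling filter classically used with the "a trous" transform
# (assumption: the filter actually used in the paper is not given here).
H = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0

def atrous_kernel(level):
    """Return H with 2**level - 1 zeros ("holes") inserted between taps."""
    step = 2 ** level
    h = np.zeros((len(H) - 1) * step + 1)
    h[::step] = H
    return h

def atrous_decompose(image, n_levels):
    """Split a 2D image into detail layers plus a coarse approximation,
    such that image == approx + sum(details)."""
    approx = np.asarray(image, dtype=float)
    details = []
    for j in range(n_levels):
        h = atrous_kernel(j)
        smoothed = approx
        for axis in (0, 1):                     # separable 2D filtering
            smoothed = convolve1d(smoothed, h, axis=axis, mode='mirror')
        details.append(approx - smoothed)       # detail layer at scale j
        approx = smoothed                       # app_j for the next pass
    return details, approx
```

Because each detail layer is the difference of two successive approximations, the sum of all layers plus the final approximation telescopes back to the original image, which is the property the fusion scheme exploits.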

Alternative “à trous” implementation and combination process

The method relies on the fact that the PET and CT images are spatially co-registered, i.e. both images can be superimposed and are reconstructed with the same voxel size. The fusion display approach presented here aims at preserving the relevant information provided by each modality: anatomical details and high spatial resolution from the CT on the one hand, and functional data from the PET on the other. For this purpose, the anatomical details provided by the CT and corresponding to resolution levels finer than those present in the PET are extracted and injected into the PET image.
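The excerpt stops before the combination rule itself; below is a minimal sketch of the detail-injection step, reusing atrous_decompose from the sketch above. The single global gain alpha and the choice of fine_levels are hypothetical illustration parameters standing in for the model the paper infers from the CT detail layers:

```python
def fuse_pet_ct(pet, ct, n_levels=3, fine_levels=(0, 1), alpha=1.0):
    """Inject the fine-scale CT detail layers into the PET image,
    voxel by voxel, leaving the coarse PET content untouched."""
    ct_details, _ = atrous_decompose(ct, n_levels)
    fused = np.asarray(pet, dtype=float).copy()
    for j in fine_levels:                  # only the finest CT scales
        fused += alpha * ct_details[j]     # alpha: illustrative global gain
    return fused
```

Since only high-pass, essentially zero-mean detail layers are added, the mean PET intensity within an organ is left largely unchanged, which is the quantitative property verified below.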

Status report

An example of CT image decomposition using the “à trous” algorithm (version using dyadic then linear transformation) is shown in Fig. 2. The reconstructed CT image (Fig. 2g) is very similar to the original one (Fig. 2a), since their voxel-to-voxel difference (Fig. 2h) is essentially zero apart from a limited number of voxels. Quantitative measurements on Fig. 2h give a mean value of 1.9 × 10⁻¹⁰ ± 7.4 × 10⁻⁷, a minimum of −1.5 × 10⁻⁵ and a maximum of 1.5 × 10⁻⁵ (mean of the absolute values: 5.4 × 10⁻⁶ ± 5.7 × 10⁻⁶).
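This near-zero residual can be reproduced with the decomposition sketch above (the random array is of course only a stand-in for a real CT slice):

```python
rng = np.random.default_rng(0)
ct = rng.random((128, 128))              # stand-in for a CT slice

details, approx = atrous_decompose(ct, n_levels=3)
residual = (approx + sum(details)) - ct  # telescoping sum: exact up to rounding
print(residual.min(), residual.max())    # on the order of 1e-16 at most
```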

Lessons learned

In this article a new approach to PET/CT image fusion has been proposed for whole-body imaging. Contrary to the great majority of existing methods, the aim of the presented work was to provide the user with a fused image preserving both anatomical and functional data. The objective is therefore different from simply presenting two images in a visually convenient fusion display, in the sense that quantitative analysis is also considered here as a possible step. In the proposed methodology, the details missing from the PET are computed from the high-frequency detail layers of the CT and injected into the PET image on a voxel-to-voxel basis.

Future plans

Concerning the algorithm, it would be of great interest to define a local model instead of a global one in order to modify the detail layers of the CT. This improvement could eliminate artifacts corresponding to structures present in the CT but not in the PET. The results presented in this study were obtained using 2D calculations only. Indeed, most discrete wavelet transforms are still implemented in 2D, and 3D implementations either do not exist or are not yet well validated. However, a 3D extension of the “à trous” transform is conceptually straightforward, since the filtering is separable.

Conflict of interest statement

None declared.

