Abstract
Reliable attenuation correction methods for quantitative emission CT (ECT) require accurate delineation of the body contour and often necessitate knowledge of internal anatomic structure. Two broad classes of methods have been used to calculate the attenuation map: transmissionless and transmission-based attenuation correction techniques. Whereas calculated attenuation correction, belonging to the first class of methods, is appropriate for brain studies, more accurate methods are required in clinical applications where the attenuation coefficient distribution is not known a priori and for areas of inhomogeneous attenuation such as the chest. Measured attenuation correction overcomes this problem and uses different approaches to determine this map, including transmission scanning, segmented magnetic resonance images, or appropriately scaled CT scans acquired either independently on separate scanners or simultaneously on multimodality imaging systems. Combining data acquired on different imagers suffers from the usual problems of working with multimodality images, namely, accurate coregistration of images from the different modalities and assignment of attenuation coefficients. A current trend in ECT is to use transmission scanning to reconstruct the attenuation map. Combined ECT/CT imaging is an interesting approach; however, it considerably complicates both the scanner design and the data acquisition and processing protocols. Moreover, the cost of such systems may be prohibitive for small nuclear medicine departments. A dramatic simplification could be made if the attenuation map could be obtained directly from the emission projections, without the use of a transmission scan. This is being investigated either by using a statistical model of the emission data or by applying consistency conditions that allow one to identify the operator of the problem and, thus, to reconstruct the attenuation map. This article presents the physical and methodologic basis of attenuation correction and summarizes recent developments in algorithms used to compute the attenuation map in ECT. Other potential applications are also discussed.
Radionuclide imaging, including SPECT and PET, relies on the tracer principle, in which a minute quantity of a radiopharmaceutical is introduced into the body to monitor the patient's physiologic function. In a clinical environment, the resulting radionuclide images are interpreted visually to assess the physiologic function of tissues, organs, and organ systems or can be evaluated quantitatively to measure biochemical and physiologic processes of importance in both research and clinical applications. Nuclear medicine relies on noninvasive measurements performed with external (rather than internal) radiation detectors in a way that does not allow the radionuclide measurement to be isolated from surrounding body tissues or from cross-talk from radionuclide uptake in nontarget regions. Therefore, the image quality and quantitative accuracy of emission tomography reconstructions are degraded by several physical factors, including (1,2): (a) the attenuation of the photons traveling toward the detector, (b) the detection of scattered as well as primary photons, (c) the finite spatial resolution of the imaging systems, (d) the limited number of counts one is able to collect when imaging patients, and (e) physiologic as well as patient motion. Whereas all of these effects limit practical quantitative emission tomography, the most important are photon absorption in the object and the contribution to the images of events arising from photons scattered in the object (3,4). Both absorption and scattering are components of the general process of photon attenuation.
THE PROBLEM OF PHOTON ATTENUATION IN EMISSION TOMOGRAPHY
The physical basis of this phenomenon lies in the natural property that photons emitted by the radiopharmaceutical will interact with tissue and other materials as they pass through the body. For photon energies representative of those encountered in nuclear medicine (i.e., 68–80 keV for 201Tl to 511 keV for positron emitters), photons emitted by radiopharmaceuticals can undergo photoelectric interactions, in which the incident photon is completely absorbed. In other cases, the primary radionuclide photon interacts with loosely bound electrons in the surrounding material and is scattered. The trajectory of the scattered photon generally carries it in a different direction than that of the primary photon. The energy of the scattered photon may be lower than (in the case of incoherent scattering) or the same as (in the case of coherent scattering) that of the incident photon. It is worth emphasizing that for soft tissue (the most important constituent of the body), a moderately low-Z material, there are 2 distinct regions of single-interaction dominance: photoelectric absorption below approximately 20 keV and incoherent (Compton) scattering above it. Moreover, the percentage of scattered events that undergo Compton interactions in the object is >99.7% at 511 keV for water, which renders the number of interactions by photoelectric absorption or coherent scattering negligible.
Mathematically, the magnitude of photon attenuation can be expressed by the exponential equation:

$$\Phi = \Phi_0 \exp\left(-\int_S \mu \, ds\right) \qquad \text{(Eq. 1)}$$

where $\Phi_0$ and $\Phi$ are the incident and transmitted photon fluences (in units of photons per unit area) and $ds$ is a differential of the thickness of tissue encountered as the beam of photons passes through the body along path $S$. The parameter $\mu$ is the linear attenuation coefficient, which represents the probability that the photon will undergo an interaction while passing through a unit thickness of tissue. Therefore, the linear attenuation coefficient is a measure of the fraction of primary photons that interact while traversing an absorber and is expressed in units of inverse centimeters (cm−1).
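To make Eq. 1 concrete, the short Python sketch below evaluates the transmitted fraction of primary photons through a uniform water path. The coefficients are approximate narrow-beam values for water at representative emission energies, quoted for illustration rather than taken from any calibration.

```python
import numpy as np

# Approximate narrow-beam linear attenuation coefficients of water (cm^-1)
# at representative emission energies (illustrative values).
MU_WATER = {
    "201Tl (~72 keV)": 0.19,
    "99mTc (140 keV)": 0.15,
    "18F (511 keV)": 0.096,
}

def transmitted_fraction(mu: float, depth_cm: float) -> float:
    """Fraction of primary photons surviving a uniform water path (Eq. 1)."""
    return float(np.exp(-mu * depth_cm))

for label, mu in MU_WATER.items():
    print(f"{label}: 10 cm of water transmits {transmitted_fraction(mu, 10.0):.1%}")
```

For a source at 10-cm depth, roughly 15% of 201Tl photons, 22% of 99mTc photons, and 38% of 511-keV photons escape without interacting, which is consistent with the severity of attenuation discussed in the following paragraphs.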
Linear attenuation coefficients are often referred to as narrow beam or broad beam depending on whether or not the transmitted photon fluence includes scattered transmission photons. The build-up factor arising under the broad-beam conditions of nuclear medicine imaging is defined as the ratio of the transmitted photon fluence to the value predicted from the ideal narrow-beam measurement, in which scatter is excluded from the transmitted beam. Therefore, the build-up factor equals 1 for narrow-beam geometry but increases with depth for broad-beam geometries until a plateau is reached. Figure 1 illustrates this concept with a simple single-detector system measuring uncollimated (broad-beam) and collimated (narrow-beam) photon sources. Narrow-beam transmission measurements are ideally required for accurate attenuation correction in emission tomography; whether they are achieved is determined by the geometry of the transmission data acquisition system.
Furthermore, the situation for photon attenuation differs between PET and SPECT. When a radionuclide distribution is measured in planar scintigraphy or in SPECT, the amount of attenuation depends on the tissue pathlength and the type of tissue (e.g., soft tissue, bone, or lung) that the photon encounters as it travels between the point of emission and the point of detection. When positron-emitting radiopharmaceuticals are used for the imaging study, the imaging system records 2 antiparallel 511-keV photons that are emitted after electron-positron annihilation. In this case, the annihilation photons traverse a total tissue thickness that is equal to the body thickness intersected by the line between the 2 detectors, also called the line of response (LOR).
Figure 2 shows the narrow-beam attenuation as a function of source depth in water for representative photon energies of radionuclides encountered in nuclear medicine and for different attenuating media, calculated using data from the XCOM photon cross-section library available from the National Institute of Standards and Technology (NIST), United States, through its Office of Standard Reference Data (5) and Report 44 of the International Commission on Radiation Units and Measurements (ICRU) (6). The data in Figure 2 show that attenuation of emission photons is severe for both γ-emitting and positron-emitting radionuclides (singles detection). These values also underscore the fact that photon attenuation is an unavoidable process that can affect the diagnostic information gathered from radionuclide imaging in a direct and profound way. Figure 3 shows typical reconstruction artifacts (depression of activity concentration in the center) resulting from the lack of attenuation correction for a uniform distribution of activity in a cylindric phantom. In a clinical setting, because the thickness of tissue varies for different regions of the patient's anatomy, the magnitude of the error introduced by photon attenuation can also vary regionally in the radionuclide image. Therefore, a lesion located deep within the body will produce a signal that is attenuated to a greater degree than that of a superficial lesion. Similarly, a tissue region with uniform radionuclide content that lies below tissue of variable thickness will generate an image with variable count density. This can occur in myocardial perfusion imaging, in which soft-tissue attenuation due to the diaphragm or breast tissue can cause false-positive defects. Reconstruction of tomographic images without attenuation correction can cause erroneously high count densities and reduced image contrast in low-attenuation regions such as the lung. All of these effects can introduce artifacts into radionuclide images that complicate visual interpretation and can cause profound errors when radionuclide images are evaluated quantitatively. For this reason, it is important to understand both the physical processes that underlie photon attenuation and the methods that can be used to correct radionuclide images for these physical factors. Attenuation correction in emission tomography is now widely accepted by the nuclear medicine community as vital for achieving the goal of producing artifact-free, quantitatively accurate data. Although this is no longer the subject of debate in cardiac SPECT (7–9), there are still some controversies regarding its usefulness in routine clinical PET oncology studies (10–12).
ATTENUATION CORRECTION STRATEGIES IN EMISSION TOMOGRAPHY
Figure 4 illustrates the various coordinates used in the derivation of the fundamental relation (2-dimensional [2D] central slice theorem) that links the imaged object f(x,y) and the corresponding attenuation map μ(x,y) to the 1-dimensional (1D) projection data p(s,φ). The general equation describing the measured projections in terms of the radionuclide source distribution inside an attenuating medium is called the attenuated Radon transform and is given in the case of SPECT by:

$$p(s,\phi) = \int_{L(s,\phi)} f(x,y)\, \exp\!\left(-\int_0^{l(x,y)} \mu(x',y')\, dl'\right) dl \qquad \text{(Eq. 2)}$$

where l(x,y) is the distance from the emission point (x,y) in the object to the detector along the line L(s,φ), φ is the angle between the rotating detector plane and the stationary reconstruction plane, and μ(x′,y′) is the attenuation coefficient at position (x′,y′). In the case of PET, the attenuated Radon transform is given by:

$$p(s,\phi) = \exp\!\left(-\int_{L(s,\phi)} \mu(x',y')\, dl'\right) \int_{L(s,\phi)} f(x,y)\, dl \qquad \text{(Eq. 3)}$$

Ideally, one would like to solve the attenuated Radon transform exactly to determine or reconstruct the radionuclide distribution f(x,y). However, because of the complexity of the equation, no exact analytic reconstruction method exists to invert the attenuated Radon transform, especially for the nonuniform attenuation coefficient distributions encountered in medical imaging.
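The structural difference between Eqs. 2 and 3 is easiest to see in discrete form. The following Python fragment, a toy single-ray model rather than a production projector, evaluates both expressions for activity and attenuation values sampled along one ray:

```python
import numpy as np

def spect_ray(f: np.ndarray, mu: np.ndarray, dx: float = 1.0) -> float:
    """Discrete form of Eq. 2 along one ray, with samples ordered from the
    detector inward: each emission sample is weighted by the attenuation
    accumulated over its own path to the detector (the self-sample is
    included, a half-voxel approximation acceptable in a sketch)."""
    path = np.cumsum(mu) * dx   # attenuation from each sample to the detector
    return float(np.sum(f * np.exp(-path)) * dx)

def pet_ray(f: np.ndarray, mu: np.ndarray, dx: float = 1.0) -> float:
    """Discrete form of Eq. 3: a single attenuation factor for the whole LOR."""
    return float(np.exp(-np.sum(mu) * dx) * np.sum(f) * dx)
```

Note that in the PET expression the attenuation factor is independent of where the activity lies along the LOR and factors out of the emission sum entirely; this factorization is what later allows PET attenuation correction to be performed as a simple premultiplication of the measured projections.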
Various approximate methods have been proposed to solve the problem of reconstructing the image from the measured projection data. There are 2 major classes of image reconstruction algorithms used in emission tomography: direct analytic methods and iterative methods (Fig. 5). The classical analytic method for reconstructing images in both emission (i.e., radionuclide) and transmission (e.g., x-ray) tomography is filtered backprojection (FBP). With FBP, the radionuclide distribution f(x,y) is reconstructed from the acquired projection data p(s,φ) in 2 steps: (a) filtering, in which the projections are filtered by the ramp filter; and (b) backprojection, in which the radionuclide value at each pixel (x,y) is computed from values contributed by the filtered projections. At present, in clinical practice, most image reconstructions are performed with analytic methods because they are fast and easier to implement than iterative methods. However, the images produced by analytic algorithms tend to be streaky and display interference between regions of low and high tracer concentration.
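To illustrate the 2 steps, the sketch below implements a deliberately minimal FBP in Python, with an unwindowed ramp filter and nearest-neighbor backprojection; a practical implementation would add interpolation, filter apodization, and careful sampling.

```python
import numpy as np

def fbp(sinogram: np.ndarray, angles_deg: np.ndarray) -> np.ndarray:
    """Minimal filtered backprojection: sinogram is (detector bins x angles)."""
    n_det, _ = sinogram.shape

    # Step (a): ramp-filter each projection in the frequency domain.
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=0) * ramp[:, None], axis=0))

    # Step (b): smear each filtered projection back across the image grid.
    image = np.zeros((n_det, n_det))
    centre = (n_det - 1) / 2.0
    x, y = np.meshgrid(np.arange(n_det) - centre, np.arange(n_det) - centre)
    for k, theta in enumerate(np.deg2rad(angles_deg)):
        s = x * np.cos(theta) + y * np.sin(theta) + centre  # detector coordinate
        idx = np.clip(np.round(s).astype(int), 0, n_det - 1)
        image += filtered[idx, k]
    return image * np.pi / len(angles_deg)
```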
Iterative reconstruction methods have been investigated for >20 y (13). There are, however, 3 major problems that prevented or delayed the use of maximum-likelihood expectation maximization (ML-EM) in commercially produced scanners: (a) the large memory requirements, which arise because of the size of the maximum likelihood matrix; (b) the computational complexity of a single iteration, which is similar to that of the FBP algorithm but also includes a forward projection operation; and (c) the lack of a good stopping criterion that can be used to decide whether the iterative algorithm should be stopped or continued. The stopping criterion is especially important because an excessive number of iterations can enhance noise in the reconstructed images if the noise is not controlled in an appropriate way. It is worth emphasizing that the first 2 concerns no longer exist with current-generation personal computers operating in the GHz range. The stopping criterion remains a good academic problem, although iterative algorithms used clinically are stopped after a fixed number of iterations and are therefore not run to convergence.
Like analytic techniques (e.g., FBP), iterative reconstruction methods include a backprojection step to estimate the radionuclide concentration from values contained in the projection data. Unlike analytic techniques, iterative methods also incorporate a forward projection operation, which estimates the projection data given that the radionuclide distribution in the image is known or can be estimated. The iterative algorithm applies the projection and backprojection operations multiple times to improve the accuracy of the pixel values in the radionuclide image. Because these steps are repeated many times, their accurate and efficient computation is crucial to the accuracy, effectiveness, and speed of the overall reconstruction process.
There are several types of iterative reconstruction algorithms used for reconstructing SPECT and PET image data. The expectation maximization (EM) algorithm is applied in emission tomography as an iterative technique for computing maximum likelihood (ML) estimates of the radioactivity distribution (13). In this approach, the measured data are considered to be samples from a set of random variables whose probability density functions are related to the true radionuclide distribution in the object according to a mathematic model of the data acquisition process. Hudson and Larkin (14) presented an accelerated version of the EM algorithm based on an ordered-subsets approach. The ordered-subsets EM (OSEM) algorithm processes the data in subsets (blocks) within each iteration in a way that accelerates convergence by a factor proportional to the number of subsets. It has been shown that OSEM produces images of similar quality to those produced by the EM algorithm in a fraction of the processing time.
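A compact sketch of the ML-EM update, written with a small dense system matrix for readability, may help fix ideas; practical implementations compute projections on the fly rather than storing the matrix, and OSEM applies the same update to ordered subsets of the projection rows in turn.

```python
import numpy as np

def mlem(A: np.ndarray, p: np.ndarray, n_iter: int = 50) -> np.ndarray:
    """ML-EM for emission tomography.

    A: system matrix (rays x voxels); attenuation and other physics can be
       modeled directly in A, in which case the update compensates for them.
    p: measured projection counts, shape (rays,).
    """
    f = np.ones(A.shape[1])                    # uniform, strictly positive start
    sens = np.maximum(A.sum(axis=0), 1e-12)    # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ f                           # forward project current estimate
        ratio = np.where(proj > 0, p / proj, 0.0)
        f *= (A.T @ ratio) / sens              # backproject ratios, multiply
    return f
```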
Historically, nuclear medicine has been performed by obtaining the radionuclide projection data, to which one typically applied compensations for image degradations either before or after reconstruction. The tomographic emission images were then reconstructed using a conventional FBP algorithm. An attractive property of iterative reconstruction methods is that the algorithm can be modified to incorporate weights or penalties, which reflect the nature of the problem and characteristics of the data acquisition process and scanning system. Iterative reconstruction algorithms are commonly used to incorporate corrections for photon attenuation and also can be used to compensate for spatial resolution losses in both SPECT and PET. Additional constraints and penalty functions can be included to smooth the image for noise reduction purposes or to ensure that the image has other desirable properties. This enables algorithms to be tuned for specific clinical requirements. It has been shown that iterative methods can drastically improve the quality and quantitative accuracy of reconstructed images especially for data having poor statistics that can be encountered in oncologic or other similar studies. Increasingly, compensation for photon attenuation, scattered radiation, collimator response (in the case of SPECT), and other effects is performed by modeling these degradations into an iterative reconstruction method. This trend is likely to continue into the future, and these methods now can be used routinely in a clinical setting because many scanner manufacturers have recently upgraded their reconstruction software by implementing ML-EM-type algorithms.
Reliable attenuation correction methods for emission tomography require determination of an attenuation map, which represents the spatial distribution of linear attenuation coefficients for the region of the patient's anatomy included in the radionuclide imaging study. After the attenuation map is generated, it can be incorporated into the reconstruction algorithm to correct the emission data for errors contributed by photon attenuation, scattered radiation, or other physical perturbations. The attenuation correction process can be applied (a) before reconstruction (e.g., using the geometric mean of opposed projections (15)), (b) after reconstruction (e.g., the Chang algorithm (16), sketched below), or (c) integrated within the transition matrix of an iterative reconstruction algorithm (e.g., (17)). The attenuation map contains information about the distribution of linear attenuation coefficients and accurately delineates the contours of structures in the body. The iterative reconstruction algorithm uses this information to calculate the attenuation to the boundary of each attenuating region for each pixel along the ray between the points of emission and detection in SPECT, or between the 2 detection points in PET, before the resulting values are summed to estimate the projected or backprojected pixel values. The natural approach for implementing attenuation and scatter models within an iterative reconstruction algorithm incorporates these effects in both the forward projection and the backprojection steps. In that case, however, the transition matrix is considerably larger than is necessary if only attenuation and geometric factors are included, and computation is slow because scatter is essentially recalculated and added in each iteration. Computational efficiency can be improved by including scatter only in the forward projection step (18). For each iteration, the estimated projection values are calculated and compared against the measured projection values. The comparison is used to generate a correction term that updates the estimate of radionuclide concentrations in the image. This procedure iteratively improves the accuracy of the estimated images by modeling the photon attenuation present in the measured projection data. If a registered patient-specific attenuation map is available, nonhomogeneous attenuation can easily be incorporated in the image reconstruction process.
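As an illustration of option (b), a brute-force sketch of the first-order Chang correction follows: for every pixel, the survival probability exp(−∫μ dl) toward the detector is averaged over the projection angles and inverted, and the uncorrected reconstruction is multiplied by the resulting factor map. This is a toy implementation with a pixel-stepping ray march and O(N³) cost, not an optimized clinical routine.

```python
import numpy as np

def chang_factors(mu: np.ndarray, angles_deg, dx: float = 1.0) -> np.ndarray:
    """First-order Chang correction factors for a 2D attenuation map mu (ny x nx)."""
    ny, nx = mu.shape
    acc = np.zeros_like(mu, dtype=float)
    for theta in np.deg2rad(angles_deg):
        d = np.array([np.cos(theta), np.sin(theta)])      # direction toward detector
        for iy in range(ny):
            for ix in range(nx):
                path, pos = 0.0, np.array([ix, iy], dtype=float)
                while 0 <= pos[0] < nx and 0 <= pos[1] < ny:
                    path += mu[int(pos[1]), int(pos[0])] * dx  # march to the boundary
                    pos += d
                acc[iy, ix] += np.exp(-path)               # survival at this angle
    return len(angles_deg) / acc   # multiply the uncorrected image by these factors
```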
The methods for generating the attenuation map generally can be described as falling within 2 main classes. The first class includes transmissionless correction methods based on an assumed distribution and boundary of attenuation coefficients (calculated methods), statistical modeling for simultaneous estimation of attenuation and emission distributions, or consistency conditions criteria. The second class includes correction methods based on transmission scanning using an external radionuclide source, CT, or segmented magnetic resonance imaging (MRI) data. These methods vary in complexity, accuracy, and required computation time. To date, the most accurate attenuation correction techniques are based on measured transmission data acquired before (preinjection), during (simultaneous), or after (postinjection) the emission scan. Transmission-based attenuation correction has traditionally been performed in PET, which started mainly as a research tool with greater emphasis on accurate quantitative measurements, whereas it arrived only more recently in SPECT. There are 2 simple reasons for this: (a) attenuation correction in PET is easy because it requires a simple premultiplication of the measured emission data by the corresponding attenuation correction factors, and (b) the attenuation correction factors are huge, and quantitation is impossible without attenuation compensation. Indeed, the magnitude of the correction factors required in PET is far greater than in SPECT. For a given projection, the SPECT attenuation correction factors rarely exceed 10 in virtually all clinical imaging, whereas for PET they often exceed 100 for some LORs through the body. Typically, the magnitude of the correction factors reaches approximately 20 in PET, whereas in SPECT it decreases from 9–10 for 201Tl (69–80 keV) and 6–7 for 99mTc (140 keV) to nearly 3 for 18F (511 keV) (19).
The attenuation factor for a given LOR in PET depends on the total distance traveled by both annihilation photons ((a+b) in Fig. 6) and is independent of the emission point along this LOR. In comparison, SPECT models attenuation processes in which the emitted photon traverses only part of the patient’s anatomy before reaching the detector. Figure 6 shows a transmission image along with the attenuation paths for both single-photon and coincidence detection modes. Therefore, correction for attenuation is only approximate in SPECT, whereas it is more exact, more accurate, and limited only by the statistics of the acquired transmission data in PET. In addition, the problem of photon attenuation in SPECT has proven to be more difficult to solve than for PET, and several types of correction methods have been suggested (1,3). Nevertheless, recent iterative algorithms converge to a very accurate solution even in SPECT.
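A small numeric illustration of this geometric difference, using the theoretic narrow-beam coefficients for water cited elsewhere in this article (0.15 cm−1 at 140 keV, 0.096 cm−1 at 511 keV) and an assumed 40-cm body thickness:

```python
import numpy as np

mu_140, mu_511, D = 0.15, 0.096, 40.0   # cm^-1, cm; illustrative geometry
depths = np.array([2.0, 8.0, 15.0])     # emission depths along the ray (cm)

# PET: attenuation depends only on the total path a + b = D, so the
# attenuation correction factor is the same for every point on the LOR.
pet_acf = np.exp(mu_511 * D)            # ~46 for this 40-cm path

# SPECT (99mTc): the photon is attenuated only over its own depth, so the
# required correction varies with the (unknown) emission position.
spect_acf = np.exp(mu_140 * depths)     # ~1.3, ~3.3, ~9.5

print(f"PET ACF, any emission point on the LOR: {pet_acf:.0f}")
print("SPECT ACFs vs emission depth:", np.round(spect_acf, 1))
```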
TRANSMISSIONLESS METHODS
In some cases, attenuation maps can be generated without adding a separate transmission scan to the emission acquisition. Algorithms in this class of methods either assume a known body contour in which a (uniform) distribution of attenuation coefficients is assigned or try to derive the attenuation map directly from the measured emission data. Only methods widely used in clinical routine and implemented on commercial systems will be described in this section. Other sophisticated and computer-intensive approaches exist for generating an attenuation map without a separate transmission measurement. These include methods that apply consistency conditions and statistical modeling for simultaneous estimation of emission and transmission distributions, which will only be described on a conceptual basis in this article.
Calculated Methods
The assumption of uniform attenuation is straightforward in imaging the brain and abdominal areas where soft tissues are the dominant constituent, as opposed to regions such as the thorax, which is more heterogeneous. In these regions, if the body contour can be determined from the emission data, then the region within the contour can be assigned a uniform linear attenuation coefficient value corresponding to that of water or soft tissue to generate the corresponding attenuation map. The body contour can be determined either manually or with automatic edge-detection methods.
Manual Contour Delineation.
The simplest manual method approximates the object outline by an ellipse drawn around the edges of the object. Uniform attenuation is then assigned within the contour to generate the attenuation map. An irregular contour can also be drawn manually by an experienced technologist. The method is generally appropriate only for brain studies and is implemented on virtually all commercial SPECT and PET systems (20). Although empiric, the method has some attractive properties: it is quick and easy to use and increases patient throughput, which is a relevant issue in a busy clinical department.
Automatic Edge-Detection Methods.
A variation of the calculated attenuation correction is an automated technique that traces the edge of the object in projection space using an appropriate edge-detection algorithm. This allows the attenuation map to form any convex shape, with the advantage that automated edge detection reduces the burden on the operator. In addition, lung regions can sometimes be delineated from the emission data, in which case a more accurate attenuation map can be defined. Algorithms proposed for estimating the patient contour include those that define the contour (a) on the basis of the acquisition of additional data in the Compton scatter window (21–23), (b) directly from the photopeak data only (24–27), or (c) by segmentation of the body and lung regions using either an external wrap soaked in 99mTc (28) or both scatter and photopeak window emission images (29). Other methods use a set of standard outline images (30) to define the shape of the attenuation map. Assigning known attenuation coefficients to the soft-tissue and lung regions then forms the attenuation map. Because it is generally difficult to define the patient contour from emission data alone, these transmissionless techniques have had limited clinical application.
In the case of brain imaging, automated methods also allow a certain thickness of higher attenuation material to be added to the calculation to account for the skull. More recently, an automated method was proposed to compute a 3-component attenuation map for brain PET imaging (31). The technique generates an estimated skull image by FBP of the reciprocal of an emission sinogram. The thickness and radius of the skull are then estimated from profiles extracted from the image, and the resulting values are used to generate a model of the brain, skull, and scalp. Appropriate linear attenuation coefficients are then assigned to estimate the attenuation map for the head. More refined methods make use of an optical tracking system to derive a 3-dimensional (3D) patient-specific head contour (32). A previously acquired reference attenuation map is then transformed to match the contour of the reference head to that of the target head using the thin-plate spline technique. A practical advantage of the optical tracking system is that it can also be used for motion correction.
It is generally well accepted that transmission-based nonuniform attenuation correction can supply more accurate attenuation maps than transmissionless techniques. However, it is not entirely clear whether nonuniform attenuation maps provide specific benefits in the routine clinical practice of tomographic brain imaging. Comparisons made by independent observers have shown no significant differences in subjective quality between images reconstructed with uniform and nonuniform attenuation maps (33). Hooper et al. (34) have shown, using clinical PET data, that calculated attenuation correction (24) gave rise to appreciable bias in structures near thick bone or sinuses when compared with the clinical gold standard (transmission-based attenuation correction). Licho et al. (35) reported that uniform attenuation-corrected studies provided unreliable regional estimates of tracer activity; that study identified estimation of the attenuation map from a segmented reconstruction of a lower energy Compton scatter window image as the next most accurate clinical method, which can be used reliably when transmission scanning is not available. In contrast, semiquantitative analysis of images reconstructed using transmissionless attenuation maps produced 99mTc-ethylcysteinate dimer uptake values in healthy volunteers that are very similar to those obtained with a transmission-based method (36). However, special attention should be paid to the choice of the optimal effective broad-beam attenuation coefficient (μeff) to use when combining attenuation and scatter corrections (37) for reconstruction of emission data that have been perturbed by scatter and attenuation in the human skull. The attenuation of the skull has been evaluated by many investigators (36,38,39), all suggesting the use of a lower value of μeff than for a uniform soft-tissue medium. The choice of the optimal value of the linear attenuation coefficient was addressed in an elegant article by Kemp et al. (38), in which the use of effective bone and tissue attenuation coefficients to compensate 99mTc-hexamethylpropyleneamine oxime brain SPECT resulted in images of improved uniformity and increased count density. In another study using an anthropomorphic phantom, the best choice of the effective linear attenuation coefficient was found to be slice dependent and reliant on the skull thickness and on the methods used for attenuation and scatter corrections (40). Van Laere et al. (41) used an attenuation coefficient of 0.105 cm−1 determined from experimental studies using the 3D Hoffman brain phantom and 0.09 cm−1 for clinical studies (36), indicating that results obtained in phantom studies cannot be extrapolated directly to human data. The deviation from the theoretic value of 0.15 cm−1 for 99mTc may, in all cases, be explained by nonoptimal scatter corrections.
Statistical Modeling for Simultaneous Reconstruction of Transmission and Emission Distributions
Another approach, which is receiving considerable attention, is to compute the attenuation map directly from the emission data, eliminating the transmission scan from the acquisition protocol. The problem of transmissionless image reconstruction in emission tomography has a long history, starting with the pioneering work of Censor et al. (42), in which alternating iterations of the reconstruction algorithm were used to reconstruct emission tomograms and attenuation maps from a set of emission projections alone. Many researchers have since applied similar philosophies, using various optimization techniques, to generate emission tomograms and attenuation maps. For instance, Nuyts et al. (43) formulated the problem as an optimization task in which the objective function is a combination of the likelihood and an a priori probability. The latter uses a Gibbs prior distribution to encourage local smoothness and a multimodal distribution for the attenuation coefficients. Other methods include the use of the EM algorithm, as was done by Krol et al. (44), or penalty functions (45,46). The techniques have had limited success and often produce artifacts in the form of cross-talk between the emission image and the attenuation map.
Methods Based on Consistency Conditions Criteria
Continuous Consistency Conditions.
Other transmissionless reconstruction methods attempt to avoid cross-talk between the emission image and the attenuation map by reconstructing the two independently. In SPECT, the simplest way of doing this is to apply an approximate linear relation that exists between the attenuation map and the emission data measured at opposite detector positions (47). However, this approximation assumes that the object has relatively low attenuation, which restricts possible applications. A more general technique applies the consistency conditions for the range of the attenuated Radon transform to obtain the attenuation map from SPECT data (48). The first consistency condition is that, in an exact Radon transform, the total counts in each projection are equal for all projection angles. Higher order consistency conditions relate to the equality of higher order moments of the projections. Essentially, the consistency conditions are given in the form of a functional that equals zero on the range of the operator of the problem. This enables one to identify the operator, provided that a function from its range is available. In particular, the attenuation map, which is a parameter of the imaging operator, can be found. An advantage of this approach is that no information about the unknown activity image is required and no attempt to reconstruct it is made. Certain difficulties appear if the measured data are not in the range of the identified operator because of noise, discretization errors, and other physical factors. The problem also is ill posed, causing instability in the solution unless regularization procedures are applied. In some cases, reducing the dimensions or the complexity of the attenuation map can reduce the complexity of the problem. As has been shown recently, this enables one to find the uniform elliptic attenuation distribution that is most consistent with the emission projections (48–51).
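The zeroth-order condition is easy to demonstrate numerically: without attenuation, every parallel projection of the same object must contain the same total counts, and the measured deviation from this equality is the signal such methods exploit when estimating the attenuation map. A toy check, assuming scipy is available:

```python
import numpy as np
from scipy.ndimage import rotate

n = 64
y, x = np.mgrid[:n, :n] - n / 2
f = ((x + 8) ** 2 + y ** 2 < 15 ** 2).astype(float)   # off-centre disk phantom

for angle in (0.0, 30.0, 75.0):
    # Rotating the object and summing columns gives a parallel projection.
    proj = rotate(f, angle, reshape=False, order=1).sum(axis=0)
    print(f"angle {angle:5.1f} deg: total counts = {proj.sum():.1f}")
# The totals agree (up to interpolation error). Attenuation breaks this
# equality, and consistency-based methods search for the attenuation map
# whose correction best restores it.
```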
Discrete Consistency Conditions.
Implementations based on previously used continuous conditions have shown that reconstructions did not converge to an acceptable solution (52). Bronnikov (53) recently suggested an original approach that strengthens the paradigm of the consistency conditions by setting them in the framework of a discrete representation of the problem. Such an approach ensures a natural regularization of the problem, allowing one to use the well-known method of Tikhonov regularization. Moreover, one of the main advantages of this method is that it can easily be applied in various scanning configurations, including fully 3D data acquisition protocols (54). This method has a stable numeric implementation and can avoid cross-talk between the attenuation map and the source distribution. A computationally efficient algorithm was implemented by using the QR and Cholesky decompositions.
TRANSMISSION METHODS
In clinical and research applications in which the attenuation coefficient distribution is not known a priori, and for areas of inhomogeneous attenuation such as the chest, more accurate methods are required to generate the attenuation map. These include transmission scanning (19), segmented MRI data (55,56), and appropriately scaled CT scans acquired either independently on separate scanners (57–60) or simultaneously on multimodality imaging systems (61–64). These methods are described in more detail in the following sections.
Radionuclide Transmission Scanning
As reported by Bailey (19), the use of transmission scanning with an external radionuclide source dates back to the pioneering work of Mayneord (65) in the 1950s. A more refined approach for acquiring transmission data for use in conjunction with conventional emission scanning was implemented by Kuhl et al. (66) in 1966. Transmission scanning is commonly available on commercial SPECT and PET systems, allowing it to be performed in clinical departments on a routine basis, especially when it is combined with simultaneous emission scanning. In a clinical environment, the most widely used attenuation correction techniques use transmission data acquired before (preinjection) (33,67), during (simultaneous) (68,69), or after (postinjection) (34,70) the emission scan. Interleaving emission and transmission scanning has proven to be very practical in oncology studies in which multiple bed positions are needed. Sequential emission-transmission scanning is technically easier to perform than simultaneous scanning, but it increases the imaging time and suffers from image registration problems caused by patient misalignment or motion. Simultaneous acquisition requires no additional time for the emission and transmission measurements, which is important for routine clinical studies; however, errors may be introduced by cross-talk between the transmission and emission data. It has been shown that attenuation coefficients and activity concentrations are not significantly different when estimated with sequential and simultaneous emission-transmission imaging (68). Because the reconstructed attenuation coefficients are energy dependent, they must be transformed to the coefficients at the appropriate emission energy using suitable techniques. The accuracy of the transmission and emission maps produced using different transmission-emission source combinations has been the subject of a long debate (41,71). In addition, various approaches have been proposed to eliminate contamination of emission data by transmission photons and to reduce spillover of emission data into the transmission energy window (71–73). Several transmission scanning geometries have emerged for clinical implementation for SPECT (19,74), hybrid SPECT/PET (75), and dedicated PET (19,76), as illustrated in Figures 7–9. The following sections describe the different transmission sources and data acquisition geometries that have been proposed so far.
SPECT.
Radionuclide transmission-based methods in SPECT include both sequential and simultaneous scanning using external 57Co, 99mTc, 133Ba, 139Ce, 153Gd, 201Tl, or 241Am sources. Early designs of transmission systems for SPECT cameras used uncollimated flood or sheet sources. The main advantage of this configuration is that the source fully irradiates the opposite head and, therefore, requires no motion of the source other than that provided by the rotation of the camera gantry. These geometries also have drawbacks associated with the high proportion of scattered photons in the transmission data due to the broad-beam imaging conditions. As a result, the attenuation map estimates an effective linear attenuation coefficient rather than the value that would be calculated from narrow-beam geometry. This difficulty can be overcome in part by collimating the transmission source (77) to produce a geometry that more accurately represents a narrow-beam transmission geometry.
Traditionally, SPECT systems used either 99mTc or 201Tl transmission sources, which produce accurate attenuation maps for these respective emission radionuclides (41,78). More recently, transmission data acquired with 153Gd or 133Ba external sources or with a rotating x-ray tube have been used to compute the attenuation map. The main radionuclide-based configurations include (a) a stationary line source fixed at the collimator's focus with convergent collimation on a triple-detector system (68), (b) scanning line sources with parallel-hole collimation (79), (c) a multiple line source array with parallel-hole collimation (80), (d) scanning point sources using either fanbeam or offset fanbeam geometry (81), or (e) asymmetric fanbeam geometry acquired by using a high-energy source that emits transmission photons capable of penetrating the septa of a parallel-hole collimator (82). Published reviews can be consulted for a more detailed survey of these geometries (9,19,74). The most widely implemented configuration for commercial transmission acquisition is the scanning line source geometry. However, each configuration has its unique advantages and drawbacks, and camera manufacturers are still optimizing the apparatus used to acquire the transmission data.
Hybrid SPECT/PET.
Several schemes have been proposed to perform transmission scanning on coincidence gamma-camera systems (75). Laymon et al. (83) developed a 137Cs-based transmission system for a dual-head coincidence camera that can be used for transmission scanning after injection. Commercial SPECT/PET systems use single-photon-emitting 133Ba (t1/2 = 10.5 y; Eγ = 356 keV) point sources (82) or 153Gd (t1/2 = 240 d; Eγ = 97–101 keV) line sources (80). As noted above, 153Gd is commonly used as a transmission source for SPECT. In addition, transmission data obtained using 153Gd can be scaled to provide an attenuation map for coincidence imaging of positron emitters. In comparison, 133Ba has a long half-life and does not have to be replaced, as do shorter-lived radionuclide transmission sources such as 153Gd, 57Co, 99mTc, or 201Tl. Furthermore, 133Ba has the potential advantage that its photon energy (356 keV) lies between those encountered for emission imaging in PET (511 keV) and in SPECT (e.g., 201Tl, 99mTc, 111In, 123I, 131I). Therefore, it may be more suitable for obtaining transmission data for SPECT than annihilation photons from a positron source used with PET, but it may be more difficult to shield than other single-photon transmission sources commonly used with SPECT. It is important to note that any potential advantages or disadvantages of 133Ba as a transmission source for hybrid SPECT/PET cameras have not yet been demonstrated. Finally, it also is possible to use an x-ray tube as a transmission source in hybrid SPECT/PET systems, such as that implemented with the Discovery VH dual-head camera (General Electric Medical Systems, Milwaukee, WI). The use of an x-ray tube offers the advantages of higher photon fluence rates and faster transmission scans, with anatomic imaging and localization capability that cannot be obtained using radionuclide transmission sources. However, the use of the x-ray tube also requires a separate x-ray detector, because its photon fluence rate far exceeds the counting-rate capabilities of current scintillation camera technology and, as a point source, it is not compatible with the parallel-hole collimators required for SPECT.
PET.
The early PET scanners used transmission ring sources of the positron-emitting radionuclides 68Ga/68Ge (t1/2 = 68 min and 270.8 d, respectively), which coexist in secular equilibrium. In this case, annihilation photons are acquired in coincidence mode between the detector adjacent to the annihilation event and the detector in the opposing fan, which records the second annihilation photon after it has passed through the patient. This design was later modified by replacing the ring sources with continuously rotating rod sources. Obviously, the detector block close to the rod source receives a high photon flux rate, making detector dead time a major limitation of this approach (19). The problem can be relieved by windowing the transmission data so that only events collinear with the known location of the rod are accepted, and scanner manufacturers have adopted this windowing as a standard approach for several years. More recently, some manufacturers have implemented transmission scanning using single-photon sources such as 137Cs (t1/2 = 30.2 y; Eγ = 662 keV). Transmission data acquired with an external single-photon source can be recorded at higher counting rates because of the decreased detector dead time. In addition, a 137Cs transmission source produces a higher energy photon and, therefore, improves object penetration (76) in comparison with a positron-emitting transmission source. Because the 662-keV photons from 137Cs are less attenuated than the annihilation photons from the PET emission source, the attenuation map generated from transmission data acquired with a 137Cs source must be corrected to account for the difference in photon attenuation between the emission and transmission energies.
In recent volumetric PET systems such as the ECAT ART (CTI PET Systems, Knoxville, TN) that operate exclusively in 3D mode, attenuation correction factors are measured with 2 single-photon collimated point sources of 137Cs capable of producing high-quality scatter-free data with this continuously rotating partial-ring PET tomograph (84). This allows transmission data to be acquired with improved counting statistics while drastically diminishing the acquisition time. This scanner is designed around single-photon transmission sources having 2 sets of 12 slits with an aperture ratio of 15:1 and an axial pitch equal to twice the pitch of the axial crystal ring (67). A simple mechanism has been devised to produce a coincidence event between the detector, which records the transmitted single photon, and the detector in the opposing fan near the current location of the single-photon source. More recently, a simultaneous emission-transmission scanning system has been developed that reduces contamination of the emission data by the emitted transmission photons using a fast, dedicated, lutetium oxyorthosilicate (LSO)-based reference detector placed close to the collimated coincidence point source used to produce the transmission data (69).
Segmentation of Transmission Data.
Noise from the transmission scan will propagate through the reconstruction process, affecting the quality of the reconstructed images. To minimize this effect, long transmission scans are normally acquired to ensure good statistics at the expense of patient throughput, especially in the case of whole-body scanning with low-sensitivity tomographic systems. Alternatively, image segmentation can be applied to delineate different anatomic regions (e.g., lung vs. soft tissue) in the attenuation map. The known attenuation coefficients of these tissues then can be applied to the segmented regions to minimize noise in the resulting attenuation map, with the goal of reducing noise in the associated attenuation-corrected emission tomogram. During the last decade, techniques using transmission image segmentation and tissue classification have been proposed to minimize the acquisition time (<3 min) and increase the accuracy of the attenuation correction process, while preserving or even reducing the noise level. The reconstructed transmission image pixels are segmented into populations of uniform attenuation. The classified transmission images are then forward projected to generate new transmission sinograms to be used for attenuation correction of the corresponding emission data. This reduces the noise on the correction maps while still correcting for specific areas of differing attenuation such as the lungs, soft tissue, and bone.
In a clinical setting, segmentation algorithms must be designed to balance image quality against the computational time of the emission tomograms. The majority of segmentation methods used for attenuation correction fall into 1 of the following 2 classes: histogram-based thresholding techniques (85,86) and fuzzy clustering-based segmentation techniques (87,88). Threshold approaches use the gray-level histogram counts to distinguish between regions. However, if the geometry of the attenuation map is based solely on the characteristics of the histogram, the technique is most likely to fail in regions where the total number of counts is small (e.g., the skull). Therefore, the performance of these techniques strongly depends on the choice of the thresholds. In comparison, fuzzy clustering-based segmentation techniques have demonstrated excellent performance as an automated, unsupervised tool for segmenting noisy images in a robust manner. They are iterative procedures that minimize an objective function. As an output, a membership degree is assigned to every voxel with respect to each cluster center. The number of clusters is generally passed as an input parameter; to automate the process, a cluster validity index can be used to select the optimal number of clusters (89). A representative slice of a clinical study at the level of the thorax is shown in Figure 10, illustrating the original reconstructed image, the segmented image after the labeling process, and, finally, the map obtained after assigning the tissue-dependent attenuation coefficients using weighted averaging. The improvement in image quality of emission data reconstructions when using fuzzy clustering-based segmented attenuation correction rather than measured attenuation correction is further illustrated on coronal slices of a patient study in Figure 11.
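For concreteness, a bare-bones fuzzy c-means on voxel intensities is sketched below. Published implementations (87–89) add spatial information and validity-index-based cluster selection, but the core is this alternating update of centres and memberships; all names and parameter values here are illustrative.

```python
import numpy as np

def fuzzy_cmeans(values: np.ndarray, n_clusters: int = 3, m: float = 2.0,
                 n_iter: int = 100, seed: int = 0):
    """Plain fuzzy c-means on a 1D array of voxel intensities.

    Returns cluster centres and the (voxels x clusters) membership matrix.
    """
    rng = np.random.default_rng(seed)
    u = rng.random((values.size, n_clusters))
    u /= u.sum(axis=1, keepdims=True)                 # memberships sum to 1
    for _ in range(n_iter):
        um = u ** m
        centres = (um.T @ values) / um.sum(axis=0)    # fuzzy-weighted centroids
        d = np.abs(values[:, None] - centres[None, :]) + 1e-12
        w = d ** (-2.0 / (m - 1.0))
        u = w / w.sum(axis=1, keepdims=True)          # standard FCM membership update
    return centres, u

# Usage sketch (hypothetical names): segment a transmission image into
# air / lung / soft tissue, then overwrite each voxel with the known
# attenuation coefficient of its most probable class before forward
# projecting the classified map.
# centres, u = fuzzy_cmeans(trans_image.ravel()); labels = u.argmax(axis=1)
```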
Other interesting approaches to segment noisy transmission data include the use of active contour models (90), neural networks (91), morphologic segmentation (92), and hidden Markov modeling (93). An alternative to segmentation of transmission images with the goal of reducing noise in PET transmission measurements includes Bayesian image reconstruction (94) and nonlinear filtering (95).
X-Ray Transmission Scanning
Attenuation maps generated for attenuation correction of the radionuclide image have traditionally been obtained using an external radionuclide source. This process is conceptually identical to generating a CT image with an x-ray tube that transmits radiation through the body, with the transmitted intensity recorded by an array of detector elements. The transmission data can then be reconstructed using a tomographic algorithm that inherently calculates the attenuation coefficient at each point in the reconstructed slice. In clinical use, the CT image is represented in terms of normalized CT numbers or Hounsfield units, named after Godfrey Hounsfield, one of the early pioneers of CT. In either representation, the CT image contains pixel values that are related to the linear attenuation coefficient (μ) at that point in the patient, calculated for the mean energy of the x-ray photons used to generate the CT image.
Therefore, it is not surprising that CT can generate a patient-specific attenuation map for correcting the radionuclide image (either SPECT or PET) for photon attenuation. In a clinical setting, this can be performed by scanning the patient in the CT scanner and then moving the patient to a PET or SPECT system for acquisition of the radionuclide data. A challenging problem arises because the radionuclide data must be registered spatially with the CT-derived attenuation map before reconstruction. The registration can be performed by acquiring both the x-ray and the radionuclide images of the patient with fiducial markers that can be identified to align the images using off-line image registration software (96). These image registration techniques have been relatively successful for imaging the head, and to a lesser degree for imaging the pelvis, both of which are constrained geometrically by the skeleton. However, off-line image registration is more difficult to perform accurately in the thorax, abdomen, and head and neck regions, where the body can flex and bend, making it difficult to maintain a consistent anatomic configuration when the x-ray and radionuclide data are acquired during separate imaging sessions on different scanners.
Recently, dual-modality imaging systems have been developed that incorporate radionuclide imaging (PET or SPECT) with CT in a single system (61–64). The dual-modality systems have an integrated patient table that allows both the x-ray and the radionuclide images to be acquired without removing the patient from the system. This allows the x-ray and radionuclide images to be acquired with a consistent patient geometry and with a minimal level of patient movement, thereby facilitating the coregistration of the CT image with the PET or SPECT image. The ability of the dual-modality imaging systems to facilitate image fusion is seen as an important advance, especially for anatomic localization of radiopharmaceutical uptake for tumor staging and for treatment planning in oncologic studies. In addition, the CT image obtained with a dual-modality system can be used to generate a patient-specific attenuation map for attenuation correction of the radionuclide data. Figure 12 shows an example of SPECT/CT image fusion, revealing localization of 131I-metaiodobenzylguanidine (131I-MIBG) in an involved lymph node in a patient’s left axilla together with correlated slices through a central region for 3 reconstruction methods: FBP, ML-EM, and ML-EM with collimator compensation using a distance-dependent model in an attempt to recover resolution loss due to collimator blurring.
As noted above, CT inherently provides a patient-specific measurement of the linear attenuation coefficient at each point in the image. However, the linear attenuation coefficient measured with CT is calculated at the x-ray energy rather than at the energy of the photon emitted by the radiopharmaceutical acquired during the radionuclide imaging study. It is therefore necessary to convert the linear attenuation coefficients obtained from the CT scan to those corresponding to the energy of the emission photons used for the radionuclide imaging study (97). Researchers developing PET/CT and SPECT/CT systems have developed similar methods for calibrating the CT image for attenuation correction of the emission data.
PET.
Kinahan et al. (59) have implemented a technique using CT for attenuation correction of PET scans. The basis for their technique lies in the observation that Compton scattering is the dominant interaction at the 511-keV photon energy of PET agents, whereas both the photoelectric effect and Compton scattering contribute to photon attenuation in biologic tissues at the mean energy of the x-ray beam (approximately 80 keV). The energy dependence of the attenuation coefficient is quantified by calculating scaling factors that convert the linear attenuation coefficient at the x-ray energy to that at the 511-keV energy of annihilation photons. The photoelectric effect and Compton scattering have different contributions in bone versus soft tissue at typical x-ray energies used to acquire the CT image. Correspondingly, different scaling factors are used for soft-tissue regions and for bony regions to convert the CT image to an attenuation map at 511 keV. The scaling factor represents the linear attenuation coefficient of water (or bone) at 511 keV divided by that at the CT energy. Each CT slice is segmented by thresholding to delineate regions corresponding to soft tissue (including lung tissue) and to bone. The pixel values in each region of the CT image are then multiplied by the corresponding scaling factor to calculate attenuation coefficients at the 511-keV energy of the PET agent. The resulting attenuation map can then be incorporated into the reconstruction algorithm to correct the radionuclide data for photon attenuation. Kinahan et al. (59) have demonstrated that this technique produces accurate values for radionuclide data obtained with a combined PET/CT scanner.
More recently, PET attenuation maps generated using 68Ge transmission scans and resolution-matched CT images were compared in 14 patients (98). The CT-derived attenuation map is based on the transformation implemented in the Discovery LS PET/CT scanner (General Electric Medical Systems), which uses a bilinear function based on the narrow-beam attenuation of water and cortical bone at the CT (80 keV) and PET (511 keV) energies. The bias in the predicted attenuation coefficients for soft tissue was reduced by replacing the theoretic coefficient at 511 keV (0.096 cm−1) in the transformation with the experimental value obtained from transmission scans (0.093 cm−1).
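A sketch of a bilinear conversion of this kind is given below. The water coefficient at 511 keV is the theoretic value quoted above; the cortical-bone coefficient and break-point location are representative textbook numbers, not the vendor's calibration.

```python
import numpy as np

def hu_to_mu_511(hu: np.ndarray,
                 mu_water: float = 0.096,   # water at 511 keV (cm^-1)
                 mu_bone: float = 0.17,     # cortical bone at 511 keV (cm^-1, approx.)
                 hu_bone: float = 1000.0    # assumed CT number of cortical bone
                 ) -> np.ndarray:
    """Bilinear CT-number-to-mu conversion (illustrative constants only).

    Below 0 HU, voxels are treated as air/water mixtures; above 0 HU, as
    water/bone mixtures, which follow a shallower slope because bone
    attenuates disproportionately strongly at CT energies.
    """
    hu = np.asarray(hu, dtype=float)
    soft = mu_water * (hu + 1000.0) / 1000.0            # mu = 0 at -1000 HU (air)
    bone = mu_water + hu * (mu_bone - mu_water) / hu_bone
    return np.where(hu <= 0.0, soft, bone)
```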
SPECT.
One method for calculating an attenuation map for SPECT from CT data, developed by Nickoloff et al. (60), experimentally determines the effective energy of a particular CT scanner. It has been shown, however, that the method is too simplistic if an iodine contrast agent is used in the CT scan and the SPECT image is obtained immediately after the CT scan. In this case, the iodinated regions and the bone regions in the image should be separated and scaled differently because these 2 regions are characterized by different slopes in the linear relationship between the linear attenuation coefficient and the CT number (99). For example, results presented by Tang et al. (99) from a calibration experiment for their scanner at 140 kVp show that the slopes differ by a factor of 10 when converting from the CT number to the linear attenuation coefficients for the 364-keV γ-rays of 131I. Their final relationship is also piecewise linear, with a change in slope at the CT number corresponding to water (i.e., where the Hounsfield number equals zero). Moreover, the slope for the higher CT numbers is larger if the region contains bone than if it contains soft tissue and iodinated contrast media.
Another method of generating a patient-specific attenuation map from correlated CT scans was developed by Blankespoor et al. (97) and has been confirmed by LaCroix et al. (100) using computer simulation. The calibration method is based on techniques developed for bone-mineral density studies with quantitative CT. The calibration procedure begins by acquiring CT scans of a phantom containing tissue-equivalent calibration materials having known chemical compositions. Regions of interest (ROIs) are then defined for each compartment containing a calibration material, allowing the user to determine the mean CT number produced for each specific material. A calibration curve is then generated in which the measured CT number is plotted against the known attenuation coefficient at the photon energy of the radionuclide used in the emission study. The resulting calibration curve is piecewise linear and covers the range of linear attenuation coefficients commonly encountered in the body. CT values below that of soft tissue (i.e., water) fall on a segment whose slope corresponds to mixtures of soft tissue and air (e.g., those encountered in lung), whereas values above that of water fall on a segment whose slope corresponds to mixtures of soft tissue and bone. The resulting calibration curve can then be used to convert CT values obtained from patient scans to their equivalent linear attenuation coefficients at the desired radionuclide photon energy.
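In code, the resulting calibration reduces to a piecewise-linear lookup, as in the sketch below; the calibration points are invented purely to show the mechanics and would in practice come from the phantom measurement described above.

```python
import numpy as np

# Hypothetical calibration points: mean CT number measured in each phantom ROI,
# paired with the known mu of that material at the emission energy (140 keV).
ct_numbers = np.array([-1000.0, -750.0, 0.0, 900.0])   # air, lung, water, bone insert
mu_140keV  = np.array([  0.00,   0.04, 0.15, 0.28])    # cm^-1, illustrative values

def hu_to_mu(hu):
    """Piecewise-linear conversion from CT number to mu at the emission energy."""
    return np.interp(hu, ct_numbers, mu_140keV)

print(hu_to_mu(np.array([-500.0, 40.0, 400.0])))       # lung-like, soft tissue, bone mix
```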
Both of the methods described above use CT scans acquired at a specific accelerating voltage of the x-ray tube. Therefore, calibration factors must be calculated separately for each kVp setting at which the CT scanner is operated. In addition, the x-ray beam used in CT is polyenergetic, rather than monoenergetic as is the case for radionuclide photons. The polyenergetic nature of CT makes it subject to beam-hardening artifacts caused by the preferential absorption of lower energy photons as they pass through the patient’s body. As a result, the mean energy of the x-ray beam is higher in thick patient regions than in thin ones, and the linear attenuation coefficient calculated for thick body regions is correspondingly lower than that in thin regions. This can cause “cupping” artifacts, which are generally corrected by calibration procedures implemented as part of the standard reconstruction software of commercial CT scanners.
It is possible to correct for cupping artifacts and other quantitative errors in CT by using dual-energy x-ray methods in which 2 CT scans of the same region are acquired at different x-ray energies. The 2 CT scans can then be combined to generate accurate attenuation coefficients at any desired photon energy. Dual-energy imaging methods theoretically improve accuracy in comparison with conventional CT. However, they rely on subtractive techniques and deliver a higher patient radiation dose. Overall, accurate attenuation correction can be achieved with attenuation maps generated by conventional CT techniques, and the additional radiation dose required by dual-energy x-ray techniques probably does not justify the slight improvement in accuracy that they offer. However, Guy et al. (101) have developed a dual-energy method in which CT scans from alternating slices of the patient are acquired at different energies. The data can then be interpolated and combined to obtain accurate attenuation values without the additional radiation dose delivered when 2 complete sets of CT scans are acquired at different x-ray energies.
The development of calibration methods that convert the CT scan to units of linear attenuation coefficients at the energy of the radionuclide photon allows a single CT scan to be used for attenuation correction of studies performed with a variety of radiopharmaceuticals. Calibration techniques using CT have been demonstrated for generating attenuation maps for attenuation correction of radionuclide images acquired using 99mTc, 131I, and 18F. In addition, CT can be used to generate an attenuation map for radionuclides such as 111In that emit 2 photons of different energies in a single decay. Wong et al. (102) have developed a method in which the CT image is calibrated for the mean value of the actual photon energies from the radionuclide. The attenuation map and, hence, the attenuation-corrected reconstruction can therefore be applied to the pooled radionuclide data rather than requiring separate reconstructions for each photon energy of the radionuclide.
There are, of course, other issues that must be considered in using CT to generate attenuation maps for correction of the radionuclide data. First, CT fundamentally has a higher spatial resolution and is reconstructed in a finer image matrix than either PET or SPECT. Typically, the 512 × 512 CT image is down-sampled or averaged to the same image format (e.g., 64 × 64, 128 × 128, 256 × 256) as that used for reconstruction of the radionuclide tomogram; a simple block-averaging scheme is sketched below. In the case of the Discovery VH SPECT/CT scanner from General Electric, the CT scanner is designed with a lower spatial resolution than conventional CT scanners in a way that is suitable for attenuation correction and anatomic mapping of the radionuclide data. Finally, it is worth noting that CT offers practical advantages over transmission imaging with a radionuclide source for generating an attenuation map. First, the photon fluence rate from the x-ray tube is several orders of magnitude higher than that obtainable from a radionuclide transmission source. This allows the transmission data to be acquired faster and with a higher statistical quality than radionuclide transmission scanning. Second, the higher fluence rate also allows the transmission data to be acquired after the patient is injected with the radiopharmaceutical, without the errors caused by cross-contamination from emission photons that can occur when the transmission data are acquired using an external radionuclide source. Finally, the x-ray source does not decay, is more stable, does not need frequent replacement, and produces transmission data of higher quality than radionuclide transmission sources. Arguably, these factors make transmission imaging with an x-ray source less expensive and more efficient than transmission imaging with external radionuclide sources.
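The down-sampling step mentioned above can be as simple as block averaging, as in the following Python sketch (matrix sizes are assumed to divide evenly):

    import numpy as np

    def block_average(ct_slice, out_size=128):
        """Down-sample a square CT slice (e.g., 512 x 512) to the emission
        matrix size by averaging non-overlapping blocks."""
        f = ct_slice.shape[0] // out_size      # e.g., 512 // 128 = 4
        return ct_slice.reshape(out_size, f, out_size, f).mean(axis=(1, 3))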
Segmented MRI
Although many PET/MR image coregistration and fusion algorithms have been described in the literature (96), very few studies have addressed the use of segmented MR data to construct an attenuation map for attenuation correction in emission tomography (56). For imaging the brain, the simplest method segments the MR image by thresholding to create a mask that delineates the skull and all other tissues but excludes hollow spaces such as the sinuses; every voxel in the mask is then assigned the attenuation coefficient of water (0.096 cm−1). A more robust approach based on registered 3D T1-weighted MR images has been proposed recently (55). These images were realigned to preliminary reconstructions of the PET data and then segmented with a fuzzy clustering technique that identifies tissues of significantly different density and composition. The voxels are thereby classified into bone, brain tissue, and sinus cavities and assigned theoretic tissue-dependent attenuation coefficients as given in ICRU Report 44 (6). An equivalent method, called inferring-attenuation distributions (IAD), which replaces the patient-specific MR images with a coregistered digitized head atlas derived from a high-resolution MRI-based voxel head model (103), has been proposed by Stodilka et al. (104) for brain SPECT and was later extended to brain PET (105).
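For the simplest (uniform) method described above, a minimal Python sketch reads as follows; the threshold is an assumed, scanner-dependent parameter, and the function name is ours:

    import numpy as np

    def uniform_mu_map(mr_volume, threshold, mu_water=0.096):
        """Threshold the MR volume to a head mask (hollow spaces such as the
        sinuses fall below threshold and are excluded) and assign the
        attenuation coefficient of water (cm^-1) to every voxel in the mask."""
        return np.where(mr_volume > threshold, mu_water, 0.0)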
The availability of public domain image registration and segmentation software dedicated to brain imaging (e.g., the Statistical Parametric Mapping package [Wellcome Department of Cognitive Neurology, University College London, London, U.K.]) facilitates clinical implementation of MR-based attenuation correction. The recent interest in simultaneous multimodality PET/MRI may motivate future applications of this method to other organs, where accurate registration is more difficult to achieve. For example, a prototype small animal PET scanner with a 3.8-cm-diameter ring of LSO detector blocks coupled to 3 multichannel photomultiplier tubes via optical fibers (106) has been designed so that the PET detector can operate within a conventional MRI system. The authors reported no appreciable artifacts caused by the scintillators in the MR images. A second, larger (11.2 cm) prototype is being developed for simultaneous PET/MRI of mice and rats at different magnetic field strengths (107). Although the challenges and costs of these devices are substantial, the future potential of multimodality imaging appears to be bright.
A clinical brain PET study reconstructed without attenuation correction is shown in Figure 13. The uncorrected image tends to depress the reconstructed activity at the center of the brain. Figure 14 illustrates reconstructed functional brain PET images of the same study corrected for attenuation using both uniform attenuation maps based on manual and automatic contours and nonuniform attenuation maps, including transmission scanning (33), segmented transmission (86), coregistered segmented MRI (55), and the IAD method (104). This latter method was implemented as described by Stodilka et al. (104) without any modifications (e.g., without adding the bed to the final images) (105). From a purely qualitative analysis, the merits of the more exact methods based on realistic nonuniform attenuation maps are obvious: they produce fewer visible artifacts, whereas the approximate methods tend to produce a high level of activity along the edge of the image because the automatic edge-detection method overestimates the head contour on the external slices. On the other hand, a quantitative volume-of-interest (VOI)-based analysis of 10 patient datasets revealed statistically significant differences in performance among the correction techniques when compared with the gold standard (transmission scanning) (105).
FURTHER APPLICATIONS OF THE ATTENUATION MAP
The primary motivation for determining the attenuation map in emission tomography has been to perform accurate attenuation correction. However, the attenuation data also can be useful for many other tasks, including transmission-based scatter modeling, motion detection and correction, introducing a priori anatomic information into reconstruction of the emission data, partial-volume correction, and image fusion of anatomic and functional data to facilitate ROI definition.
Scatter Correction
Traditionally, scatter correction techniques in emission tomography have estimated scatter distributions from measurements in energy windows placed adjacent to the photopeak window used to acquire the primary emission data. However, the expanding diagnostic and therapeutic applications of quantitative emission tomography have motivated the development of scatter correction techniques that incorporate patient-specific attenuation maps and the physics of interaction and detection of emitted photons to estimate the scatter distribution accurately (108). Transmission-based scatter correction methods use an attenuation map to define the inhomogeneous properties of the scattering object and derive a distribution of scattered events using line integrals calculated as part of the attenuation correction method. Algorithms belonging to this class have been successfully applied in both SPECT (108–110) and PET (111,112). One method modifies the convolution kernels used to estimate the distribution of scattered events on the basis of a CT-derived (108) or transmission-derived (109) attenuation map; a schematic example is given below. More refined algorithms use a patient-specific attenuation map, an estimate of the emission image, the physics of radiation transport and Compton scattering, and a mathematic model of the scanner to calculate the number of events that have undergone Compton interactions. The probability of a photon being scattered through a given angle and detected in the emission energy window has also been approximated with Monte Carlo-generated gaussian functions (110); although computationally intensive, this model was integrated into a 2D projector/backprojector pair within an ML-EM reconstruction algorithm. The scatter correction software supplied by scanner manufacturers is evolving toward model-based methods in which the attenuation map is a required part of the 3D PET scatter correction procedure (111,112).
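As a schematic illustration of the convolution-based approach mentioned above (not the published kernels; the scatter-fraction scaling k0 and the kernel width are hypothetical parameters of our own), a transmission-dependent scatter estimate might look as follows:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def scatter_estimate(photopeak_proj, mu_path, k0=0.3, kernel_fwhm_pix=8.0):
        """Estimate scatter in a projection by broadening the photopeak data
        with a gaussian kernel and scaling by a scatter fraction assumed to
        grow with the attenuation path length along each ray."""
        scatter_fraction = k0 * (1.0 - np.exp(-mu_path))    # per projection bin
        broad = gaussian_filter(photopeak_proj, sigma=kernel_fwhm_pix / 2.355)
        return scatter_fraction * broad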
Motion Detection and Correction
In many cases, patient motion must be corrected when transmission data are used to generate an attenuation map that is subsequently used to correct radionuclide data for photon attenuation. When patient motion occurs between the transmission and emission scans, anatomic locations in the resulting attenuation map do not match the corresponding points in the emission data. These spatial misregistration errors can cause false hot or cold spots in the reconstructed emission tomogram.
Correction techniques have been implemented for gross patient motion during dynamic or tomographic imaging and for anatomic changes associated with myocardial contraction and respiration. Perhaps the most direct method of compensating for involuntary motion is to record respiratory and electrocardiogram signals during acquisition of the emission and transmission data. Klein et al. (113,114) and Reutter et al. (115) have developed a doubly gated method in which signals from respiratory and cardiac sensors are recorded along with list-mode transmission and emission data during the imaging study. The respiratory signal is used to sort the emission and transmission data into time frames that represent similar respiratory states (114), and the recorded electrocardiogram signal is then used to subdivide the data into frames that represent similar respiratory and cardiac states.
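A minimal Python sketch of such double gating follows; the gate counts and the binning scheme (amplitude quantiles for respiration, fractional R-R phase for the cardiac cycle) are our assumptions, not the cited implementation:

    import numpy as np

    def doubly_gate(event_t, resp_t, resp_amp, r_peaks, n_resp=4, n_card=8):
        """Assign each list-mode event a (respiratory, cardiac) gate index.
        resp_t/resp_amp: sampled respiratory trace; r_peaks: ECG R-wave times."""
        # respiratory gate: amplitude binning on quantiles of the trace
        amp = np.interp(event_t, resp_t, resp_amp)
        edges = np.quantile(resp_amp, np.linspace(0.0, 1.0, n_resp + 1)[1:-1])
        resp_gate = np.digitize(amp, edges)
        # cardiac gate: fraction of the enclosing R-R interval
        i = np.clip(np.searchsorted(r_peaks, event_t) - 1, 0, len(r_peaks) - 2)
        phase = np.clip((event_t - r_peaks[i]) / (r_peaks[i + 1] - r_peaks[i]),
                        0.0, 1.0)
        card_gate = np.minimum((phase * n_card).astype(int), n_card - 1)
        return resp_gate, card_gate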
In comparison with geometric changes, which are cyclic and can be monitored with physiologic signals, gross patient motion can be corrected in the acquired datasets only after the motion is detected and an appropriate mathematic transformation is calculated to compensate for the motion. Bailey et al. (116) implemented a method of monitoring patient motion during dynamic imaging with a scintillation camera using anatomic information extracted from sequentially acquired transmission scans. The size and direction of the misregistration error between 2 corresponding frames were calculated from the centroid of the attenuation image and used to align the corresponding emission frames. Similarly, Andersson et al. (117) used a single set of transmission data for attenuation correction of misaligned emission acquisitions. The 2 emission datasets were reconstructed without attenuation correction and were realigned to determine the transformation mapping of 1 emission dataset onto the other. Smith et al. (118) developed a similar method that applied the spatial transformations to register the emission and transmission projection data before reconstruction. The emission-projected data were corrected for photon attenuation and then tomographically reconstructed in a way that required fewer interpolations of the raw data than methods that apply the registration transformation directly to the reconstructed images.
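A centroid-based realignment of the kind described by Bailey et al. can be sketched in a few lines of Python (translation only; rotations and nonrigid motion are ignored in this sketch, and the function name is ours):

    import numpy as np
    from scipy.ndimage import center_of_mass, shift

    def realign_frame(emission_frame, trans_ref, trans_frame):
        """Estimate the translation between two transmission frames from their
        centroids and apply it to the corresponding emission frame."""
        d = np.array(center_of_mass(trans_ref)) - np.array(center_of_mass(trans_frame))
        return shift(emission_frame, d, order=1)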
Anatomically Guided Image Reconstruction
An undesirable property of the ML-EM algorithm is that large numbers of iterations increase the noise content of the reconstructed images. These noise characteristics can be controlled by iterative algorithms that incorporate a prior distribution describing the statistical properties of the unknown image and thus produce the a posteriori probability distribution of the image conditioned on the data. Bayesian reconstruction methods form a powerful extension of the ML-EM algorithm: maximization of the a posteriori probability over the set of possible images yields the maximum a posteriori (MAP) estimate (119). From the developer’s point of view, a significant advantage of this approach is its modularity: the various components of the prior, such as nonnegativity of the solution, pseudo-Poisson nature of the statistics, local voxel correlations (local smoothness), or known existence of anatomic boundaries, may be added one by one into the estimation process, assessed individually, and used to guarantee a fast working implementation of preliminary versions of the algorithms.
A Bayesian model also can incorporate prior anatomic information derived from a registered CT (120) or MR (121,122) image into the reconstruction of functional emission images. Such methods add a coupling term to the emission reconstruction that favors the formation of edges in the emission data that are correlated with the location of significant anatomic edges in the CT or MR images. This generally is implemented with a Gibbs prior distribution that encourages the reconstructed image to be piecewise smooth. In this way, the development of dual-modality scanners producing both anatomic and functional image data is motivating the investigation and implementation of Bayesian MAP reconstruction techniques; a minimal sketch of a smoothness-prior MAP-EM update follows.
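The sketch below uses the one-step-late approximation with a quadratic smoothness prior on a 1D image with periodic boundaries for brevity; the system matrix A, the hyperparameter beta, and all names are illustrative assumptions:

    import numpy as np

    def map_em_osl(A, y, beta=0.1, n_iter=50):
        """One-step-late MAP-EM with a quadratic smoothness prior.
        A: (n_bins, n_vox) system matrix; y: measured counts."""
        x = np.ones(A.shape[1])
        sens = A.sum(axis=0)                        # backprojection of ones
        for _ in range(n_iter):
            ratio = y / np.maximum(A @ x, 1e-12)    # measured / predicted
            # gradient of U(x) = 0.5 * sum_j (x_j - x_{j-1})^2 at the current x
            grad = 2.0 * x - np.roll(x, 1) - np.roll(x, -1)
            x = x * (A.T @ ratio) / np.maximum(sens + beta * grad, 1e-12)
        return x

With beta = 0 the update reduces to the standard ML-EM iteration; an anatomic prior would replace the uniform smoothness weights with weights that vanish across known CT or MR boundaries.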
Correction for Partial-Volume Effect
The accuracy of SPECT and PET for measuring regional radiotracer concentrations in the organs of interest is hampered by the limited spatial resolution of the imaging system. The radionuclide image can accurately quantify the amount and concentration of activity only for sources with dimensions equal to or larger than twice the system’s spatial resolution expressed as full width at half maximum (FWHM); smaller sources only partly occupy this characteristic volume. Although the total number of counts is conserved in emission images of small radionuclide objects, the limited spatial resolution of the imaging system spreads those counts over a volume larger than the physical size of the object. In these cases, the resulting radionuclide image no longer reflects the object’s activity concentration but only its total amount. This phenomenon is called the partial-volume effect (PVE) and can be corrected using recovery coefficients determined in a calibration measurement for objects of simple geometric shape, as sketched below; the method works for anatomic structures that can be approximated by simple geometric objects. More sophisticated approaches have also been developed that correct for this effect using the shape and size of the corresponding structures as assessed by structural imaging (MRI or CT). Corrections for the PVE must be applied after the radionuclide data have been compensated for photon attenuation, scattered radiation, and other physical perturbations by incorporating mathematic models of these effects directly into the reconstruction algorithm.
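A recovery coefficient for a uniform sphere imaged with gaussian resolution can be computed directly, as in the following Python sketch (isotropic resolution and a noise-free model are assumed; the definition of the recovery coefficient as the mean of the blurred image inside the true boundary is one common convention):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def sphere_recovery_coefficient(diameter_mm, fwhm_mm, voxel_mm=1.0):
        """RC = mean of the blurred image of a unit-concentration sphere,
        evaluated inside the true sphere boundary."""
        r = diameter_mm / 2.0
        n = int(2 * (r + 3 * fwhm_mm) / voxel_mm) | 1   # odd grid size
        ax = (np.arange(n) - n // 2) * voxel_mm
        x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
        sphere = (x**2 + y**2 + z**2 <= r**2).astype(float)
        blurred = gaussian_filter(sphere, sigma=fwhm_mm / 2.355 / voxel_mm)
        return blurred[sphere > 0].mean()

    # Correction: true_concentration ~= measured_concentration / RC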
Several methods combining SPECT and MRI (123) or PET and MRI (124,125) for brain imaging, and SPECT and CT (63,126) for cardiac imaging, have been described in the literature. Koole et al. (123) compared 2 methods for quantifying regional uptake in the different compartments of a phantom simulating the basal ganglia on the basis of a matched high-resolution MR image. The first method corrected the SPECT image with multiplicative factors in 5 tissue compartments derived from the segmented MR image. The second method simulates the degradation caused by the nonstationary spatial resolution of the SPECT camera by applying an incremental layer-by-layer blurring with 5-point cross-convolution kernels to the tissue maps generated from the MR image. These studies showed that the second method performs better in the presence of background activity. A general framework that corrects for PVE simultaneously in all identified brain regions uses a PET simulator to calculate recovery and cross-contamination factors for the identified tissue components in a brain model (124). Meltzer et al. (125) compared 2- and 3-compartment MR-based methods for PVE correction using simulations and a multicompartment phantom and concluded that, even though the 3-compartment approach performs better in terms of absolute quantitative accuracy, the 2-compartment algorithm is more appropriate for comparative PET studies.
Definition of Anatomically Guided ROIs
The advent of image fusion methods and dual-modality imaging systems also has enabled new methods of quantifying uptake in radionuclide images. One such technique has been implemented by Klein et al. (127), who defined 3D VOIs on an MRI dataset to quantify radionuclide uptake in a coregistered multislice PET study. The VOI is defined on the MR image slices and transformed to the coordinate space of the PET image. The technique then projects the VOI into sinogram space to quantify the emission data so that the statistical properties of the measurements can be established. Correction factors are also applied to account for differences in voxel size between the MRI and PET data and for possible errors contributed by summing counts from voxels on the edge of each slice. By defining the VOIs on high-resolution MRI datasets, this technique can overcome spill-out effects and other errors related to the limited spatial resolution of the radionuclide image.
A similar method for quantifying SPECT images using a priori data from CT has been developed by Tang (128). In this technique, called template projection, VOIs for the target (e.g., tumor) and background regions are defined on the high-resolution CT image. These VOIs are used as templates that represent idealized radionuclide-containing objects (e.g., tumor vs. background) with unit radionuclide concentration. Planar imaging of these ideal radionuclide objects is simulated mathematically by projecting the templates onto the plane of the radionuclide detector. This is performed using the known geometric transformation between the coordinate axes of the CT and radionuclide imaging systems and includes physical models of photon attenuation and nonideal collimation. The projection operation generates projected templates that are analogous to conventional VOIs in that the projected templates specify the geometry of the tumor and background regions on the projected planar radionuclide images. However, the projected templates are defined initially on the high-resolution CT images rather than on the low-resolution radionuclide images. Furthermore, unlike traditional VOIs, which are uniform, the projected templates are nonuniform and contain information about physical effects included in the projector model.
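The projection of a template through a known attenuation map can be sketched as follows in Python; this coarse voxel-center approximation of the line integrals omits the collimator model of the published projector, and all names are ours:

    import numpy as np

    def project_template(template, mu_map, pixel_cm=0.1):
        """Attenuated parallel projection of a VOI template along axis 0,
        with photons exiting toward row 0 of the array."""
        # attenuation accumulated from each voxel center to the detector
        path = (np.cumsum(mu_map, axis=0) - 0.5 * mu_map) * pixel_cm
        return np.sum(template * np.exp(-path), axis=0)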
Several methods can be used to quantify radionuclide uptake using the projected templates described above (128). For example, a data-fitting algorithm can be used to decompose the counts measured in a conventional VOI into the contributions provided by the specific anatomic regions specified by the projected templates, as sketched below. Because the projected templates are defined on a high-resolution CT image, they can account for spatial overlap, spatial resolution losses, photon attenuation, and other effects that cannot be derived from the radionuclide image alone. The goal of these methods is to extract more accurate radionuclide uptake measurements from correlated CT and radionuclide images than can be achieved using radionuclide imaging alone (129).
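The decomposition itself can be posed as a linear least-squares problem; in the following Python sketch, each projected template supplies one column of the design matrix (names are our assumptions):

    import numpy as np

    def decompose_counts(measured_proj, projected_templates):
        """Estimate per-region activity concentrations by least-squares
        fitting of the measured projection to the projected templates."""
        design = np.stack([t.ravel() for t in projected_templates], axis=1)
        concentrations, *_ = np.linalg.lstsq(design, measured_proj.ravel(),
                                             rcond=None)
        return concentrations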
The template projection technique described above for planar imaging can be extended to quantify target regions in tomographic images (99,128,130) using a method called template projection-reconstruction. This process begins by repeating the template projection operation for all angles sampled by the tomographic acquisition of the real radionuclide data. After the template projection data are modeled for the target and for the neighboring objects, they are reconstructed with the same reconstruction algorithm (e.g., ML-EM or FBP) used for reconstructing the emission data. The reconstructed templates contain information about the physical effects included in the modeling process and can be used to quantify the emission tomographic data. The corrected mean activity concentration in the target region is calculated by voxel-by-voxel rescaling: the reconstructed template for the background region is used to subtract the spill-in of background activity into the target region, and the background-subtracted voxel values are then divided by the values of the reconstructed target template to correct for spill-out and thereby compensate for partial-volume errors in the measured radioactivity concentrations. This voxel-by-voxel correction, sketched below, is analogous to scaling the reconstructed data with object size- and shape-dependent recovery factors (57).
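A minimal Python sketch of the voxel-by-voxel correction follows; the background concentration bkg_conc is assumed to be estimated beforehand, and all names are ours:

    import numpy as np

    def corrected_target_mean(recon, tmpl_target, tmpl_bkg, bkg_conc, eps=1e-6):
        """Subtract the spill-in predicted by the reconstructed background
        template, then divide by the reconstructed target template to undo
        spill-out, and average over the target region."""
        inside = tmpl_target > eps
        vals = (recon[inside] - bkg_conc * tmpl_bkg[inside]) / tmpl_target[inside]
        return vals.mean()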
The template projection-reconstruction technique described above has been tested in a phantom experiment using the combined SPECT/CT imaging system at the University of California, San Francisco (128). This study quantified the radionuclide content of spheric lesions of 2 sizes (2.7 and 52 mL) with different tumor-to-background concentrations. Radionuclide quantitation was performed using (a) a conjugate-view technique (57) and (b) SPECT alone, in which the radionuclide images were reconstructed with an ML-EM algorithm that included corrections for photon attenuation and for the geometric response of the collimator. These values were then compared against those obtained with template projection and with template projection-reconstruction by defining object and background ROIs directly on the CT images. Radionuclide uptake values in the large lesion (52 mL) gave errors in the range of 9% to 61% for conjugate views and −23% to −4% for SPECT, but errors were limited to −4% to 5% for template projection and −3% to 4% for template projection-reconstruction. For the smaller lesion (2.7 mL), errors ranged from −31% to 529% for conjugate views and −41% to −10% for SPECT, versus −4% to 10% for template projection and −27% to −5% for template projection-reconstruction. These values illustrate that measurements that use a priori structural information from CT (i.e., template projection and template projection-reconstruction) can be more accurate than those obtained by defining ROIs directly on the planar (conjugate-view) or tomographic (SPECT) radionuclide images.
Kalki et al. (63) and Da Silva et al. (126) evaluated the template projection-reconstruction method in a porcine model of myocardial perfusion using in vivo radionuclide imaging of 99mTc-sestamibi with a SPECT/CT system. In both studies, SPECT/CT data were used to reconstruct radionuclide tomograms that included corrections for collimator response and photon attenuation. The template projection-reconstruction method quantified the regional myocardial concentration of the radiopharmaceutical in the intact animal with an absolute error of approximately −10% compared with direct measurements on excised tissue. Reconstructions without corrections for either photon attenuation or collimator response produced images with an absolute error of approximately −90%, and in vivo values obtained with attenuation correction alone had an absolute error of −55%. These results demonstrate that accuracy increases when a priori information from CT is incorporated into the radionuclide quantitation measurement.
Dosimetric, Logistic, and Computing Considerations
Few dose estimates have been reported in the literature on transmission scanning in emission tomography. Unfortunately, many studies were superficial, reporting only rough estimates of the maximum absorbed dose at the skin surface obtained with survey meters, rather than effective dose equivalent (EDE) values (a quantity suitable for comparing the risks of different procedures in nuclear medicine, radiology, and other applications involving ionizing radiation) measured with anthropomorphic phantoms and suitable dosimeters (41,131).
Table 1 summarizes EDE estimates for different transmission scanning sources and geometries. It is worth noting the discrepancy between results reported in the literature even when the same experimental setup, scanning source geometry, and dosimeters were used; the discrepancies may be explained by differences in the positioning of thermoluminescent dosimeters and other uncontrolled factors. The use of sliding collimated line sources with parallel collimation allows a significant dose reduction compared with a static collimated line source and fanbeam collimation (131). Van Laere et al. (41) reported an EDE rate of 32 μSv/GBq·h for an uncollimated 153Gd transmission line source in typical brain scanning, whereas Almeida et al. (131) reported only 0.52 μSv/GBq·h for a collimated source. One scanner manufacturer uses a low-dose x-ray device that adds a dose approximately 4-fold lower than that of a state-of-the-art CT scanner; the patient dose for a typical scan ranges from 1.3 mGy at the center of the CT dose index phantom (a 32-cm-diameter plastic cylinder) to 5 mGy at the surface (skin dose) (61). The radiation dose for another system was 3.4 mGy per slice at the center and 4.3 mGy per slice at the surface of a 16-cm-diameter tissue-equivalent phantom (62). These results serve only as guides because, in most practical situations, transmission scanning source geometries have not been characterized for each examination and specific camera using extensive Monte Carlo simulations or experimental measurements with anthropomorphic phantoms. To the best of our knowledge, these estimates are the only values available to end users, and further effort is needed to fully characterize the dosimetry of transmission scanning in emission tomography.
It is likely that, if attenuation correction could be performed rapidly and accurately, its benefits to whole-body emission tomography would be considered sufficient to offset the increased acquisition time. Fortunately, several methods are under development or already have been implemented to perform attenuation correction more rapidly, more accurately, and with less noise than conventional techniques. Although the earliest emission tomographic studies performed transmission imaging before radiotracer administration and emission imaging, such protocols are considerably time-consuming because the transmission and emission sessions are completely separate. Methods that acquire transmission data after tracer injection are now performed more commonly (137). Such methods save time but may introduce artifacts if the tracer activity changes in location or intensity during data acquisition. A promising approach is the use of segmentation algorithms both to reduce the noise in the attenuation images and to shorten the acquisition by computer-aided classification of tissue density into a few discrete categories based on the transmission scan and prior knowledge of expected tissue attenuation coefficients; indeed, algorithms requiring only a few minutes of acquisition per level have been developed using 68Ge and 137Cs sources (88). More rapid data collection using higher photon-flux single-photon or x-ray sources is also promising. Combined SPECT/CT or PET/CT devices acquire transmission data rapidly and simplify the registration of the emission data to the CT-derived anatomic images (61,62,64,97). The cost of combined ECT/CT systems may be prohibitive for small nuclear medicine departments, and correlation of data acquired on different scanners suffers from the usual problems of working with multimodality images, namely the difficulty of accurate coregistration between the modalities. Nevertheless, dual-modality imaging is gaining popularity and acceptance for several important clinical applications.
Considering the difficulties associated with transmission-based attenuation correction and the limitations of current calculated attenuation correction, transmissionless attenuation correction, were it readily available, might be the method of choice in a busy clinical department for the foreseeable future, at least as a second-best approach, and it remains a focus of many research groups. However, most manufacturers of nuclear medicine instrumentation still rely on transmission-based attenuation correction and have not implemented sophisticated transmissionless methods for clinical use. Attractive transmissionless techniques for compensating for photon attenuation use either continuous (48–51) or discrete (53,54) consistency conditions. Some of these methods are considerably more computationally intensive than conventional techniques (44), and, at present, the computation time on the computing facilities commonly available in nuclear medicine departments remains prohibitive, especially for large-aperture cameras with fine sampling. However, with the development of more powerful multiple-processor parallel architectures and their use in conjunction with intelligent algorithms, implementing transmissionless attenuation correction may become more tractable. It should be pointed out that, even though much worthwhile research has been performed in this area, there is no clear evidence from the published literature regarding the applicability of these techniques in a clinical environment.
CONCLUSION AND FUTURE PROSPECTS
The design of optimal transmission scanning geometries that integrate recent research findings has become a goal of both the academic community and the nuclear medicine industry. Systems are being designed for a range of needs, from low-cost clinical applications to demanding research studies, and all of these systems undergo frequent revisions of both hardware and software components. Emerging technologies include the use of LSO and gadolinium orthosilicate (GSO) as alternatives to bismuth germanate (BGO) detectors in PET, the use of layered crystals, and other schemes to compensate for depth-of-interaction errors. An important outcome of research in the field is the improvement of cost-performance trade-offs and increased patient throughput of clinical SPECT/PET systems. Combined and multimodality (SPECT/PET/CT) imaging systems have been introduced for acquiring coregistered anatomic and functional images in a way that enables accurate computation of the attenuation map and effective attenuation correction. Thus, many different design paths are being pursued, and it is still uncertain which system designs will be retained for clinical use as nuclear medicine technology continues to develop.
Another aspect that deserves special attention, but which is not discussed in this article, is the need for efficient and accurate quality control and quality assurance techniques. Quality control procedures for transmission-emission tomographic systems are needed for ensuring a good-quality attenuation map and accurate attenuation correction. Currently, quality control for emission-transmission imaging is still undergoing development and only a few guidelines have been proposed for routine implementation.
Remarkable achievements and continuing efforts to enhance the performance of radionuclide imaging systems have improved both the visual quality and the quantitative accuracy of emission tomography. However, physical effects such as photon attenuation, scatter, spatial resolution characteristics, and statistical noise still limit its performance. The accuracy of attenuation correction in emission tomography has been validated with analytic (54) and Monte Carlo (40) simulations, experimental phantom studies (88), animal studies (126), biopsy samples taken after imaging (138), and receiver-operating-characteristic-based analysis of clinical data (139). The majority of attenuation correction methods described in the literature have been applied primarily to computer-simulated images and simplified experimental arrangements, and some solutions are less suitable for routine application in patients than in phantom simulations. The quantitative accuracy reported in the literature depends strongly on multiple factors, including the phantom geometry, source size and distribution, noise characteristics, and the type of scanner. Quantitative accuracy within 10% can be achieved when simulating clinically realistic activity distributions in anthropomorphic phantoms, which would be adequate for the majority of clinical applications. However, the accuracy obtained in phantom studies is unlikely to be reached in clinical investigations, and the true clinical feasibility of these methods has yet to be fully investigated.
The role of PET during the past decade has evolved rapidly from that of a pure research tool to a methodology of enormous clinical potential (140). 18F-FDG PET is widely used in the diagnosis, staging, and assessment of tumor response to therapy because metabolic changes generally precede the more conventionally measured change in tumor size. Data are accumulating rapidly to validate the efficacy of FDG imaging in a wide variety of malignant tumors, with sensitivities and specificities often above 90%. Although metabolic imaging is an obvious choice, the design of specific clinical protocols is still under development. The tracers or combinations of tracers to be used, the timing of imaging after therapy, the selection of optimal acquisition and processing protocols, and the method of accurately performing quantitative or semiquantitative analysis of the data (e.g., the standardized uptake value) are still undetermined (141). Moreover, each tumor-therapy combination may need to be independently optimized and validated.
We expect that whole-body FDG PET-based techniques will prove accurate and cost-effective for staging or restaging of different cancer types and will contribute to improved management and monitoring of cancer patients at reasonable medical cost. In addition to improving overall scanner performance, further work will focus on implementing practical corrections for patient-related perturbations such as nonhomogeneous scatter and photon attenuation. During the next few years, such sophisticated techniques are expected to become more widely available in clinical settings, no longer limited to research studies in nuclear medicine departments with advanced scientific and technical support. Commercial hardware and software for accurate attenuation correction can therefore be expected to undergo major revisions to meet emerging clinical applications and challenging research requirements.
Acknowledgments
The authors gratefully thank Dale Bailey, PhD, of the Department of Nuclear Medicine, Royal North Shore Hospital, Sydney, Australia, for his valuable comments and suggestions on this article. This work was supported by the Swiss National Science Foundation under grant SNSF 3152-062008.
Footnotes
Received May 2, 2002; revision accepted Aug. 21, 2002.
For correspondence or reprints contact: Habib Zaidi, PhD, Division of Nuclear Medicine, Geneva University Hospital, CH-1211 Geneva 4, Switzerland.
E-mail: habib.zaidi@hcuge.ch