Journal of Nuclear Medicine

Research Article | Physics and Instrumentation

Improving the Accuracy of Simultaneously Reconstructed Activity and Attenuation Maps Using Deep Learning

Donghwi Hwang,1,2 Kyeong Yun Kim,1,2 Seung Kwan Kang,1,2 Seongho Seo,3 Jin Chul Paeng,2,4 Dong Soo Lee2,4,5 and Jae Sung Lee1,2,4
Journal of Nuclear Medicine October 2018, 59 (10) 1624-1629; DOI: https://doi.org/10.2967/jnumed.117.202317
1 Department of Biomedical Sciences, Seoul National University, Seoul, Korea
2 Department of Nuclear Medicine, Seoul National University, Seoul, Korea
3 Department of Neuroscience, College of Medicine, Gachon University, Incheon, Korea
4 Institute of Radiation Medicine, Medical Research Center, Seoul National University, Seoul, Korea
5 Department of Molecular Medicine and Biopharmaceutical Sciences, Graduate School of Convergence Science and Technology, Seoul National University, Suwon, Korea

Abstract

Simultaneous reconstruction of activity and attenuation using the maximum-likelihood reconstruction of activity and attenuation (MLAA) algorithm augmented by time-of-flight information is a promising method for PET attenuation correction. However, it still suffers from several problems, including crosstalk artifacts, slow convergence, and noisy attenuation maps (μ-maps). In this work, we developed deep convolutional neural networks (CNNs) to overcome these MLAA limitations, and we verified their feasibility using a clinical brain PET dataset. Methods: We applied the proposed method to one of the most challenging PET cases for simultaneous image reconstruction (18F-fluorinated-N-3-fluoropropyl-2-β-carboxymethoxy-3-β-(4-iodophenyl)nortropane [18F-FP-CIT] PET scans, which show highly specific binding to the striatum of the brain). Three different CNN architectures (convolutional autoencoder [CAE], Unet, and a Hybrid of CAE and Unet) were designed and trained to learn a CT-derived μ-map (μ-CT) from the MLAA-generated activity distribution and μ-map (μ-MLAA). The PET/CT data of 40 patients with suspected Parkinson disease were used for 5-fold cross-validation. For the training of the CNNs, 800,000 transverse PET and CT slices augmented from 32 patient datasets were used. The similarity to μ-CT of the CNN-generated μ-maps (μ-CAE, μ-Unet, and μ-Hybrid) and μ-MLAA was compared using Dice similarity coefficients. In addition, we compared the activity concentrations in specific (striatum) and nonspecific (cerebellum and occipital cortex) binding regions and the binding ratios in the striatum in the PET activity images reconstructed using those μ-maps. Results: The CNNs generated less noisy and more uniform μ-maps than the original μ-MLAA. Moreover, air cavities and bones were better resolved in the proposed CNN outputs. In addition, the proposed deep learning approach was useful for mitigating the crosstalk problem in the MLAA reconstruction. The Hybrid network of CAE and Unet yielded the μ-maps most similar to μ-CT (Dice similarity coefficient in the whole head: 0.79 for bone and 0.72 for air cavities), resulting in only about a 5% error in activity and binding ratio quantification. Conclusion: The proposed deep learning approach is promising for accurate attenuation correction of activity distributions in time-of-flight PET systems.

  • deep learning
  • simultaneous reconstruction
  • crosstalk
  • denoising
  • quantification

The attenuation correction of annihilation photons is a critical procedure in PET image generation for providing accurate quantitative information on the radiotracer distribution. In current PET/CT systems, the linear attenuation coefficient (μ) for 511-keV photons is converted from the CT Hounsfield units (1,2). In PET/MRI, various approaches, including Dixon and ultrashort echo-time MRI segmentation- and atlas-based algorithms, have been suggested (3–6). However, a limitation of CT-based PET attenuation correction is the artifacts attributed to position mismatch between the PET and CT scans (7–9). MRI-based PET attenuation correction remains far from ideal on account of the inaccurately estimated linear attenuation coefficients (μ-values) in skeletal structures and heterogeneous soft tissues (10–12). In particular, bones are poorly identified in whole-body PET/MRI studies (13), and local MRI signal loss produced by metallic implants results in considerable error in image segmentation.

Simultaneous reconstruction of activity and attenuation using only emission data is a promising method for PET attenuation correction, enabled by recent advances in time-of-flight technology (14–17). Because no anatomic images are necessary for the attenuation correction if the simultaneous reconstruction works properly, it is a potentially significant approach to overcoming the above-mentioned limitations of PET attenuation correction in PET/CT and PET/MRI (18–20). Among the simultaneous reconstruction algorithms for PET attenuation correction, the maximum-likelihood reconstruction of activity and attenuation (MLAA) method has the advantages of providing a μ-map and enabling the incorporation of prior knowledge of the μ-values for global scaling (15,16,21,22). However, because of the limited timing resolution of current clinical PET scanners, MLAA suffers from several problems, including crosstalk artifacts (between activity and μ-maps), slow convergence, and noisy μ-maps (23,24).

Recently, deep learning has outperformed the traditional machine learning and Bayesian approaches in many different applications (25,26). In addition, recent studies have shown the remarkable advancements in the noise reduction of CT based on deep learning technology (27,28). Accordingly, it is of interest whether the deep learning approach can mitigate the limitations of MLAA simultaneous reconstruction. In this study, we therefore designed deep convolutional neural networks (CNNs) to be suitable for MLAA output (activity distribution and μ-map) processing. We examined the quality improvement of MLAA μ-maps and emission PET images by applying deep learning with a focus on noise and crosstalk reduction.

We applied this new approach to one of the most challenging clinical PET cases for simultaneous reconstruction (brain dopamine transporter imaging). The crosstalk between activity and attenuation is severe, and the background noise level is high, in dopaminergic PET images because the tracers bind with high specificity only in the striatum of the brain.

MATERIALS AND METHODS

Dataset

The 18F-fluorinated-N-3-fluoropropyl-2-β-carboxymethoxy-3-β-(4-iodophenyl)nortropane (18F-FP-CIT) brain PET/CT scan data of 40 patients (16 men and 24 women; mean age ± SD, 67.5 ± 9.2 y) with suspected Parkinson disease were retrospectively analyzed. In 14 of the 40 subjects, the tracer uptake in both basal ganglia was preserved. The retrospective use of the scan data and waiver of consent were approved by the Institutional Review Board of our institute. The PET/CT data were acquired using a Biograph mCT 40 scanner (Siemens Healthcare). The PET scanner achieves an effective timing resolution of 580 ps. The PET/CT imaging was performed for 10 min at a single PET bed position 90 min after the intravenous injection of 18F-FP-CIT (189.7 MBq on average). The head of each participant was positioned in a head holder attached to the patient bed, and the PET/CT scan followed the routine clinical protocol for brain studies (topogram, CT, and emission PET scans). The CT images were reconstructed in a 512 × 512 × 149 matrix with a voxel size of 0.59 × 0.59 × 1.5 mm and converted into the μ-map for 511-keV photons (μ-CT, 200 × 200 × 109; 2.04 × 2.04 × 2.03 mm).
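
CT Hounsfield units are commonly mapped to 511-keV attenuation coefficients with a bilinear transform (1,2). The following is a minimal Python sketch of such a conversion; the water value is the standard approximate coefficient at 511 keV, and the bone slope is an assumption chosen here only so that 300 HU maps to the 0.1134 cm−1 bone threshold used later in this article. Real scanners use vendor-calibrated, kVp-dependent tables.

import numpy as np

def hu_to_mu_511kev(hu, mu_water=0.096, bone_slope=5.8e-5):
    # Bilinear HU-to-mu conversion (cm^-1) for 511-keV photons.
    # Sketch only: mu_water approximates water at 511 keV; bone_slope is
    # an assumed placeholder, not a scanner-calibrated value.
    hu = np.asarray(hu, dtype=np.float64)
    mu = np.where(hu <= 0.0,
                  mu_water * (1.0 + hu / 1000.0),  # air/soft-tissue region
                  mu_water + bone_slope * hu)      # bone region
    return np.clip(mu, 0.0, None)                  # no negative attenuation

print(hu_to_mu_511kev([-1000, 0, 300]))            # ~[0.0, 0.096, 0.1134]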

We reconstructed all datasets using ordered-subset expectation maximization (OSEM) with μ-CT (3 iterations, 21 subsets, 5-mm gaussian postprocessing filter) and MLAA with the time-of-flight information (8 iterations and 21 subsets, 5-mm gaussian postprocessing filter) into 200 × 200 × 109 matrices. To correct the global scaling problem, the boundary constraint suggested in the original time-of-flight MLAA paper (15) was applied during the attenuation image estimation in the MLAA.

To evaluate the performance of proposed CNNs, we performed 5-fold cross-validation. The 40 patient datasets were randomly partitioned into 5 groups (8 in each group). The CNNs were trained with 4 groups and tested with the other one. This cross-validation process with different test sets was repeated 5 times. For the CNN training and testing, the activity distribution and μ-map derived from the MLAA (λ-MLAA and μ-MLAA) were used as input X, and μ-CT was used as output Y. All the input and output images were used in 2-dimensional slice format.
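
As a concrete illustration of this scheme, the following Python sketch partitions 40 patient identifiers into 5 folds of 8; the identifiers and seed are hypothetical.

import numpy as np

rng = np.random.default_rng(seed=0)
patient_ids = np.arange(40)                       # 40 patient datasets
rng.shuffle(patient_ids)
folds = np.array_split(patient_ids, 5)            # 5 groups of 8 patients

for k in range(5):
    test_ids = folds[k]
    train_ids = np.concatenate([folds[j] for j in range(5) if j != k])
    # train the CNN on (lambda-MLAA, mu-MLAA) slices of train_ids,
    # then evaluate against mu-CT on test_ids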

Network Architecture

We tested 3 different CNN architectures (Fig. 1). The first was the convolutional autoencoder (CAE). The autoencoder was originally proposed for unsupervised feature learning but has also shown good performance in image restoration and denoising (29). The second was Unet, which has shown excellent performance in various tasks, including image segmentation and denoising (30). The Unet structure is similar to that of the CAE; unlike the CAE, however, Unet adds connections from the contracting path that enable high-resolution features to be combined in the output layers. The third was the Hybrid of CAE and Unet, which we propose herein to prevent noise propagation from the high-frequency features of the PET activity distribution (Hybrid network).

FIGURE 1. CNN architectures used to learn μ-CT from λ-MLAA and μ-MLAA. (A) CAE. (B) Unet. (C) Hybrid network of CAE and Unet. Green and red vertical strips at far left indicate inputs to CNN, and red stripes at right indicate output. Each box represents multichannel feature map. Number of feature maps and dimension of each feature map are denoted on interior and bottom of box. Data flow is left to right through contracting path to capture context and symmetric expanding path to recover image. Arrows stand for copying of feature maps, and sky-blue boxes are copied feature maps.

The 3 networks (CAE, Unet, and Hybrid) consisted of convolution layers, rectified linear units (an activation function defined as f(x) = max(0, x) and used to provide nonlinearity in the learning model), 2 × 2 max-pooling layers, deconvolution layers, and a 1 × 1 convolution layer (Fig. 1). Max pooling, a reduction operation that takes the maximum value in each rectangular window, was required to reduce the number of parameters of the network and to provide a shift-invariant characteristic to the CNN. In the first layer, we performed a convolution with a 3 × 3 × 2 kernel to merge the 2 input datasets (MLAA activity distribution and μ-map). Each convolution and deconvolution layer except the first convolution layer was composed of a 3 × 3 kernel and rectified linear units. The last 1 × 1 convolution layer served as a scaling layer. The number of feature maps in the first layer was determined empirically to yield the best results. We implemented the networks using TensorFlow, an open-source library for deep learning (31).
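
This description can be made concrete with a small tf.keras sketch of a Unet-style variant: the 2-channel input (λ-MLAA and μ-MLAA) is merged by the first convolution, features are contracted by 2 × 2 max pooling and expanded by deconvolution, and a final 1 × 1 convolution maps back to a single-channel μ-map. The depth, the feature-map counts beyond the first layer, and the skip-connection placement are illustrative assumptions, not the exact published architectures.

import tensorflow as tf
from tensorflow.keras import layers

def build_unet_like(n_base=20, input_shape=(200, 200, 2)):
    # Two input channels: lambda-MLAA activity and mu-MLAA map.
    inputs = layers.Input(shape=input_shape)

    # First layer: 3x3 convolution over both channels merges the two inputs.
    c1 = layers.Conv2D(n_base, 3, padding="same", activation="relu")(inputs)
    p1 = layers.MaxPool2D(2)(c1)                  # 2x2 max pooling

    c2 = layers.Conv2D(2 * n_base, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPool2D(2)(c2)

    c3 = layers.Conv2D(4 * n_base, 3, padding="same", activation="relu")(p2)

    # Expanding path with deconvolution (transposed convolution).
    u2 = layers.Conv2DTranspose(2 * n_base, 3, strides=2, padding="same",
                                activation="relu")(c3)
    u2 = layers.Concatenate()([u2, c2])           # Unet-style skip connection
    u2 = layers.Conv2D(2 * n_base, 3, padding="same", activation="relu")(u2)

    u1 = layers.Conv2DTranspose(n_base, 3, strides=2, padding="same",
                                activation="relu")(u2)
    u1 = layers.Concatenate()([u1, c1])
    u1 = layers.Conv2D(n_base, 3, padding="same", activation="relu")(u1)

    # Final 1x1 convolution performs the per-pixel scaling to mu-values.
    outputs = layers.Conv2D(1, 1)(u1)
    return tf.keras.Model(inputs, outputs)

A CAE variant would simply omit the Concatenate skip connections, and a Hybrid variant would restrict which features are carried across, in keeping with the goal of blocking noise propagation from the activity channel.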

Data Augmentation and Training

Because the number of parameters in the deep networks was too large to be estimated from the limited patient dataset, we had to increase the training data (data augmentation). We conducted the data augmentation by rotating the images by −6°, −3°, 0°, 3°, and 6° in 3-dimensional orthogonal planes (5 × 5 × 5 = 125 times augmentation). Additionally, we used images flipped in the transverse plane to double the training set. Accordingly, the total number of slices available for training was 32 (patient) × 109 (transverse plane) × 125 (rotating) × 2 (flip) = 872,000. Among them, only 800,000 slices were used as the training set after eliminating slices with only negligibly small pixel values and the last 5 slices at the bottom.
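
A minimal Python sketch of this 125 × 2 augmentation follows; it assumes a (z, y, x) volume layout and linear interpolation, both of which are assumptions since the original implementation details are not specified.

import itertools
from scipy.ndimage import rotate

ANGLES = (-6, -3, 0, 3, 6)

def augment_volume(vol):
    # One rotated copy per combination of one angle per orthogonal plane
    # (5 * 5 * 5 = 125), each also flipped in the transverse plane (x2).
    for az, ay, ax in itertools.product(ANGLES, repeat=3):
        r = rotate(vol, az, axes=(1, 2), reshape=False, order=1)  # transverse
        r = rotate(r, ay, axes=(0, 2), reshape=False, order=1)    # coronal
        r = rotate(r, ax, axes=(0, 1), reshape=False, order=1)    # sagittal
        yield r                       # rotated copy
        yield r[:, :, ::-1]           # transverse-plane flip

# Usage: augment_volume(vol) yields 250 volumes per patient; slicing their
# transverse planes gives 109 slices each (32 * 109 * 125 * 2 = 872,000).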

The cost function was the L2-norm between the network output and μ-CT. It was minimized using the adaptive moment estimation (Adam) method (32). Weights in the networks were initialized using the Xavier initialization method, which engendered a faster convergence rate than uniform or gaussian random initialization (33). To prevent overfitting, a portion of the nodes was dropped out during training (34); in each convolution layer, the keep probability (ratio of remaining nodes to the total) was 0.7. The batch size was 60, and the number of epochs (the number of times the algorithm sees the entire dataset) was 6. On a Ryzen 1700X central processing unit with a GTX 1080 graphics processing unit, each epoch took approximately 300 min.
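
In tf.keras terms, this training setup might look like the following sketch. The model comes from the earlier architecture sketch, and train_x/train_y are placeholder arrays standing in for the real training slices; Keras' default glorot_uniform initializer corresponds to Xavier initialization, and a keep probability of 0.7 corresponds to layers.Dropout(0.3) after each convolution (omitted from the architecture sketch).

import numpy as np
import tensorflow as tf

# Hypothetical placeholders: (lambda-MLAA, mu-MLAA) input slices and mu-CT targets.
train_x = np.zeros((60, 200, 200, 2), dtype=np.float32)
train_y = np.zeros((60, 200, 200, 1), dtype=np.float32)

model = build_unet_like()                             # from the earlier sketch
model.compile(optimizer=tf.keras.optimizers.Adam(),   # adaptive moment estimation (32)
              loss="mean_squared_error")              # L2-norm cost function
model.fit(train_x, train_y, batch_size=60, epochs=6)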

Image Analysis

The μ-maps obtained using MLAA before (μ-MLAA) and after (μ-CAE, μ-Unet, and μ-Hybrid) applying the deep CNNs to the test set were compared with the ground truth, μ-CT. The similarity of the μ-maps was evaluated using the Dice similarity coefficient (D) (3,35), which measures the overlap of the segmented bone and air regions according to the following equation:

D = 2N(μ-CT ∩ μ-PET)/(Nμ-CT + Nμ-PET),

where Nμ-CT and Nμ-PET are, respectively, the number of bone (or air) voxels in the μ-maps derived from CT and PET (emission only) data (3,35), and N(μ-CT ∩ μ-PET) is the number of overlapping voxels between the CT and PET μ-maps. In the μ-maps, voxels with a μ-value of more than 0.1134 cm−1 (= 300 Hounsfield units) were classified as bone; those with a μ-value of less than 0.0475 cm−1 (= −500 Hounsfield units) were denoted as air. Voxels with μ-values between these thresholds were regarded as soft tissue (3,36).
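
The Dice computation and the thresholding above reduce to a few lines of Python; the μ-map arrays below are random placeholders standing in for real CT- and emission-derived volumes.

import numpy as np

BONE_MU = 0.1134   # mu above this is bone (~300 HU)
AIR_MU = 0.0475    # mu below this is air (~-500 HU)

def dice(mask_a, mask_b):
    # Dice similarity coefficient between two binary masks.
    overlap = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * overlap / (mask_a.sum() + mask_b.sum())

# Hypothetical mu-maps (cm^-1) in place of mu-CT and an emission-derived map.
mu_ct = np.random.rand(109, 200, 200) * 0.12
mu_pet = np.random.rand(109, 200, 200) * 0.12

d_bone = dice(mu_ct > BONE_MU, mu_pet > BONE_MU)   # bone-region overlap
d_air = dice(mu_ct < AIR_MU, mu_pet < AIR_MU)      # air-region overlap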

For comparison with the ground-truth PET activity distribution obtained using OSEM reconstruction with μ-CT, activity images were generated using the same OSEM algorithm and parameters (8 iterations and 21 subsets, 5-mm gaussian postprocessing filter) with μ-MLAA, μ-CAE, μ-Unet, and μ-Hybrid (Fig. 2). The ground-truth PET activity was spatially normalized using an in-house 18F-FP-CIT PET template and Statistical Parametric Mapping software (version 8; http://www.fil.ion.ucl.ac.uk/spm), and the same transformation parameters were applied to the other activity images. We then measured the PET activity concentration in 4 regions of interest (head of caudate nucleus, putamen, occipital cortex, and cerebellum) using an automatic region-of-interest delineation method with statistical probabilistic anatomic maps (37,38). For the comparison, the relative ratio of specific binding, (Cspecific − Cnonspecific)/Cnonspecific, was calculated, where Cspecific and Cnonspecific are the activity concentrations in the specific and nonspecific (cerebellum or occipital cortex) binding regions (3).
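
For completeness, the binding-ratio definition in Python form; the ROI activity concentrations are hypothetical example values.

def binding_ratio(c_specific, c_nonspecific):
    # Relative specific-binding ratio as defined above.
    return (c_specific - c_nonspecific) / c_nonspecific

# Hypothetical mean ROI concentrations (e.g., putamen vs. cerebellum):
br = binding_ratio(c_specific=12.4, c_nonspecific=4.1)   # ~2.02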

FIGURE 2. Flow chart of image analysis. For comparison, emission PET sinogram was reconstructed using μ-maps obtained using MLAA before (μ-MLAA) and after (μ-CAE, μ-Unet, and μ-Hybrid) applying deep CNNs, as well as ground-truth μ-CT. TOF = time of flight.

RESULTS

The CNNs remarkably reduced the noise and crosstalk in μ-MLAA. In Figure 3, the CNN-generated μ-maps (μ-CAE, μ-Unet, and μ-Hybrid) of a patient are compared with the μ-CT and μ-MLAA. As expected, the CNNs generated less noisy and more uniform images than μ-MLAA. Among the CNNs, the CAE yielded the most blurred μ-maps and Unet the sharpest. The air cavities and bones were better resolved in the proposed CNN outputs than in μ-MLAA. However, the details of these structures did not perfectly match the μ-CT. Moreover, slight discontinuities of air cavities and bone structures still appeared in the sagittal and coronal planes because the CNNs were applied to 2-dimensional transverse slices. Figure 3 also shows that the proposed deep learning approach is useful for mitigating the crosstalk problem in MLAA reconstruction. The red arrows on μ-MLAA point to the striatal region, where the crosstalk between activity and attenuation is substantial in MLAA outputs. This artifact disappears in the μ-maps corrected by deep learning (μ-CAE, μ-Unet, and μ-Hybrid).

FIGURE 3. Comparison of CNN outputs (μ-CAE, μ-Unet, and μ-Hybrid) with μ-MLAA and μ-CT. Red and yellow arrows indicate, respectively, crosstalk artifacts and bone estimation errors shown in μ-MLAA.

The CNN-generated μ-maps showed higher similarity to the μ-CT than the original μ-MLAA did. In Figure 4, root-mean-square errors relative to the μ-CT are plotted across the slice axial location (average of all 40 test subjects). The CNN-generated μ-maps yielded smaller errors than μ-MLAA at almost all axial locations. The Hybrid network outperformed the CAE and Unet at the top of the head, and it achieved approximately 50% error reduction relative to the original MLAA in the μ-value estimation. The bias and root-mean-square error of the μ-maps relative to the μ-CT are summarized in Supplemental Figure 1 (supplemental materials are available at http://jnm.snmjournals.org).

FIGURE 4. Root-mean-square errors (RMSE) relative to μ-CT plotted across slice axial location (average of 40 test sets).

As shown in Table 1, the Dice similarity coefficients for air and bone, measured in the whole head and in the cranial bone region only, were generally much higher in the CNN-generated μ-maps. The SD of the Dice similarity coefficients was considerably smaller in the CNN-generated μ-maps than in μ-MLAA, indicating improved consistency of μ-value estimation. In μ-MLAA, the skull intensity and thickness were under- or overestimated in some regions (yellow arrows in Fig. 3). However, the CNNs properly corrected these errors. Among the CNNs, the Hybrid network and the CAE yielded the highest and lowest Dice similarity coefficients, respectively, in all the regions. Supplemental Figure 2 shows that these results were consistent across all the cross-validation sets.

TABLE 1. Dice Similarity Coefficients with μ-CT for Whole Head and Cranial Bone Region

The enhancement in μ-map quality and accuracy achieved by applying the deep CNNs improved the accuracy of the quantification of the regional activity and binding ratio of 18F-FP-CIT PET. The percentage error maps of the spatially normalized activity distribution (average of 40 test subjects) are compared in Figure 5, indicating the reduced error in activity distribution with Unet and the Hybrid network. Meanwhile, Figure 6 shows the percentage error in activity and binding ratio estimation relative to the ground truth (OSEM with μ-CT). The μ-MLAA yields a negative bias in activity quantification that exceeds 10% in the occipital cortex and striatum. The error is reduced using μ-Unet and μ-Hybrid.

FIGURE 5. Percentage error map of spatially normalized activity distribution (average of 40 test sets).

FIGURE 6. Percentage error in activity (A) and binding ratio (B) estimation relative to ground truth (OSEM with μ-CT). Each horizontal bar and vertical box indicates median and SD, respectively. In B, specific and nonspecific regions for binding ratio calculation are indicated as "specific (nonspecific)."

DISCUSSION

Supervised and unsupervised machine learning methods based on artificial neural networks have been investigated for various biomedical engineering applications (39,40). In nuclear medicine image interpretation and processing, the main applications have been learning the difference between patient and control data and predicting the prognosis after treatment on the basis of region-of-interest–driven features (41–43). Additionally, data-driven blind source separation techniques based on unsupervised neural networks have been successfully applied to dynamic PET data for the separation of various physiologic and anatomic components (44–47). The use of artificial neural network techniques has also been suggested for more accurate and reliable determination of the annihilation photon interaction position in PET detector blocks (48,49). Meanwhile, deep learning, an emerging technology in machine learning, is showing its initial impact on the medical imaging field (50). However, problem-specific design and optimization of deep networks, along with rigorous validation on real clinical data, are required to justify the medical use of this emergent technology.

One of the main limitations of simultaneous activity and attenuation reconstruction is the crosstalk between activity and attenuation in the output images. These crosstalk artifacts are most severe in regions with high contrast against the background, which may correspond to abnormal radiotracer uptake requiring high accuracy in activity quantification. The deep CNNs proposed in this paper outperformed the original MLAA algorithm in suppressing the crosstalk and noise in the dopamine transporter PET images. The mitigation of crosstalk artifacts by the CNNs was not simply a consequence of reducing the noise (or suppressing the high-frequency features) in the μ-maps and recognizing the location of crosstalk artifacts (Supplemental Fig. 4). Only when the activity and attenuation information were jointly processed by the 3-dimensional convolution kernel at the first layer of the networks was the crosstalk successfully suppressed. The joint feature learning from both the activity and the attenuation at this early stage was also useful for the accurate restoration of bone structures and air cavities in the μ-maps (Fig. 3). The lower radioactivity in bone and air relative to soft tissue and cerebrospinal fluid would enable the CNNs to learn how to differentiate them correctly.

Most CNN hyperparameters were determined empirically through trial and error. The performance of the CNN was not much influenced by the kernel size (kernels of 5 or 7 yielded results similar to 3). Twenty feature maps in the first layer yielded the lowest and most stable learning curve; in contrast, the learning curve did not converge with fewer than 12 feature maps and overshot at early iterations with more than 28. The learning rate and other parameters were also determined mainly by observing the learning curves.

The CNNs trained in this study yielded better μ-maps than our multiphase-level-set–based ultrashort echo-time MRI segmentation (3) with respect to the similarity to μ-CT. Particularly, the Dice similarity coefficients for air cavities were remarkably higher in the present study (0.72 vs. 0.61 in the whole head and 0.74 vs. 0.62 in the cranial region). Although the Hybrid method has a higher Dice coefficient than Unet, Unet performs slightly better than the Hybrid method in activity quantification (Figs. 5 and 6) because the Dice coefficient measures similarity in terms of segmented region overlap but does not measure similarity in terms of quantitative values. As shown in Supplemental Figure 1, the Unet yielded lower bias in average μ-values than the other methods, explaining why the activity maps estimated by the Hybrid method are less accurate than those estimated by Unet despite the better segmentation.

Existing works have applied deep learning to predict CT μ-maps from T1-weighted MR images or from a combination of Dixon and zero-echo-time images (51,52). The approach using Dixon and zero-echo-time images would be more physically relevant than the T1-weighted MRI-based approach because the Dixon and zero-echo-time sequences provide more direct information on tissue composition than does the T1 sequence. The method proposed in this study has the same physical relevance as the Dixon or zero-echo-time approach but does not require the acquisition of additional MR images.

CONCLUSION

In this work, we developed deep CNNs to overcome the main limitations of the MLAA simultaneous reconstruction algorithm, and we verified their feasibility using an 18F-FP-CIT brain PET dataset. The proposed deep learning approach remarkably enhanced the quantitative accuracy of the simultaneously estimated MLAA μ-maps by reducing the noise and crosstalk artifacts. The Hybrid network of CAE and Unet yielded the μ-maps most similar to μ-CT (Dice similarity coefficient in the whole head: 0.79 for bone and 0.72 for air cavities), resulting in only about a 5% error in activity and binding ratio quantification. Because the proposed method requires no transmission data, anatomic images, or atlas/template for PET attenuation correction, it has the potential to replace the conventional attenuation correction methods in stand-alone PET, PET/CT, and PET/MRI.

DISCLOSURE

This work was supported by the National Research Foundation of Korea (NRF), funded by the Korean Ministry of Science and ICT (grants NRF-2014M3C7034000, NRF-2016R1A2B3014645, and NRF-2017M3C7A1044367). The funding source was not involved in the study design or in the collection, analysis, or interpretation of data. No other potential conflict of interest relevant to this article was reported.

Footnotes

  • Published online Feb. 15, 2018.

  • © 2018 by the Society of Nuclear Medicine and Molecular Imaging.

REFERENCES

1. Burger C, Goerres G, Schoenes S, Buck A, Lonn A, Von Schulthess G. PET attenuation coefficients from CT images: experimental evaluation of the transformation of CT into PET 511-keV attenuation coefficients. Eur J Nucl Med Mol Imaging. 2002;29:922–927.
2. Kinahan PE, Hasegawa BH, Beyer T. X-ray-based attenuation correction for positron emission tomography/computed tomography scanners. Semin Nucl Med. 2003;33:166–179.
3. An HJ, Seo S, Kang H, et al. MRI-based attenuation correction for PET/MRI using multiphase level-set method. J Nucl Med. 2016;57:587–593.
4. Keereman V, Fierens Y, Broux T, De Deene Y, Lonneux M, Vandenberghe S. MRI-based attenuation correction for PET/MRI using ultrashort echo time sequences. J Nucl Med. 2010;51:812–818.
5. Martinez-Möller A, Souvatzoglou M, Delso G, et al. Tissue classification as a potential approach for attenuation correction in whole-body PET/MRI: evaluation with PET/CT data. J Nucl Med. 2009;50:520–526.
6. Yang J, Jian Y, Jenkins N, et al. Quantitative evaluation of atlas-based attenuation correction for brain PET in an integrated time-of-flight PET/MR imaging system. Radiology. 2017;284:169–179.
7. Gould KL, Pan T, Loghin C, Johnson NP, Guha A, Sdringola S. Frequent diagnostic errors in cardiac PET/CT due to misregistration of CT attenuation and emission PET images: a definitive analysis of causes, consequences, and corrections. J Nucl Med. 2007;48:1112–1121.
8. Liu C, Pierce LA II, Alessio AM, Kinahan PE. The impact of respiratory motion on tumor quantification and delineation in static PET/CT imaging. Phys Med Biol. 2009;54:7345–7362.
9. McQuaid SJ, Hutton BF. Sources of attenuation-correction artefacts in cardiac PET/CT and SPECT/CT. Eur J Nucl Med Mol Imaging. 2008;35:1117–1123.
10. Keller SH, Holm S, Hansen AE, et al. Image artifacts from MR-based attenuation correction in clinical, whole-body PET/MRI. MAGMA. 2013;26:173–181.
11. Kim JH, Lee JS, Song I-C, Lee DS. Comparison of segmentation-based attenuation correction methods for PET/MRI: evaluation of bone and liver standardized uptake value with oncologic PET/CT data. J Nucl Med. 2012;53:1878–1882.
12. Yoo HJ, Lee JS, Lee JM. Integrated whole body MR/PET: where are we? Korean J Radiol. 2015;16:32–49.
13. Fraum TJ, Fowler KJ, McConathy J. Conspicuity of FDG-avid osseous lesions on PET/MRI versus PET/CT: a quantitative and visual analysis. Nucl Med Mol Imaging. 2016;50:228–239.
14. Defrise M, Rezaei A, Nuyts J. Time-of-flight PET data determine the attenuation sinogram up to a constant. Phys Med Biol. 2012;57:885–899.
15. Rezaei A, Defrise M, Bal G, et al. Simultaneous reconstruction of activity and attenuation in time-of-flight PET. IEEE Trans Med Imaging. 2012;31:2224–2233.
16. Salomon A, Goedicke A, Schweizer B, Aach T, Schulz V. Simultaneous reconstruction of activity and attenuation for PET/MR. IEEE Trans Med Imaging. 2011;30:804–813.
17. Son JW, Kim KY, Yoon HS, et al. Proof-of-concept prototype time-of-flight PET system based on high-quantum-efficiency multi-anode PMTs. Med Phys. 2017;44:5314–5324.
18. Lee JS, Kovalski G, Sharir T, Lee DS. Advances in imaging instrumentation for nuclear cardiology. J Nucl Cardiol. July 17, 2017 [Epub ahead of print].
19. Presotto L, Busnardo E, Perani D, Gianolli L, Gilardi M, Bettinardi V. Simultaneous reconstruction of attenuation and activity in cardiac PET can remove CT misalignment artifacts. J Nucl Cardiol. 2016;23:1086–1097.
20. Rezaei A, Michel C, Casey ME, Nuyts J. Simultaneous reconstruction of the activity image and registration of the CT image in TOF-PET. Phys Med Biol. 2016;61:1852–1874.
21. Boellaard R, Hofman M, Hoekstra O, Lammertsma A. Accurate PET/MR quantification using time of flight MLAA image reconstruction. Mol Imaging Biol. 2014;16:469–477.
22. Nuyts J, Dupont P, Stroobants S, Benninck R, Mortelmans L, Suetens P. Simultaneous maximum a posteriori reconstruction of attenuation and activity distributions from emission sinograms. IEEE Trans Med Imaging. 1999;18:393–403.
23. Chun SY, Kim KY, Lee JS, Fessler JA. Joint estimation of activity distribution and attenuation map for TOF-PET using alternating direction method of multiplier. In: 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI). Piscataway, NJ: IEEE; 2016:86–89.
24. Mehranian A, Zaidi H. Joint estimation of activity and attenuation in whole-body TOF PET/MRI using constrained gaussian mixture models. IEEE Trans Med Imaging. 2015;34:1808–1821.
25. Bengio Y, Courville A, Vincent P. Representation learning: a review and new perspectives. IEEE Trans Pattern Anal Mach Intell. 2013;35:1798–1828.
26. Schmidhuber J. Deep learning in neural networks: an overview. Neural Netw. 2015;61:85–117.
27. Chen H, Zhang Y, Zhang W, et al. Low-dose CT via convolutional neural network. Biomed Opt Express. 2017;8:679–694.
28. Kang E, Min J, Ye JC. A deep convolutional neural network using directional wavelets for low-dose x-ray CT reconstruction. Med Phys. 2017;44:e360–e375.
29. Vincent P, Larochelle H, Lajoie I, Bengio Y, Manzagol P-A. Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion. J Mach Learn Res. 2010;11:3371–3408.
30. Ronneberger O, Fischer P, Brox T. U-net: convolutional networks for biomedical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention (MICCAI). New York, NY: Springer; 2015:234–241.
31. Abadi M, Agarwal A, Barham P, et al. TensorFlow: large-scale machine learning on heterogeneous distributed systems. arXiv.org website. https://arxiv.org/abs/1603.04467. Submitted March 14, 2016. Last revised March 16, 2016. Accessed June 11, 2018.
32. Kingma DP, Ba J. Adam: a method for stochastic optimization. arXiv.org website. https://arxiv.org/abs/1412.6980. Submitted December 22, 2014. Last revised January 30, 2017. Accessed June 11, 2018.
33. Glorot X, Bengio Y. Understanding the difficulty of training deep feedforward neural networks. Proceedings of Machine Learning Research website. http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf?hc_location=ufi. Published 2010. Accessed June 11, 2018.
34. Srivastava N, Hinton GE, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: a simple way to prevent neural networks from overfitting. J Mach Learn Res. 2014;15:1929–1958.
35. Juttukonda MR, Mersereau BG, Chen Y, et al. MR-based attenuation correction for PET/MRI neurological studies with continuous-valued attenuation coefficients for bone through a conversion from R2* to CT-Hounsfield units. Neuroimage. 2015;112:160–168.
36. Catana C, van der Kouwe A, Benner T, et al. Toward implementing an MRI-based PET attenuation-correction method for neurologic studies on the MR-PET brain prototype. J Nucl Med. 2010;51:1431–1438.
37. Kang KW, Lee DS, Cho JH, et al. Quantification of F-18 FDG PET images in temporal lobe epilepsy patients using probabilistic brain atlas. Neuroimage. 2001;14:1–6.
38. Lee JS, Lee DS. Analysis of functional brain images using population-based probabilistic atlas. Curr Med Imaging Rev. 2005;1:81–87.
39. Mohammadi MR, Khaleghi A, Nasrabadi AM, Rafieivand S, Begol M, Zarafshan H. EEG classification of ADHD and normal children using non-linear features and neural network. Biomed Eng Lett. 2016;6:66–73.
40. Yoo Y. On predicting epileptic seizures from intracranial electroencephalography. Biomed Eng Lett. 2017;7:1–5.
41. Acton PD, Newberg A. Artificial neural network classifier for the diagnosis of Parkinson's disease using [99mTc]TRODAT-1 and SPECT. Phys Med Biol. 2006;51:3057–3066.
42. Lee JS, Lee DS, Kim S-K, et al. Localization of epileptogenic zones in F-18 FDG brain PET of patients with temporal lobe epilepsy using artificial neural network. IEEE Trans Med Imaging. 2000;19:347–355.
43. Preis O, Blake MA, Scott JA. Neural network evaluation of PET scans of the liver: a potentially useful adjunct in clinical interpretation. Radiology. 2011;258:714–721.
44. Lee JS, Lee DD, Choi S, Park KS, Lee DS. Non-negative matrix factorization of dynamic images in nuclear medicine. In: 2001 IEEE Nuclear Science Symposium Conference Record. Piscataway, NJ: IEEE; 2001:2027–2030.
45. Lee JS, Lee DS, Ahn JY, et al. Blind separation of cardiac components and extraction of input function from H215O dynamic myocardial PET using independent component analysis. J Nucl Med. 2001;42:938–943.
46. Naganawa M, Kimura Y, Ishii K, Oda K, Ishiwata K, Matani A. Extraction of a plasma time-activity curve from dynamic brain PET images based on independent component analysis. IEEE Trans Biomed Eng. 2005;52:201–210.
47. Su K-H, Wu L-C, Liu R-S, Wang S-J, Chen J-C. Quantification method in [18F]fluorodeoxyglucose brain positron emission tomography using independent component analysis. Nucl Med Commun. 2005;26:995–1004.
48. Michaud J-B, Tetrault M-A, Beaudoin J-F, et al. Sensitivity increase through a neural network method for LOR recovery of ICS triple coincidences in high-resolution pixelated-detectors PET scanners. IEEE Trans Nucl Sci. 2015;62:82–94.
49. Wang Y, Zhu W, Cheng X, Li D. 3D position estimation using an artificial neural network for a continuous scintillator PET detector. Phys Med Biol. 2013;58:1375–1390.
50. Shen D, Wu G, Suk H-I. Deep learning in medical image analysis. Annu Rev Biomed Eng. 2017;19:221–248.
51. Han X. MR-based synthetic CT generation using a deep convolutional neural network method. Med Phys. 2017;44:1408–1419.
52. Leynes AP, Yang J, Wiesinger F, et al. Zero-echo-time and Dixon deep pseudo-CT (ZeDD CT): direct generation of pseudo-CT images for pelvic PET/MRI attenuation correction using deep convolutional neural networks with multiparametric MRI. J Nucl Med. 2018;59:852–858.
  • Received for publication September 28, 2017.
  • Accepted for publication January 25, 2018.