Abstract
241883
Introduction: PET provides molecular-level sensitivity for detecting abnormalities in brain function. However, cost is often an issue because multiple radiotracers are needed for different pathologies, and the structural information that PET lacks must be complemented by additional MR scans. Some studies have used computational methods to generate structural MR from PET images that contain structurally relevant information, such as FDG-PET, making PET scans unpaired with MR usable and reducing clinical cost. Nevertheless, widely used models such as convolutional neural networks are not effective for many other radiotracers, e.g., 18F-Florbetaben PET (FBB-PET), whose uptake patterns do not align directly with brain structures. This study aims to overcome this problem by generating structural MR images from FBB-PET images using an unsupervised generative architecture with domain adaptation (DA). We propose a model that can supply the structural MR modality for more accurate quantitative PET analyses without requiring paired reference images.
Methods: We coupled a domain adversarial neural network approach with CycleGAN, which enables generating structural MR from FBB-PET without supervision by paired reference (ground-truth) images. Generating structural MR from FBB-PET is challenging because FBB-PET uptake aligns with brain structures less closely than FDG-PET uptake does. We tackled this issue by creating pseudo FDG-PET from FBB-PET using CycleGAN and applying DA to minimize the differences between the features represented in the acquired (source) and pseudo (target) FDG-PET domains. The source domain was trained using acquired FDG-PET and CT inputs to synthesize MR, which provided brain structural information to the target domain. Training in the target domain (pseudo FDG-PET + CT) aimed at reducing the differences in feature representation relative to the source domain. During DA training, encoders (E) were trained to map acquired or pseudo FDG-PET images into feature domains, and the generator (G) was trained to synthesize structural MR images from the source features. The feature domains were aligned via adversarial learning, in which a discriminator (D) competing with the source and target domain encoders tried to distinguish between the two. The trained target encoder was then able to produce source-aligned features within the target domain, enabling MR synthesis from the pseudo FDG-PET derived from FBB-PET. The proposed model is shown in Figure 1. The dataset consisted of 104 FBB-PET and a separate set of 104 delayed FDG-PET scans; from each set, 84 were used for training and 20 for testing.
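The domain-adversarial alignment step described above can be sketched in PyTorch. This is a minimal illustrative sketch, not the authors' implementation: the module shapes, channel counts (2-channel PET + CT input), and loss weights are all assumptions, and the real model would use deeper 3D networks and the CycleGAN-generated pseudo FDG-PET as the target input.

```python
# Sketch of domain-adversarial MR synthesis: a discriminator D tries to tell
# source (acquired FDG-PET + CT) features from target (pseudo FDG-PET + CT)
# features, while the target encoder is trained to fool it. All architecture
# details here are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Encodes a 2-channel (PET + CT) slice into a feature map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """Decodes aligned features into a 1-channel synthetic MR slice."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )
    def forward(self, f):
        return self.net(f)

class DomainDiscriminator(nn.Module):
    """Predicts whether a feature map came from the source or target encoder."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(32, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
        )
    def forward(self, f):
        return self.net(f)

def train_step(E_src, E_tgt, G, D, src, src_mr, tgt, opt_g, opt_d):
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
    ones = torch.ones(src.size(0), 1)
    zeros = torch.zeros(tgt.size(0), 1)

    # Discriminator step: label source features 1, target features 0.
    f_src, f_tgt = E_src(src).detach(), E_tgt(tgt).detach()
    d_loss = bce(D(f_src), ones) + bce(D(f_tgt), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: supervised MR synthesis on the source domain, plus an
    # adversarial term pushing target features to look like source features.
    g_loss = l1(G(E_src(src)), src_mr) + bce(D(E_tgt(tgt)), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

At inference, MR would be synthesized for the target domain as `G(E_tgt(pseudo_fdg_ct))`, since the adversarial training has aligned the target encoder's features with those the generator was trained on.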
Results: The proposed model successfully aligned the feature domains of the pseudo FDG-PET created from FBB-PET with those of the acquired FDG-PET, and synthesized structural MR images with higher visual similarity than MR synthesized from CT together with either 1) FDG-PET transformed from FBB-PET by CycleGAN without DA, or 2) FBB-PET directly as the input (Figure 2a). It generated anatomical details resembling the reference MR images noticeably better than the alternatives (Figure 2b). Gray matter segmentation accuracy was also highest with the proposed model, demonstrating higher anatomical consistency with the ground truth than the other methods (Figure 2c).
Conclusions: We propose an unsupervised deep learning approach that synthesizes structural MR from FBB-PET/CT via a domain adversarial neural network. The proposed model outperformed the compared alternatives in MR synthesis, as evaluated by visual similarity and the accuracy of reproducing structural details of the brain. The model is advantageous for synthesizing cross-modality images in the absence of reference acquisitions, and may serve as a basis for more robust generative models that generalize to other PET tracers.