Abstract 3292
Introduction: The accuracy of images acquired on PET imaging systems is usually assessed using realistic phantoms. For brain PET studies, 3D anthropomorphic brain phantoms have been developed. One of the most popular is the Hoffman phantom, and more recently, a phantom with a realistic head contour and attenuation properties was developed by Iida et al. However, these phantoms do not allow for an easy change in contrast (e.g., the Iida phantom only allows filling of a gray matter compartment). Although other phantoms contain separate chambers that allow different contrasts, these chambers are usually separated by walls that do not realistically represent the brain, and changing the contrast remains cumbersome. The objective of this work is to develop an easy and flexible technique for creating phantom data with different contrasts, as needed to test the quantitative accuracy of iterative reconstruction algorithms. This is accomplished by using multiple closely related phantoms (i.e., with minimal overall attenuation differences) with different fillable chambers. List mode data are acquired independently for each phantom, then re-sampled (to produce arbitrary contrast) and merged into one list mode file for reconstruction. Here, we test this approach with simulated data based on the Iida phantom.
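To illustrate how the re-sampling fraction sets the contrast (a worked example added for clarity; the symbols A, f, and C are our notation, not part of the original abstract): suppose the GM-only and GM+WM acquisitions are simulated with the same activity concentration A. If the GM+WM list mode file is retained with fraction f and merged with the full GM-only file, the merged GM signal is proportional to A(1 + f) and the merged WM signal to A·f, giving a GM:WM contrast C = (1 + f)/f. Solving for f yields f = 1/(C − 1), i.e., f = 1 for 2:1, f = 1/3 for 4:1, and f = 1/7 for 8:1, consistent with the down-sampling factor used in the Methods.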
Methods: The proposed method was validated using data sets from 2 simulated objects: gray matter only (GM), and GM combined with white matter (GM+WM) (Fig. 1A,B), simulated with the same activity concentration (18F: ~105 kBq/mL) and the same attenuation properties, using MOLAR configured for the HRRT. Resolution was modeled with an isotropic Gaussian. Randoms were not included, but separate simulations were performed without and with scatter, based on real data. Three different GM:WM contrasts were investigated: 2:1, 4:1, and 8:1. To create these contrasts, the GM+WM list mode file was down-sampled by a factor of 1/(contrast − 1) and then merged with the GM-only data set. Images were reconstructed with MOLAR with appropriate corrections (i.e., attenuation, resolution modeling, and scatter corrections when applicable) for 2 iterations with 30 subsets. For evaluation, reference images were generated by adding the 2 separately reconstructed simulated objects (Fig. 1C,D), with the GM+WM image weighted according to the contrast. The percent difference between the images reconstructed from the merged list mode data and these reference images was then calculated for a set of ROIs derived from the AAL template registered to the phantom using SPM12.
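A minimal sketch of the down-sampling/merging step and the ROI comparison, assuming a simple one-event-per-line text list mode format and NumPy image arrays; the file names, event format, and function names are illustrative assumptions and do not correspond to the actual MOLAR/HRRT list mode format or tooling:

```python
import random

def merge_listmode(gm_path, gmwm_path, out_path, contrast, seed=0):
    """Down-sample the GM+WM list mode stream by 1/(contrast - 1) and
    merge it with the full GM-only stream (illustrative text format:
    one coincidence event per line)."""
    keep_fraction = 1.0 / (contrast - 1.0)  # e.g., 1/3 for a 4:1 GM:WM contrast
    rng = random.Random(seed)
    with open(out_path, "w") as out:
        with open(gm_path) as gm:           # keep every GM-only event
            for event in gm:
                out.write(event)
        with open(gmwm_path) as gmwm:       # keep a random subset of GM+WM events
            for event in gmwm:
                if rng.random() < keep_fraction:
                    out.write(event)

def roi_percent_difference(test_img, ref_img, roi_mask):
    """Mean percent difference within one ROI (NumPy arrays), using the
    weighted sum of the separate reconstructions as the reference."""
    test_mean = test_img[roi_mask].mean()
    ref_mean = ref_img[roi_mask].mean()
    return 100.0 * (test_mean - ref_mean) / ref_mean

# Example (hypothetical file names): build a 4:1 GM:WM contrast data set.
# merge_listmode("gm_only.lm", "gm_wm.lm", "merged_4to1.lm", contrast=4.0)
```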
Results: Fig. 2 shows the reconstructed images from the merged list mode data (top row) and the weighted sum of the separate reconstructions at matched contrast (bottom row), with and without scatter contributions; the two sets of images are visually consistent. To quantify this similarity, Table 1 shows the percent differences between images for selected GM ROIs. The differences between our proposed method and the weighted summed images are minimal over all contrasts, and only very slightly higher when scatter is included; AVG±SD (across 8 ROIs), no scatter/with scatter: 2:1, 0.04±0.21%/0.41±0.20%; 4:1, 0.15±0.48%/0.69±0.40%; 8:1, 0.12±0.49%/0.45±0.29%.
Conclusions: These preliminary results show that merging separate list mode data sets offers the potential to easily customize phantom studies with different contrasts without compromising quantitative accuracy. In our next steps, careful consideration will be given to accurate corrections for random events and deadtime, as well as to the effects of slight differences in attenuation properties on the accuracy of attenuation and scatter corrections. In addition, physical high-resolution GM-only and GM+WM phantoms are under development for further testing. This approach to producing phantom data with arbitrary contrast will allow thorough testing of the accuracy of iterative reconstruction algorithms.