Abstract 1731
Objectives: Maximum intensity projection (MIP) volume renderings of PET data are routinely used and valuable for image interpretation. However, fusing MIP PET data with MIP-rendered CT data has been limited by artifacts from the bed and bed linen, which obscure potentially vital information in the renderings, and by the inability to interact with, manipulate, or navigate the fused MIP data in real time. Our aim was to develop an automated approach to removing the bed/linen artifacts and a viewer for interacting with and manipulating the fused renderings.

Methods: We used the fast built-in rendering and fusion capabilities of a consumer graphics card (1.8-GHz Intel Centrino notebook; 64-MB ATI Radeon 9600 graphics card; 1 GB of RAM), applying texture-based rendering to the aligned PET and CT data individually; these data were then fused by voxel blending. Prior to rendering, the CT bed/linen artifact was automatically removed by adaptive thresholding and template matching against a bed template.

Results: The developed algorithm successfully removes the bed/linen artifacts from MIP-rendered CT and fused data, yielding artifact-free renderings. These renderings can then be interactively manipulated: rotation, scaling, look-up table (LUT) selection, fusion ratio, plane clipping, and brightness/contrast. MIP-rendered whole-body PET/CT animations of 512×512×261 voxels, displayed in a 600×600-pixel window, achieved 2 to 5 frames per second.

Conclusions: Our viewer allows interactive MIP rendering of PET/CT with automated removal of the CT bed/linen. We suggest that this viewer will improve interpretation of PET/CT studies by placing the MIP PET data in an anatomical context that is free of artifacts.
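The pipeline above (bed removal, MIP, voxel blending) can be sketched in a few small functions. This is a minimal CPU illustration, not the authors' implementation: all function names are hypothetical, a fixed -500 HU threshold and a largest-connected-component heuristic stand in for the paper's adaptive thresholding and bed-template matching, and NumPy array operations stand in for GPU texture-based rendering.

```python
import numpy as np
from collections import deque

def largest_component(mask):
    """Return a boolean mask of the largest 6-connected component
    in a 3-D boolean array (simple BFS labelling)."""
    labels = np.zeros(mask.shape, dtype=np.int32)
    best_label, best_size, next_label = 0, 0, 1
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for start in zip(*np.nonzero(mask)):
        if labels[start]:          # already visited
            continue
        labels[start] = next_label
        size, queue = 0, deque([start])
        while queue:
            z, y, x = queue.popleft()
            size += 1
            for dz, dy, dx in offsets:
                n = (z + dz, y + dy, x + dx)
                if (all(0 <= n[i] < mask.shape[i] for i in range(3))
                        and mask[n] and not labels[n]):
                    labels[n] = next_label
                    queue.append(n)
        if size > best_size:
            best_label, best_size = next_label, size
        next_label += 1
    return labels == best_label

def remove_bed(ct, threshold=-500.0):
    """Keep only the largest connected component above `threshold`
    (assumed to be the patient body) and set everything else -- bed,
    linen -- to the volume minimum. The fixed threshold is a stand-in
    for the adaptive thresholding + template matching described above."""
    mask = ct > threshold
    if not mask.any():
        return ct
    return np.where(largest_component(mask), ct, ct.min())

def mip(volume, axis=0):
    """Maximum intensity projection along one axis."""
    return volume.max(axis=axis)

def fuse(pet_img, ct_img, alpha=0.5):
    """Blend two normalized MIP images; `alpha` is the fusion ratio."""
    def norm(im):
        rng = np.ptp(im)
        return (im - im.min()) / rng if rng else np.zeros_like(im)
    return alpha * norm(pet_img) + (1 - alpha) * norm(ct_img)
```

In an interactive viewer the fusion ratio, projection axis, and windowing would be adjusted per frame; here they are plain function parameters.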
Research Support (if any): ARC and PolyU/UGC grants.
- Society of Nuclear Medicine, Inc.