Abstract
P1102
Introduction: Radiopharmaceutical therapy (RPT) has shown promising outcomes in the palliative treatment of cancer patients. Personalized dosimetry in theranostics is essential for estimating the radiation doses delivered to tumors and organs at risk. To this end, two or preferably three quantitative SPECT images are required to generate time-integrated activity (TIA) maps, obtained by fitting a curve that models the pharmacokinetics of the radiotracer in the patient's body. Registration of SPECT/CT images across time points is necessary to extract the time-activity curves (TACs) required for both organ-level and voxel-wise dosimetry. As such, these serial images should cover the same axial field-of-view (FOV), typically ~40 cm per bed position on commonly used SPECT/CT cameras. In practice, however, mismatches occur (Figure 1a). We evaluated this mismatch in patients treated with Lu-177 RPT and developed a deep neural network to address the problem.
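For context, the sketch below shows how a TIA value might be derived from a three-point TAC, assuming a mono-exponential washout model fitted with SciPy; the function and data (mono_exp, times_h, activities_mbq) are illustrative placeholders, not values from this study.

import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, a0, lam):
    # Mono-exponential washout: A(t) = A0 * exp(-lambda * t)
    return a0 * np.exp(-lam * t)

# Hypothetical three-point TAC (hours post-injection vs. organ activity in MBq),
# mirroring a D0/D1/D3 schedule; not patient data
times_h = np.array([4.0, 24.0, 72.0])
activities_mbq = np.array([120.0, 85.0, 30.0])

# Fit the effective clearance constant (physical decay + biological washout)
(a0, lam), _ = curve_fit(mono_exp, times_h, activities_mbq, p0=(150.0, 0.02))

# TIA is the analytic integral of A0 * exp(-lam * t) from 0 to infinity
tia_mbq_h = a0 / lam
print(f"A0 = {a0:.1f} MBq, lambda = {lam:.4f} 1/h, TIA = {tia_mbq_h:.0f} MBq*h")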
Methods: We evaluated the prevalence and severity of FOV mismatch in 23 patients (60 treatment sessions and 180 time-point acquisitions) treated with Lu-177. The patients received an average of 7.59 GBq of Lu-177 PSMA or Lu-177 DOTATATE and underwent SPECT/CT acquisitions at three time points: the day of injection (D0), the day after injection (D1), and 3 days after injection (D3). The D1 time point served as the reference, and the D0 and D3 images were registered to D1 using the Elastix toolbox. Subsequently, the common axial FOV across the three time points was estimated and compared with the typical axial FOV of 39.8 cm per bed position to calculate the FOV missed due to scan-range mismatch. Using pre-trained segmentation models, we segmented five anatomical markers (clavicle, lungs, liver, bladder, and femoral heads) on 783 total-body CT images acquired on two Siemens SPECT/CT cameras (341 and 442 images, respectively). The axial segmentations were projected onto the localizer image using the position information stored in the DICOM headers, after visual assessment and correction where necessary. The localizer images and organ segmentations were used to train a transformer-based Swin UNETR architecture with an input size of 256 × 512. Training ran for 200 epochs with the Adam optimizer, a learning rate of 5e-4, and a 90% learning-rate decay, implemented in Python with the MONAI platform (version 1.1.0). The data were split randomly, with 20% of the images from each camera held out as the test set. Model performance was evaluated in terms of the Dice coefficient and the accuracy of scan-range boundaries derived from the segmented organs on the localizer image.
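A minimal sketch of the described training setup is given below, instantiating a 2D Swin UNETR in MONAI 1.1.0 with the stated input size, optimizer, and learning rate; the DiceCELoss, the exponential interpretation of the 90% decay (gamma=0.9), and the dummy data loader are assumptions for illustration, not details confirmed by the abstract.

import torch
from torch.utils.data import DataLoader, TensorDataset
from monai.losses import DiceCELoss
from monai.networks.nets import SwinUNETR

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# 2D Swin UNETR on localizer images: 1 input channel, 6 output classes
# (background + clavicle, lungs, liver, bladder, femoral heads)
model = SwinUNETR(
    img_size=(256, 512),  # img_size is still required in MONAI 1.1.0
    in_channels=1,
    out_channels=6,
    spatial_dims=2,
).to(device)

loss_fn = DiceCELoss(to_onehot_y=True, softmax=True)  # assumed loss choice
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
# The reported "90% decay" is interpreted here as an exponential schedule -- assumption
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)

# Dummy loader so the sketch runs end-to-end; replace with real localizer data
train_loader = DataLoader(
    TensorDataset(torch.rand(8, 1, 256, 512), torch.randint(0, 6, (8, 1, 256, 512))),
    batch_size=2,
)

for epoch in range(200):
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()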
Results: FOV mismatches greater than 3 cm were observed in ~35% of the cases, with an average mismatch of 4.3 cm (range: 3.1 to 6.0 cm). The average Dice scores for the clavicle, lungs, liver, bladder, and femoral heads were 0.72 ± 0.08, 0.92 ± 0.08, 0.81 ± 0.11, 0.71 ± 0.14, and 0.79 ± 0.10, respectively. Figure 1b shows examples of the segmented images and the selected scan-range boundaries. Figure 1c depicts the average missed FOV using our segmentation and scan-range selection model. The average error in scan-range selection by the proposed model was 0.02 cm and 1.37 cm in terms of signed and absolute distance from the standard-of-reference boundaries, respectively.
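For clarity, the two reported metrics could be computed as in the following sketch, assuming binary masks and boundary positions expressed in cm; dice and boundary_errors are illustrative helper names, and the sample values are not study data.

import numpy as np

def dice(pred, ref):
    # Dice coefficient between two binary masks
    inter = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * inter / denom if denom else 1.0

def boundary_errors(pred_cm, ref_cm):
    # Mean signed and mean absolute boundary error (cm) vs. reference positions
    diff = pred_cm - ref_cm
    return diff.mean(), np.abs(diff).mean()

# Illustrative overlap of two small binary masks
mask_a = np.zeros((4, 4), bool); mask_a[1:3, 1:3] = True
mask_b = np.zeros((4, 4), bool); mask_b[1:3, 1:4] = True
print(f"Dice: {dice(mask_a, mask_b):.2f}")

# Illustrative predicted vs. reference axial boundary positions (cm)
pred = np.array([102.5, 141.0, 99.8])
ref = np.array([101.0, 142.2, 100.4])
signed, absolute = boundary_errors(pred, ref)
print(f"signed: {signed:.2f} cm, absolute: {absolute:.2f} cm")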
Conclusions: Serial imaging is essential for personalized dosimetry in RPT, and FOV mismatches among serial acquisitions can degrade the accuracy of image registration, causing errors in the calculated TIA maps and, in turn, in the dose calculations. We developed a deep learning-guided approach that overcomes axial FOV mismatch through automatic scan-range selection, suggesting reproducible scan ranges for serial acquisitions with reasonable accuracy.