Abstract
2382
Objectives Pre-clinical studies usually use unconscious small animals to avoid animal motion. However, anesthesia changes the biodistribution of the radiotracers. A project for the acquisition of PET images from freely moving, conscious mice is presented, which uses multiple-camera-based 3D animal tracking in combination with a multi-wire proportional chamber-based small-animal PET scanner with a large field of view.
Methods 1. A transparent animal chamber was made which allows the mice full freedom of movement inside the scanner. 2. The camera system was calibrated and corrected for optical errors, such as lens distortion and light refraction at the chamber walls, using hardware phantoms. 3. Using the images from multiple cameras, the 3D position was determined by employing triangulation methods. 4. Using segmentation techniques based on Otsu's automatic thresholding method, reflective markers attached to the animals were tracked with video cameras to estimate the motion of the animals. 5. The methods were validated using a hardware phantom and ray-tracing-based software simulation data.
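The triangulation step (3) can be sketched with the standard linear (DLT) method: each calibrated camera contributes two linear constraints on the homogeneous 3D marker position, and the least-squares solution is read off an SVD. The camera intrinsics, baseline, and marker coordinates below are made-up illustration values, not parameters of the described system.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two camera views.

    P1, P2 -- 3x4 projection matrices; x1, x2 -- (u, v) pixel coordinates.
    """
    # Each view contributes two linear constraints on the homogeneous point X
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Least-squares solution: right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Illustrative two-camera setup (numbers are hypothetical): both cameras
# share intrinsics K; camera 2 is shifted 0.5 units along the x axis.
K = np.array([[800.0,   0.0, 400.0],
              [  0.0, 800.0, 300.0],
              [  0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

marker = np.array([0.1, -0.05, 2.0])       # known 3D marker position
xh1 = P1 @ np.append(marker, 1.0)          # project into camera 1
xh2 = P2 @ np.append(marker, 1.0)          # project into camera 2
x1, x2 = xh1[:2] / xh1[2], xh2[:2] / xh2[2]

recovered = triangulate(P1, P2, x1, x2)    # should recover `marker`
```

In the real system, the refraction correction at the chamber walls would have to be applied to the pixel coordinates (or folded into the projection model) before this linear step.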
Results 1. The initial experiments with hardware phantoms showed that a 3D position estimation accuracy of about 1 mm was achieved using a rough method for the correction of lens distortion. 2. Software simulation results showed that the effect of light refraction at the chamber walls is significant: ignoring it can lead to errors of up to 2.4 mm in 3D position estimation. 3. Statistical analysis of an initial experiment with a wild-type mouse showed that the mouse head moved at less than 97 mm/s for over 92% of the time. This result was used to set the frame rate of the video cameras to ≥100 fps. 4. It was further established that a resolution of at least 800×600 pixels is required when using a wide-angle lens with an 87°×67° field of view.
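The frame-rate choice in result 3 follows from a simple worst-case bound, sketched below: at 97 mm/s and 100 fps the marker moves just under 1 mm between frames, i.e. within the stated accuracy target. The working distance used for the per-pixel footprint is a hypothetical value for illustration; the abstract does not state one.

```python
import math

# Figures quoted in the abstract: head speed < 97 mm/s for >92% of the time,
# camera frame rate >= 100 fps, 800x600 pixels, 87 degree horizontal FOV.
v_max_mm_s = 97.0
fps = 100.0

# Worst-case marker displacement between consecutive frames
step_mm = v_max_mm_s / fps   # just under the 1 mm accuracy target

# Hypothetical working distance from lens to marker (NOT given in the abstract)
d_mm = 150.0
fov_deg = 87.0
h_pixels = 800

# Approximate on-object footprint of one pixel at that distance
pixel_mm = 2.0 * d_mm * math.tan(math.radians(fov_deg / 2.0)) / h_pixels
```

Under these assumptions both the inter-frame displacement and the pixel footprint stay below the 1 mm accuracy figure, which is consistent with the ≥100 fps and ≥800×600 pixel requirements.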
Conclusions The initial results of the multiple-camera-based tracking system are presented. They show that an accuracy of 1 mm in position estimation is currently achievable.