Abstract
Objectives Cochlear implants (CIs) are now a standard treatment for patients with high-frequency sensorineural hearing loss (partial deafness). Nevertheless, auditory performance after implantation varies considerably. The central neuronal mechanisms involved in speech analysis in CI patients require further investigation.
Methods Four experienced unilateral CI users (3F, 1M; 8-10 months post-op, >60% monosyllabic word recognition) and five normal-hearing subjects (3F, 2M) participated in an [15O]H2O PET study. Five rCBF PET measurements were obtained in each patient, comprising three presentations of monosyllabic words (Polish Pruszewicz test) and two silent conditions. In the control group, three additional emission scans were performed with the words vocoded so that they resembled speech delivered by a CI. The study was performed on a Siemens Biograph mCT 128 scanner. A bolus of approximately 550 MBq of [15O]H2O in 2.5 ml of saline was injected for each emission scan. List-mode data acquisition (120 epochs) started 30 s after injection, 5 s before the rising phase of the radioactivity head curve. The interval between consecutive emission scans was 10 minutes. Images were reconstructed with a 3D OSEM+PSF algorithm. A full-factorial design implemented in the SPM12 package was used for statistical analysis.
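The abstract does not specify how the stimuli were modified; a noise-excited channel vocoder is a common way to simulate CI-processed speech, and the Python sketch below illustrates that general approach. The channel count, band edges, and envelope cutoff are illustrative assumptions, not parameters taken from the study.

```python
# Minimal sketch of a noise-excited channel vocoder, one common way to make
# speech resemble the signal delivered by a cochlear implant.
# The parameters (8 channels, 100-8000 Hz bands, 50 Hz envelope cutoff) are
# illustrative assumptions, not values taken from the study.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def vocode(speech, fs, n_channels=8, f_lo=100.0, f_hi=8000.0, env_cut=50.0):
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)    # log-spaced band edges
    env_sos = butter(4, env_cut, btype="low", fs=fs, output="sos")
    noise = np.random.randn(len(speech))
    out = np.zeros(len(speech))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, speech)             # analysis band
        env = sosfiltfilt(env_sos, np.abs(band))         # temporal envelope
        carrier = sosfiltfilt(band_sos, noise)           # band-limited noise carrier
        out += np.clip(env, 0.0, None) * carrier         # modulate and sum channels
    return out / np.max(np.abs(out))                     # peak-normalize
```

An eight-channel noise vocoder is only one plausible choice; sine carriers or a different channel count would serve the same illustrative purpose.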
Results In CI users, the contrast of monosyllabic words vs silence (A) revealed increased activation in the bilateral superior (STL) and middle temporal lobes, the bilateral inferior frontal lobes, the SMA, and the thalamus (p<0.001 uncorrected, cluster extent threshold = 100 voxels). In the normal-hearing group, the same contrast (B) demonstrated activation in the bilateral STL and the thalamus. The contrast of vocoded monosyllabic words vs silence (C) showed more extensive responses in the STLs.
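As an illustration of the reported threshold (p<0.001 uncorrected with a 100-voxel extent), the sketch below shows how such a voxel-plus-extent criterion can be applied to a 3D statistical map. It is a generic NumPy/SciPy illustration, not the SPM12 implementation used in the study.

```python
# Illustrative sketch of the reported threshold (p < 0.001 uncorrected,
# cluster extent >= 100 voxels) applied to a 3D map of uncorrected p-values.
# A generic NumPy/SciPy illustration, not the SPM12 implementation.
import numpy as np
from scipy.ndimage import label

def threshold_map(p_map, p_thresh=0.001, extent=100):
    supra = p_map < p_thresh                  # voxel-level uncorrected threshold
    labels, _ = label(supra)                  # connected suprathreshold clusters
    sizes = np.bincount(labels.ravel())       # cluster sizes (index 0 = background)
    big = np.nonzero(sizes >= extent)[0]
    big = big[big != 0]                       # drop the background label
    return np.isin(labels, big)               # boolean mask of surviving clusters
```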
Conclusions In normal-hearing individuals, processing of natural and CI-like speech involves similar brain regions, with the latter requiring only more extensive auditory processing. When listening to regular speech, CI users engage extensive parts of the auditory cortex, as well as additional regions involved in semantic decision tasks and articulatory control. This is probably due to the degraded quality of the signal delivered by the implant, but also to deprivation-related adaptation of the CNS.