
Medical Image Analysis

Volume 13, Issue 5, October 2009, Pages 730-743

Automated voxel-based 3D cortical thickness measurement in a combined Lagrangian–Eulerian PDE approach using partial volume maps

https://doi.org/10.1016/j.media.2009.07.003

Abstract

Accurate cortical thickness estimation is important for the study of many neurodegenerative diseases. Many approaches have been proposed previously, which can be broadly categorised as mesh-based and voxel-based. While mesh-based approaches can potentially achieve subvoxel resolution, they usually lack the computational efficiency needed for clinical applications and large database studies. In contrast, voxel-based approaches are computationally efficient but less accurate. The aim of this paper is to propose a novel voxel-based method based upon the Laplacian definition of thickness that is both accurate and computationally efficient. A framework was developed to estimate and integrate the partial volume information within the thickness estimation process. Firstly, in a Lagrangian step, the boundaries are initialized using the partial volume information. Subsequently, in an Eulerian step, a pair of partial differential equations is solved on the remaining voxels to compute the thickness. Using partial volume information significantly improved the accuracy of the thickness estimation on synthetic phantoms, and improved reproducibility on real data. Significant differences in the hippocampus and temporal lobe between healthy controls (NC), patients with mild cognitive impairment (MCI) and Alzheimer’s disease (AD) patients were found on clinical data from the ADNI database. We compared our method against the Eulerian approach in terms of precision, computational speed and statistical power. With a slight increase in computation time, accuracy and precision were greatly improved. Power analysis demonstrated the ability of our method to yield statistically significant results when comparing AD and NC. Overall, our method reduces by 25% the number of samples required to find significant differences between the two groups.

Introduction

The measurement of cortical thickness from 3D magnetic resonance (MR) images can be used to aid diagnosis or to perform longitudinal studies of a wide variety of neurodegenerative diseases, such as Alzheimer’s disease. Manual measurements are labour intensive and have a high variability. Accurate and automated software that maps the three-dimensional cortical thickness of the entire brain is thus desirable.

The approaches used for cortical thickness estimation in the literature can be broadly categorised as mesh-based and voxel-based. One common aspect of these techniques is the need for an initial classification of the different brain tissue types, namely gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF). Automatic classification and cortical thickness measurement from MR images are affected by artifacts such as intensity inhomogeneity, noise and the partial volume (PV) effect. PV introduces considerable errors in the measurement because of the finite resolution of MR images (∼1 mm) compared to the size of the cortical structures (∼2−3 mm). Typically, two sulcal banks in contact within a voxel may appear connected if the CSF is not detected within the GM. This results in erroneously high thickness estimates or topologically incorrect surfaces of the brain.

Mesh-based approaches use a deformable mesh to extract the inner and outer boundaries of the cortex before measuring thickness. Deformable model techniques fit closed parametric surfaces to the boundaries between regions (Pham et al., 2000), such as the inner (GM/WM) and outer (GM/CSF) boundaries of the cortex. The main advantage of deformable models is the smoothness constraint, which provides robustness to noise and false edges. They are also capable of operating in the continuous spatial domain and can therefore achieve subvoxel resolution. However, deformable models are complex, incorporating methods to prevent self-intersection of surfaces or to correct topology. Another disadvantage of some of these approaches is the need for manual interaction to initialize the model and to choose appropriate parameters. Some implementations impose thickness constraints on the cortex (Zeng et al., 1999, MacDonald et al., 2000) in order to model sulci. Fischl and Dale (2000) imposed a self-intersection constraint, forcing the surfaces to meet in the middle of sulci. A detailed comparison of three well-established methods, CLASP (Kim et al., 2005), BrainVISA (Mangin et al., 1995) and FreeSurfer (Dale et al., 1999, Fischl and Dale, 2000), is presented in Lee et al. (2006). To our knowledge, CLASP (Kim et al., 2005) is the only approach that explicitly models the partial volume effect when fitting the deformable mesh. It is, however, computationally intensive, with a typical running time of over 20 h on a standard PC, as reported in Lee et al. (2006).

In contrast, voxel-based techniques (Hutton et al., 2008, Diep et al., 2007, Lohmann et al., 2003, Srivastava et al., 2003, Hutton et al., 2002) operate directly on the 3D voxel grid of the image and are therefore more computationally efficient. These methods are, however, less robust to noise and mis-segmentation, as they typically lack the mechanisms required to detect and correct topological errors. They are also hampered by the limited MR resolution in small and highly convoluted structures, such as the GM sulci, where partial volume effects are preponderant.

Cortical thickness can be estimated using several metrics. The definition of thickness based on Laplace’s equation, simulating the laminar structure of the cortex and first introduced by Jones et al. (2000), has gained wide acceptance. Haidar and Soul (2006) showed that the Laplacian approach is the most robust definition of thickness, compared to nearest neighbour and orthogonal projections, with respect to variations in MR acquisition parameters. Lerch and Evans (2005) performed cortical surface reconstruction and compared six cortical thickness metrics. They found that the coupled surfaces method was the most reproducible, followed by the Laplacian definition. However, the coupled surfaces method is highly dependent on the scheme used to construct the surface.

Whereas Jones et al. (2000) explicitly traced streamlines (a Lagrangian approach), Yezzi and Prince (2003) proposed a more efficient method that solves a pair of first-order linear partial differential equations (PDEs) without any explicit construction of correspondences (an Eulerian approach). The major drawback of the Eulerian approach is its limited accuracy, especially within thin structures, since it is solved over a discrete grid. The initialization of the PDEs also affects accuracy when the distance to the true boundary is not explicitly computed, i.e. when the PV effect is ignored. A hybrid Eulerian–Lagrangian approach was proposed by Rocha et al. (2007) to improve accuracy while preserving efficiency, but subvoxel initialization at the tissue boundaries required a precomputed surface. For clinical applications, precision is of utmost importance: for example, the expected change in GM thickness during the early stages of Alzheimer’s disease has been shown to be less than 1 mm in most brain regions (Lerch et al., 2005, Singh et al., 2006).
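The Eulerian scheme described above can be sketched compactly: solve Laplace’s equation between the two boundaries, then transport two arc lengths L0 and L1 along the normalised gradient field and sum them. The following 2D NumPy toy is our own illustrative sketch, not the paper’s implementation (which operates on the full 3D grid in C++/ITK): the slab geometry, Jacobi relaxation and first-order upwind updates are all assumptions made for brevity.

```python
import numpy as np

def eulerian_thickness(gm, inner, outer, h=1.0, n_iter=500):
    """Toy 2D version of the Eulerian scheme of Yezzi and Prince (2003).

    gm, inner, outer: boolean masks for the grey matter and the two
    boundary regions. Returns T = L0 + L1 on the grid.
    """
    # 1) Jacobi relaxation of Laplace's equation inside the GM mask,
    #    with u = 0 on the inner boundary and u = 1 on the outer one.
    u = np.zeros(gm.shape)
    u[outer] = 1.0
    for _ in range(n_iter):
        u_avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                        np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u[gm] = u_avg[gm]
        u[inner], u[outer] = 0.0, 1.0
    # 2) Normalised gradient of u: the tangent field of the streamlines.
    gy, gx = np.gradient(u, h)
    norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-12
    ty, tx = gy / norm, gx / norm
    # 3) First-order upwind iterations for L0 (arc length from the inner
    #    boundary, transported along +T) and L1 (arc length from the
    #    outer boundary, transported along -T); boundary voxels stay 0.
    L0, L1 = np.zeros(gm.shape), np.zeros(gm.shape)
    for _ in range(n_iter // 10):
        for L, sy, sx in ((L0, ty, tx), (L1, -ty, -tx)):
            up_y = np.where(sy >= 0, np.roll(L, 1, 0), np.roll(L, -1, 0))
            up_x = np.where(sx >= 0, np.roll(L, 1, 1), np.roll(L, -1, 1))
            L_new = (h + np.abs(sy) * up_y + np.abs(sx) * up_x) \
                    / (np.abs(sy) + np.abs(sx) + 1e-12)
            L[gm] = L_new[gm]
    return L0 + L1

# Flat slab of 8 pure-GM rows between two 2-row boundary regions. The
# measured thickness is the streamline length between the centres of the
# two boundary layers (9 voxels here), not the true 8-voxel GM extent:
# this is precisely the centre-based initialization error discussed above.
gm = np.zeros((12, 8), bool); gm[2:10] = True
inner = np.zeros((12, 8), bool); inner[:2] = True
outer = np.zeros((12, 8), bool); outer[10:] = True
T = eulerian_thickness(gm, inner, outer)
```

On this slab the streamlines are vertical, so the upwind recursion reduces to counting voxels from each boundary, and the overestimation by one voxel spacing (half a voxel on each side) motivates the PV-aware initialization proposed in this paper.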

Building upon Yezzi and Prince (2003), we have improved the precision of voxel-based thickness measurement by taking into account the PV coefficients at the GM boundaries to appropriately initialize the PDEs, without prior upsampling and interpolation of the images. Our scheme can be considered a combined Lagrangian–Eulerian approach: the boundaries are initialized by explicit integration along the streamlines, achieving subvoxel accuracy, and for the remaining grid points two PDEs are solved as in the Eulerian approach, preserving computational efficiency. Unlike Rocha et al. (2007), the detection of the boundaries is performed within the gray matter partial volume map, without prior delineation of the surface. Rocha et al. (2007) additionally proposed a correction for divergent streamlines in thick and irregular structures, introducing a distance tolerance (λ). In cortical thickness estimation this is unlikely to occur, as the GM, which is a few mm thick, spans only one or two voxels (for a typical full-brain MR resolution in a clinical setting: 1 mm × 1 mm × 1.2 mm).
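A one-dimensional toy example illustrates why PV-aware boundary placement matters. Assuming a simple linear mixing model in which a mixed boundary voxel with GM fraction p contributes only p·h of tissue (our illustrative assumption; the paper performs the equivalent correction in 3D by explicit integration along the streamlines), the subvoxel GM extent along a streamline can be recovered from the fractions alone:

```python
def gm_extent_1d(pv, h=1.0):
    """Hypothetical 1D analogue of PV-aware boundary initialization.

    pv: GM fractional content of consecutive voxels along a streamline;
    the first and last entries are the mixed boundary voxels. Interior
    pure-GM voxels contribute a full voxel spacing h, whereas each
    boundary voxel contributes only its GM fraction of a spacing.
    """
    return (len(pv) - 2) * h + (pv[0] + pv[-1]) * h

# Profile with boundary fractions 0.3 and 0.2: under this mixing model
# the GM extent is 4.5 mm, whereas a centre-to-centre measure between
# the two boundary voxels would report 5 mm.
profile = [0.3, 1.0, 1.0, 1.0, 1.0, 0.2]
thickness = gm_extent_1d(profile)
```

The half-voxel-scale discrepancy is of the same order as the sub-millimetre GM changes expected in early Alzheimer’s disease, which is why the subvoxel initialization matters clinically.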

In the remainder of this paper, we first describe the method. We then validate the accuracy of our approach on synthetic data, and its reproducibility on real MR data. In the final section, we apply our cortical thickness estimation approach to a subset of the ADNI database, including 43 healthy elderly individuals or normal controls (NC), 53 patients with mild cognitive impairment (MCI) and 22 patients with Alzheimer’s disease (AD). We compared our method against the Eulerian approach of Yezzi and Prince (2003). The ability of our method to reach higher statistical power when comparing two groups was demonstrated, with good computational efficiency (30 min on average on a standard PC).


Methods

The proposed method consists of several stages, as depicted in Fig. 1: firstly, 3D T1-weighted MR images are classified into GM, WM and CSF in their original space using a priori probability maps registered with an affine followed by a non-rigid registration (Section 2.1). Secondly, the fractional content of GM for the voxels along tissue interfaces is computed by modelling mixtures of tissues and performing a maximum a posteriori classification (Section 2.2), which results in a GM partial volume map.

Experiments and results

This section describes the experiments performed to evaluate the proposed method. Our approach was to validate each step separately on both phantoms and real data, then test the reproducibility of the overall technique and, finally, show the results of a study on clinical data. We also compared the performance of our method with the Eulerian implementation proposed by Yezzi and Prince (2003), which ignores the PV effect. All the algorithms were implemented in C++ using the open-source ITK libraries.

Discussion and conclusion

We have described a novel voxel-based method for accurate and reproducible cortical thickness estimation, which uses partial volume classification to achieve subvoxel accuracy. The main contribution of our method is the preservation of the efficiency of the Eulerian approach while improving the accuracy through a better initialization. Unlike other approaches, all the calculations are performed on the discrete grid. The method is fully automatic and simple, using a ray casting technique in the

Acknowledgements

Data collection and sharing for this project was funded by the Alzheimer’s Disease Neuroimaging Initiative (ADNI; Principal Investigator: Michael Weiner; NIH Grant U01 AG024904). ADNI is funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering (NIBIB), and through generous contributions from the following: Pfizer Inc., Wyeth Research, Bristol-Myers Squibb, Eli Lilly and Company, GlaxoSmithKline, Merck & Co. Inc., AstraZeneca AB, Novartis

References (34)

  • Besag, J., 1986. On the statistical analysis of dirty pictures. Journal of the Royal Statistical Society.
  • Collins, D., et al., 1998. Design and construction of a realistic digital brain phantom. IEEE Transactions on Medical Imaging.
  • Diep, T.-M., Bourgeat, P., Ourselin, S., 2007. Efficient use of cerebral cortical thickness to correct brain MR...
  • Faul, F., et al., 2007. G*Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods.
  • Fischl, B., et al., 2000. Measuring the thickness of the human cerebral cortex from magnetic resonance images. Proceedings of the National Academy of Sciences of the United States of America.
  • Haidar, H., et al., 2006. Measurement of cortical thickness in 3D brain MRI data: validation of the Laplacian method. Journal of Neuroimaging.
  • Hutton, C., De Vita, E., Turner, R., 2002. Sulcal segmentation for cortical thickness measurements. In: Medical Image...

1 Data used in the preparation of this article were obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database (www.loni.ucla.edu/ADNI). As such, the investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in the analysis or writing of this report. A complete listing of ADNI investigators is available at www.loni.ucla.edu/ADNI/Collaboration/ADNI_Citation.shtml.
