Characteristics of Time-Varying Models
PET model or implementation | Description/innovation | Assumptions | Limitations | Application |
---|---|---|---|---|
Extensions of 2-tissue-compartment neurotransmitter model (16,17) | First generation: shift in perspective; explicit aim of detecting dopamine fluctuations | Transient dopamine elevation modeled as square wave | Is impractical because of computationally demanding nonlinear estimation of many parameters | |
LSRRM (21) | First generation: extended version of MRTM, a linearization of the simplified reference tissue model; accommodates non–steady-state conditions; includes time-variant efflux, presumed to be due to increased competition by neurotransmitter with tracer | Temporal pattern of dopamine release fixed in shape: pure exponentials | Because h(t) is fixed in shape, cannot test hypotheses about the shape of the dopamine function | (25–33) |
ntPET (37) | First generation: description of competition between neurotransmitter and labeled tracer; set of 3 explicit mass balance equations coupled by bimolecular binding term made up of product of instantaneous concentration of available receptor sites and free competitor | Use of reference tissue | Is not linearizable because of nonlinear binding term; has 12 parameters, some of which are not identifiable; yields multiple dopamine curves that fit a given dataset equally well | (38,41,42) |
Nonparametric ntPET (39) | Second generation: nonparametric singular value decomposition of PET time–activity curves; data-based method does not assume shape of dopamine curve | | Requires training set | (40,43) |
lp-ntPET (42) | Second generation: linearized version of ntPET model; same operational equation as Equation 1 (LSRRM) except that h(t) is allowed to vary in shape; multiple choices of h(t) are represented as basis functions (the operational equation and a fitting sketch follow the table) | Library of basis functions created with discretized parameters that span realistic range | May overfit noise; model selection criteria may be overly conservative | (42,47,48,50,52) |
Denoising as preprocessing (55) | Third generation: controls false-positive rate (FPR)—feed-forward neural network trained to denoise PET time–activity curves by predicting the noiseless time–activity curve | | Requires training set | |
Corrected model selection (54) | Third generation: controls FPR—adaptive model comparison metrics that control FPR regardless of number of basis functions used | | Requires simulation of null data for every application | (33,56,72) |
Direct reconstruction (57) | Third generation: controls FPR—noise distribution is known (Poisson) in the sinogram domain, which reduces the FPR | Assumes same kinetic model at all locations | | |
Machine learning (58) | Third generation: controls FPR—preselects voxels most likely to contain activation | | Requires training set | |
Monte Carlo modeling/F-statistic correction (59) | Third generation: improves sensitivity—corrects F distribution for errors introduced by partial volume | | Needs to simulate null and activated data for every application | |
Personalized neural nets (60) | Third generation: improves sensitivity—differentiates noisy time–activity curves with and without the effect of dopamine release; outperformed the F test in identifying real activations | | Needs to simulate null and activated data for every application | |
Residual space analysis (61) | Third generation: improves sensitivity—converts time–activity curves into residual curves by defining a canonic baseline curve (no activation effect) and subtracting it from each voxel time–activity curve | | Requires sufficient nonactivated voxels to serve as baseline | |
b-ntPET (64) | Beyond third generation: Bayesian method, uses Markov chain Monte Carlo sampling; produces posterior distribution of model parameters | Validity of prior distributions | Requires analytic expression of likelihood function; convergence is slow; not easily extended to voxel level | (64) |
PET-ABC (65,66) | Beyond third generation: simplifies Bayesian computation; is extensible to voxel level; produces probability-of-activation maps for individuals (a minimal ABC sketch follows the table) | Validity of prior distributions | Generates approximate posterior distribution | (56,66,67) |
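As a sketch of the shared machinery referenced above (notation here is standard but assumed, not quoted from Equation 1), LSRRM (21) and lp-ntPET (42) fit a reference-region operational equation of the form

$$C_T(t) = R_1\,C_R(t) + k_2\int_0^t C_R(u)\,du - k_{2a}\int_0^t C_T(u)\,du - \gamma\int_0^t C_T(u)\,h(u)\,du,$$

where $C_T$ and $C_R$ are the target and reference time–activity curves and $\gamma\,h(t)$ is the time-varying efflux attributed to released neurotransmitter. LSRRM fixes $h(t)$ as a decaying exponential beginning at task onset; lp-ntPET instead selects $h(t)$ from a library of basis functions, commonly gamma-variate curves such as

$$h_i(t) = \left(\frac{t - t_{D,i}}{t_{P,i} - t_{D,i}}\right)^{\alpha_i} \exp\!\left[\alpha_i\!\left(1 - \frac{t - t_{D,i}}{t_{P,i} - t_{D,i}}\right)\right] u(t - t_{D,i}),$$

with take-off time $t_D$, peak time $t_P$, sharpness $\alpha$, and unit step $u(\cdot)$; discretizing $(t_D, t_P, \alpha)$ over a realistic range produces the basis-function library noted in the table.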
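To make the basis-function fit concrete, the following is a minimal Python sketch under the operational form above. The function names (cumtrapz, gamma_variate_basis, fit_basis_library), the trapezoidal integration, and the simple weighting scheme are illustrative assumptions, not the published lp-ntPET implementation.

```python
import numpy as np

def cumtrapz(y, t):
    """Cumulative trapezoidal integral of y over t (same length as t, starts at 0)."""
    out = np.zeros_like(y, dtype=float)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))
    return out

def gamma_variate_basis(t, t_d, t_p, alpha):
    """One candidate response curve h(t): zero before take-off time t_d,
    peaking (value 1) at t_p, with sharpness alpha."""
    h = np.zeros_like(t, dtype=float)
    rising = t > t_d
    x = (t[rising] - t_d) / (t_p - t_d)
    h[rising] = x**alpha * np.exp(alpha * (1.0 - x))
    return h

def fit_basis_library(t, c_t, c_r, basis_library, weights=None):
    """Weighted linear least-squares fit of the operational equation for every
    candidate h(t); returns the best activation fit, the null (no-activation)
    fit, and an F statistic comparing the two nested models."""
    w = np.ones_like(t) if weights is None else weights
    sw = np.sqrt(w)

    def wls(X, y):
        beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
        return beta, np.sum(w * (y - X @ beta) ** 2)

    base_cols = np.column_stack([c_r, cumtrapz(c_r, t), -cumtrapz(c_t, t)])
    beta0, rss0 = wls(base_cols, c_t)                  # null model: R1, k2, k2a only

    best = None
    for h in basis_library:
        X = np.column_stack([base_cols, -cumtrapz(c_t * h, t)])
        beta, rss = wls(X, c_t)
        if best is None or rss < best[1]:
            best = (beta, rss, h)

    # F statistic comparing full (activation) vs. null model
    n, p0, p1 = len(t), base_cols.shape[1], base_cols.shape[1] + 1
    f_stat = ((rss0 - best[1]) / (p1 - p0)) / (best[1] / (n - p1))
    return best, (beta0, rss0), f_stat
```

A library would typically be built by looping gamma_variate_basis over a grid of (t_d, t_p, alpha) values, matching the Assumptions entry for lp-ntPET; because the best h(t) is chosen by searching the whole library, the nominal F threshold no longer behaves as advertised, which is the model-selection problem the third-generation entries in the table are designed to correct.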
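PET-ABC replaces the analytic likelihood that b-ntPET needs with simulation. The sketch below is a generic rejection-ABC loop, not the authors' PET-ABC code; prior_sampler and simulate_tac are assumed, user-supplied callables, and the "activation" flag inside each parameter draw is a hypothetical convention for marking draws that include a release term.

```python
import numpy as np

def rejection_abc(observed_tac, simulate_tac, prior_sampler,
                  n_draws=20000, accept_frac=0.01, rng=None):
    """Generic rejection ABC: draw parameters from the prior, simulate a
    time-activity curve for each draw, and keep the draws whose simulation
    lies closest to the observed curve. The kept draws approximate the
    posterior without requiring an analytic likelihood."""
    rng = np.random.default_rng() if rng is None else rng
    draws, dists = [], []
    for _ in range(n_draws):
        theta = prior_sampler(rng)                 # dict of kinetic parameters
        sim = simulate_tac(theta, rng)             # noisy model-based TAC
        dists.append(np.sqrt(np.mean((sim - observed_tac) ** 2)))
        draws.append(theta)
    cutoff = np.quantile(dists, accept_frac)       # keep the closest fraction
    accepted = [th for th, d in zip(draws, dists) if d <= cutoff]
    # Probability of activation = fraction of accepted draws whose prior
    # draw included a nonzero neurotransmitter-release component.
    p_activation = float(np.mean([th["activation"] for th in accepted]))
    return accepted, p_activation
```

Running this voxel by voxel and mapping p_activation gives the kind of probability-of-activation map attributed to PET-ABC in the table; prior_sampler encodes the "validity of prior distributions" assumption, and accept_frac trades fidelity of the approximate posterior against computation.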