TABLE 2. Characteristics of Time-Varying Models

| PET model or implementation | Description/innovation | Assumptions | Limitations | Applications |
| --- | --- | --- | --- | --- |
| Extensions of 2-tissue-compartment neurotransmitter model (16,17) | First generation: shift in perspective; explicit aim of detecting dopamine fluctuations | Transient dopamine elevation modeled as square wave | Impractical: computationally demanding nonlinear estimation of many parameters | |
| LSRRM (21) | First generation: extended version of MRTM, a linearization of the simplified reference tissue model; accommodates non–steady-state conditions; includes time-variant efflux, presumed due to increased competition between neurotransmitter and tracer | Temporal pattern of dopamine release fixed in shape: pure exponentials | Fixing the dopamine function to a preset shape h(t) precludes testing hypotheses about the shape of the dopamine response | (2533) |
| ntPET (37) | First generation: explicit description of competition between neurotransmitter and labeled tracer; set of 3 mass-balance equations coupled by a bimolecular binding term, the product of the instantaneous concentrations of available receptor sites and free competitor | Use of reference tissue | Not linearizable because of nonlinear binding term; 12 parameters, some not identifiable; yields multiple dopamine curves that fit a given dataset equally well | (38,41,42) |
| Nonparametric ntPET (39) | Second generation: nonparametric singular-value decomposition of PET time–activity curves; data-based method does not assume shape of dopamine curve | | Requires training set | (40,43) |
| lp-ntPET (42) | Second generation: linearized version of ntPET; same operational equation as Equation 1 (LSRRM) except that h(t) is allowed to vary in shape; multiple choices of h(t) represented as basis functions | Library of basis functions created with discretized parameters spanning a realistic range | May overfit noise; model selection criteria may be overly conservative | (42,47,48,50,52) |
| Denoising as preprocessing (55) | Third generation: controls false-positive rate (FPR): feed-forward neural network trained to denoise PET time–activity curves by predicting the noiseless curve | | Requires training set | |
| Corrected model selection (54) | Third generation: controls FPR: adaptive model-comparison metrics that control FPR regardless of number of basis functions used | | Requires simulation of null data for every application | (33,56,72) |
| Direct reconstruction (57) | Third generation: controls FPR: noise is well characterized (Poisson) in the sinogram domain, reducing FPR | Same kinetic model at all locations | | |
| Machine learning (58) | Third generation: controls FPR: preselects voxels most likely to contain activation | | Requires training set | |
| Monte Carlo modeling/F-statistic correction (59) | Third generation: improves sensitivity: corrects F distribution for errors introduced by partial volume | | Requires simulation of null and activated data for every application | |
| Personalized neural nets (60) | Third generation: improves sensitivity: differentiates noisy time–activity curves with and without effect of dopamine release; outperformed F test in identifying real activations | | Requires simulation of null and activated data for every application | |
| Residual-space analysis (61) | Third generation: improves sensitivity: converts time–activity curves into residual curves by defining a canonic baseline curve (no activation effect) and subtracting it from each voxel time–activity curve | | Requires sufficient nonactivated voxels to serve as baseline | |
| b-ntPET (64) | Beyond third generation: Bayesian method using Markov chain Monte Carlo sampling; produces posterior distribution of model parameters | Validity of prior distributions | Requires analytic expression of likelihood function; slow convergence; not easily extended to voxel level | (64) |
| PET-ABC (65,66) | Beyond third generation: simplifies Bayesian computation; extensible to voxel level; produces probability-of-activation maps for individuals | Validity of prior distributions | Generates approximate posterior distribution | (56,66,67) |
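To illustrate the bimolecular coupling described in the ntPET row, the sketch below forward-simulates a small set of mass balances in which tracer binding is scaled by instantaneous receptor availability. This is a toy Euler integration with invented rate constants, not the published 12-parameter model; all parameter names and values are illustrative.

```python
import numpy as np

def simulate_ntpet(t, Cp, DA, K1=0.1, k2=0.05, kon=0.01, koff=0.05,
                   Bmax=10.0, kon_da=0.005, koff_da=0.05):
    """Toy coupled mass balances: free tracer F, bound tracer B, and
    receptor-bound dopamine B_da. The bimolecular term kon*F*(Bmax-B-B_da)
    couples tracer binding to instantaneous receptor availability.
    All rate constants are illustrative, not fitted values."""
    F = B = B_da = 0.0
    total = []
    for i in range(len(t)):
        dt = t[i] - t[i - 1] if i else 0.0
        avail = Bmax - B - B_da          # available receptor sites
        dF = K1 * Cp[i] - k2 * F - kon * F * avail + koff * B
        dB = kon * F * avail - koff * B
        dBda = kon_da * DA[i] * avail - koff_da * B_da
        F += dF * dt
        B += dB * dt
        B_da += dBda * dt
        total.append(F + B)               # measurable tracer signal
    return np.array(total)
```

Running the same plasma input with and without a mid-scan dopamine pulse shows the qualitative effect the model targets: transient dopamine occupancy depresses the tracer time–activity curve relative to baseline.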
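The lp-ntPET row describes a basis-function approach: a library of activation shapes h(t) is built over discretized parameters, and for each candidate shape the operational equation reduces to linear least squares. The following is a minimal sketch of that idea (not the published implementation); the gamma-variate parameterization and all numerical choices here are illustrative.

```python
import numpy as np

def gamma_variate(t, t_d, t_p, alpha):
    """Unit-peak activation shape: zero before onset t_d, peaking at t_p."""
    h = np.zeros_like(t)
    m = t > t_d
    tau = (t[m] - t_d) / (t_p - t_d)
    h[m] = tau**alpha * np.exp(alpha * (1.0 - tau))
    return h

def cumint(y, t):
    """Cumulative trapezoidal integral of y(t)."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))
    return out

def fit_lp_ntpet(t, C_T, C_R, basis_params):
    """For each basis shape, solve the linearized operational equation by
    least squares; keep the basis with the lowest residual sum of squares."""
    best = None
    for (t_d, t_p, alpha) in basis_params:
        B = gamma_variate(t, t_d, t_p, alpha)
        X = np.column_stack([C_R, cumint(C_R, t),
                             cumint(C_T, t), cumint(B * C_T, t)])
        theta, *_ = np.linalg.lstsq(X, C_T, rcond=None)
        rss = float(np.sum((C_T - X @ theta) ** 2))
        if best is None or rss < best[0]:
            best = (rss, theta, (t_d, t_p, alpha))
    return best
```

The exhaustive search over the discretized library is what makes the "possibly overfits noise" limitation concrete: with enough basis shapes, some shape will fit part of the noise, which is why the later generations focus on controlling the FPR of this selection step.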
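The PET-ABC row rests on approximate Bayesian computation, which sidesteps the analytic likelihood that b-ntPET requires: parameters are drawn from the prior, data are simulated, and draws whose simulation lies close to the observation are kept as approximate posterior samples. A toy rejection-sampling sketch with a stand-in one-parameter forward model (not the actual PET kinetic model; all values are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 60.0, 30)

def simulate(gamma):
    # Stand-in forward model: a gamma-variate-like response
    # scaled by a single release-magnitude parameter.
    return gamma * t * np.exp(-t / 10.0)

# Synthetic "observed" curve: true gamma = 2.0 plus measurement noise.
observed = simulate(2.0) + rng.normal(0.0, 0.05, t.size)

# ABC rejection: sample from the prior, keep draws whose simulated
# curve is within tolerance eps of the data (no likelihood needed).
prior_draws = rng.uniform(0.0, 5.0, 20000)
eps = 0.1
def dist(g):
    return np.sqrt(np.mean((simulate(g) - observed) ** 2))
accepted = np.array([g for g in prior_draws if dist(g) < eps])
posterior_mean = accepted.mean()
```

The accepted draws form the approximate posterior noted in the table's limitations column: the tolerance eps trades computational cost against how closely the accepted set approximates the true posterior.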