Journal of Nuclear Medicine
Research Article | Clinical Investigations

Comparison of Fully Automated Computer Analysis and Visual Scoring for Detection of Coronary Artery Disease from Myocardial Perfusion SPECT in a Large Population

Reza Arsanjani, Yuan Xu, Sean W. Hayes, Mathews Fish, Mark Lemley, James Gerlach, Sharmila Dorbala, Daniel S. Berman, Guido Germano and Piotr Slomka
Journal of Nuclear Medicine February 2013, 54 (2) 221-228; DOI: https://doi.org/10.2967/jnumed.112.108969
Author affiliations: Reza Arsanjani,1 Yuan Xu,1 Sean W. Hayes,1 Mathews Fish,2 Mark Lemley Jr.,2 James Gerlach,1 Sharmila Dorbala,3 Daniel S. Berman,1,4 Guido Germano,1,4 and Piotr Slomka1,4

1 Departments of Imaging and Medicine and Cedars-Sinai Heart Institute, Cedars-Sinai Medical Center, Los Angeles, California
2 Oregon Heart and Vascular Institute, Sacred Heart Medical Center, Springfield, Oregon
3 Division of Cardiology, Noninvasive Cardiovascular Imaging Section, Department of Radiology, Brigham and Women’s Hospital, Boston, Massachusetts
4 David Geffen School of Medicine, University of California Los Angeles, Los Angeles, California

Abstract

We compared the performance of fully automated quantification of attenuation-corrected (AC) and noncorrected (NC) myocardial perfusion SPECT (MPS) with the corresponding performance of experienced readers for detection of coronary artery disease (CAD). Methods: Rest–stress 99mTc-sestamibi MPS studies (n = 995; 650 consecutive cases with coronary angiography and 345 with likelihood of CAD < 5%) were obtained by MPS with AC. The total perfusion deficit (TPD) for AC and NC data was compared with the visual summed stress and rest scores of 2 experienced readers. Visual reads were performed in 4 consecutive steps with the following information progressively revealed: NC data, AC + NC data, computer results, and all clinical information. Results: The diagnostic accuracy of TPD for detection of CAD was similar to both readers (NC: 82% vs. 84%; AC: 86% vs. 85%–87%; P = not significant) with the exception of the second reader when clinical information was used (89%, P < 0.05). The receiver-operating-characteristic area under the curve (ROC AUC) for TPD was significantly better than visual reads for NC (0.91 vs. 0.87 and 0.89, P < 0.01) and AC (0.92 vs. 0.90, P < 0.01), and it was comparable to visual reads incorporating all clinical information. The per-vessel accuracy of TPD was superior to one reader for NC (81% vs. 77%, P < 0.05) and AC (83% vs. 78%, P < 0.05) and equivalent to the second reader (NC, 79%; and AC, 81%). The per-vessel ROC AUC for NC (0.83) and AC (0.84) for TPD was better than that for the first reader (0.78–0.80, P < 0.01) and comparable to that of the second reader (0.82–0.84, P = not significant) for all steps. Conclusion: For detection of ≥70% stenoses based on angiographic criteria, a fully automated computer analysis of NC and AC MPS data is equivalent for per-patient and can be superior for per-vessel analysis, when compared with expert analysis.

  • automated quantification
  • coronary artery disease
  • myocardial perfusion SPECT
  • total perfusion deficit

Myocardial perfusion SPECT (MPS) is the most commonly used noninvasive stress imaging modality for the diagnosis of coronary artery disease (CAD) (1). Prior studies have demonstrated that quantitative analysis can supplement visual analysis (2–4) and is more reproducible (5,6). However, despite these advantages, it is currently recommended that quantitative analysis be used only as an adjunct to visual analysis (7), because the software cannot explicitly differentiate between perfusion defect and artifact (7). Furthermore, multiple prior studies have shown that attenuation-corrected (AC) MPS, assessed either visually or by software analysis, yields improved diagnostic accuracy compared with noncorrected (NC) MPS (8–10). However, the differences between expert visual and automated analysis of NC and AC data have not been comprehensively evaluated. The aim of this study was to compare the performance of a fully automated analysis of combined NC and AC MPS (10) for detection of obstructive CAD based on angiographic criteria with the visual scoring of experienced readers using NC images, AC images, results of computer analysis, and clinical information in a progressive and stepwise fashion.

MATERIALS AND METHODS

Patient Population

The subjects who were referred to the Nuclear Medicine Department of Sacred Heart Medical Center from March 1, 2003, to December 31, 2006, for rest and stress electrocardiography-gated NC and AC MPS were consecutively selected (10). All patients with a prior history of CAD or significant valve disease were excluded. MPS and coronary angiography had to be performed within 60 days without a significant intervening event. The low-likelihood studies were obtained from patients who performed an adequate treadmill stress test, did not have correlating coronary angiography available, but had less than a 5% likelihood of CAD using the Diamond and Forrester criteria based on age, sex, symptoms, and electrocardiography response to adequate treadmill stress testing (11). On the basis of these selection criteria, we identified 650 patients with correlative angiography as described above and 345 patients with a low likelihood of CAD. The clinical characteristics of the 2 groups are listed in Table 1. The study protocol was approved by the Institutional Review Board.

TABLE 1. Baseline Characteristics of Patients

Image Acquisition and Reconstruction Protocols

Standard 99mTc-sestamibi rest–stress protocols were used as previously described with treadmill testing or adenosine infusion with low-level exercise (12). Vertex dual-detector scintillation cameras with low-energy high-resolution collimators and Vantage Pro AC hardware and software (Philips) were used to acquire MPS.

Tomographic reconstruction was performed by AutoSPECT (13) and Vantage Pro programs (Philips). Emission images were automatically corrected for nonuniformity, radioactive decay, center of rotation, and motion during acquisition. Filtered backprojection and Butterworth filters were applied to obtain the NC MPS with an order of 10 and cutoff of 0.50 for rest MPS and an order of 5 and cutoff of 0.66 for stress MPS. The attenuation maps and the emission data were used to reconstruct the AC images with a maximum-likelihood expectation maximization algorithm incorporating scatter correction and depth-dependent resolution compensation.
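The Butterworth filter applied during filtered backprojection is a standard low-pass filter whose behavior is fully determined by the order and cutoff quoted above. A minimal sketch of its magnitude response (frequency units and function name are illustrative, not taken from the reconstruction software):

```python
import math

def butterworth(f, cutoff, order):
    """Magnitude of a Butterworth low-pass filter at spatial frequency f.

    f and cutoff are in the same units (e.g., fractions of the Nyquist
    frequency). Higher order gives a sharper roll-off at the cutoff.
    """
    return 1.0 / math.sqrt(1.0 + (f / cutoff) ** (2 * order))

# Parameter pairs from the text: rest MPS uses order 10, cutoff 0.50;
# stress MPS uses order 5, cutoff 0.66.
print(round(butterworth(0.50, 0.50, 10), 3))  # 0.707: response at cutoff
print(round(butterworth(0.66, 0.66, 5), 3))   # 0.707: response at cutoff
```

By construction, the response falls to 1/sqrt(2) at the cutoff frequency regardless of order; the order controls only how quickly the filter suppresses frequencies beyond it.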

Contour Adjustment and Quality Control

Automatically derived left-ventricular (LV) contours were visually verified by an experienced technician from Cedars-Sinai Medical Center, and if necessary the valve plane and LV mask were manually adjusted. Additionally, automated software was run without any user intervention (fully unsupervised) and with adjustment performed by a second technologist from Sacred Heart Medical Center with no significant experience with the software, who was provided with simple training instructions. Both technologists were aided by an automated method for quality control of LV MPS contours (14). This method derives 2 parameters: the shape flag to detect the mask-failure cases, and the valve plane flag to detect the valve plane over- or undershooting. These quality control flags were used as a guide by the technologists (more careful assessment of contours with shape-quality control flag > 3 and valve plane flag > 0.37 or < 0.28) (14); however, the contours were adjusted using visual judgment when deemed appropriate.
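The quality-control thresholds described above amount to a simple screening rule for flagging contours that merit closer inspection. A sketch under the stated thresholds (the function name and interface are assumptions, not the actual software API):

```python
def needs_review(shape_flag, valve_flag,
                 shape_max=3, valve_low=0.28, valve_high=0.37):
    """Return True if LV contours warrant more careful assessment.

    Thresholds follow the text: a shape flag > 3 suggests a mask
    failure, and a valve plane flag > 0.37 or < 0.28 suggests valve
    plane over- or undershooting.
    """
    return (shape_flag > shape_max
            or valve_flag > valve_high
            or valve_flag < valve_low)

print(needs_review(2, 0.30))  # False: both flags within limits
print(needs_review(4, 0.30))  # True: shape flag indicates possible mask failure
print(needs_review(1, 0.40))  # True: valve plane flag indicates overshoot
```

As the text notes, the flags served only as a guide; the technologists still adjusted contours by visual judgment.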

Automated Analysis

TPD for NC images was computed with a previously developed simplified approach (15). For the AC results, we derived combined NC and AC severity, which integrates NC and AC data for improved accuracy (10), similar to the visual AC analysis for which readers combine NC and AC data. In short, hypoperfusion severities for NC and AC data were derived, and the combined NC and AC severity was determined at each polar map location by averaging NC and AC severities computed separately from NC and AC normal limits. Subsequently, combined NC + AC TPD was computed by integrating the NC + AC severities below polar map normal limits. A threshold of ≥3% on a per-patient basis for TPD was considered abnormal for both NC and combined NC + AC analysis. The term AC-TPD analysis in the text and figures refers to this combined NC + AC analysis which was previously shown superior to standalone TPD analysis of AC studies (10). The ischemic TPD (ITPD) measurements were calculated as stress TPD minus rest TPD (NC-ITPD and AC-ITPD), and an ITPD value of ≥2% on a per-patient basis was considered abnormal (10). Partial NC-TPD and AC-TPD scores for each vascular territory were also obtained, with ≥2% being considered abnormal (10).
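As a rough sketch of the combined NC + AC computation described above: severities are averaged per polar-map location and then integrated over the map. The data layout, normalization, and names here are assumptions for illustration; the actual TPD algorithm (refs. 10 and 15) is more involved.

```python
def combined_tpd(nc_severity, ac_severity):
    """Combined NC + AC total perfusion deficit (% of polar map).

    nc_severity / ac_severity: per-location hypoperfusion severities in
    [0, 1], already expressed relative to the NC and AC normal limits
    (0 = at or above normal limits). Illustrative layout only.
    """
    n = len(nc_severity)
    # Average NC and AC severities at each polar map location ...
    combined = [(a + b) / 2.0 for a, b in zip(nc_severity, ac_severity)]
    # ... then integrate the deficit over the whole map.
    return 100.0 * sum(combined) / n

severity_nc = [0.0] * 90 + [0.4] * 10  # 10% of locations hypoperfused on NC
severity_ac = [0.0] * 90 + [0.2] * 10  # milder deficit after correction
tpd = combined_tpd(severity_nc, severity_ac)
print(round(tpd, 1))  # 3.0
print(tpd >= 3.0)     # True: abnormal by the per-patient threshold
```

The same thresholds from the text apply downstream: ≥3% per patient, ≥2% for ITPD (stress TPD minus rest TPD) and per-territory scores.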

Visual Analysis

The visual interpretation of MPS images was based on a 17-segment model (16). MPS images were scored independently by 2 experienced and board-certified cardiologists (reader 1 with 30 y and reader 2 with 10 y of clinical experience in nuclear cardiology), using a 5-point scoring system (0, normal; 1, mildly decreased; 2, moderately decreased; 3, severely decreased; and 4, absence of segmental uptake). Visual reading was performed in 4 consecutive steps, with the following information progressively revealed to the readers. During the first step (V1), the readers scored both the NC stress and rest images using perfusion data, raw projection data, and gated function data (17). During the second step (V2), the readers could rescore the stress and rest studies using additional AC data. During the third step (V3), the readers also had access to the perfusion quantification results obtained by the software. Finally, during the fourth step (V4), the readers were additionally provided clinical information, including age, cardiac risk factors, type of stress, and clinical and electrocardiography responses to stress. Throughout each step, observers could also modify the default assignment of segments to the specific vascular territory, as is possible in QPS/QGS software. Subsequently, summed stress scores (SSS), summed rest scores, and summed difference scores (SDS) were calculated from the 17-segment scores. All scores were recorded automatically in batch files, eliminating manual transfer. An SSS of ≥4 and an SDS of ≥2 were considered abnormal (10). Partial summed scores for each vascular territory were also obtained, with an SSS of ≥2 being considered abnormal for any territory (10).
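The summed scores above can be sketched as follows. The segment values are a toy example, and SDS is computed here per segment with reversibility clamped at zero, which is one common convention; the paper does not spell out its exact formula.

```python
def summed_scores(stress, rest):
    """SSS, SRS, SDS from 17-segment visual scores.

    Each segment is scored 0-4 (0 = normal ... 4 = absent uptake),
    per the 5-point scale in the text.
    """
    assert len(stress) == len(rest) == 17
    sss = sum(stress)                                        # summed stress score
    srs = sum(rest)                                          # summed rest score
    sds = sum(max(s - r, 0) for s, r in zip(stress, rest))   # reversible deficit
    return sss, srs, sds

stress = [0] * 14 + [2, 2, 1]  # mild-to-moderate stress defects in 3 segments
rest = [0] * 17                # normal rest study
sss, srs, sds = summed_scores(stress, rest)
print(sss, srs, sds)           # 5 0 5
print(sss >= 4 and sds >= 2)   # True: abnormal by both study criteria
```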

Conventional Coronary Angiography

Conventional coronary angiography was performed according to standard clinical protocols within 60 days of the myocardial perfusion examination. All coronary angiograms were visually interpreted by an experienced cardiologist. A stenosis of ≥50% for the left main or ≥70% for other coronary arteries was considered the gold standard for detection of CAD. A secondary analysis was performed for which any coronary artery stenosis of ≥50% was considered significant.

Statistical Analysis

Continuous variables were expressed as the mean ± SD, and categoric variables were expressed as percentages. Interobserver agreement between the 2 readers was compared using a Bland–Altman test and κ-test. The total estimate of agreement, defined as total cases in which the tests agree, was compared between automated and visual reads and between the 2 visual readers. We also compared the positive percentage agreement, defined as total positive cases in which the tests agree, and negative percentage agreement, defined as total negative cases in which the tests agree (18). The overall automated and visual total agreements, positive percentage agreements, negative percentage agreements, sensitivity, specificity, and accuracy were compared using a z test. Automated NC data were compared with visual NC data only (V1), whereas automated AC results were compared with visual AC data (V2–V4).
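The agreement measures defined above can be sketched as follows. Computing positive and negative percent agreement relative to one read as the comparator is one common convention; the paper's exact formulas follow ref. 18, so the function below is illustrative only.

```python
def percent_agreement(test, comparator):
    """Total, positive, and negative percent agreement of two binary reads.

    test, comparator: 0/1 calls per case (e.g., TPD vs. a visual reader).
    PPA/NPA are computed among cases the comparator calls positive or
    negative, respectively; this convention is an assumption.
    """
    n = len(test)
    total = sum(t == c for t, c in zip(test, comparator)) / n
    pos = [t for t, c in zip(test, comparator) if c == 1]
    neg = [t for t, c in zip(test, comparator) if c == 0]
    ppa = sum(pos) / len(pos) if pos else float("nan")
    npa = (len(neg) - sum(neg)) / len(neg) if neg else float("nan")
    return 100 * total, 100 * ppa, 100 * npa

reader = [1, 1, 1, 0, 0, 0, 0, 1]  # toy visual calls for 8 cases
tpd =    [1, 1, 0, 0, 0, 0, 1, 1]  # toy automated calls
total, ppa, npa = percent_agreement(tpd, reader)
print(round(total), round(ppa), round(npa))  # 75 75 75
```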

For all analyses, P values of less than 0.05 were considered statistically significant. Receiver-operating-characteristic (ROC) curves were analyzed to evaluate the ability of TPD versus visual scoring for forecasting ≥70% and ≥50% stenoses of the coronary arteries. The differences between the ROC AUC were compared by the Delong method (19).
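The ROC AUC itself can be computed as a Mann-Whitney statistic: the probability that a randomly chosen diseased case scores higher than a randomly chosen non-diseased one. This sketch omits the DeLong covariance estimate the paper uses to compare correlated AUCs.

```python
def roc_auc(scores, labels):
    """ROC AUC via the Mann-Whitney statistic (ties count 0.5).

    scores: continuous test values (e.g., TPD); labels: 1 = disease
    present by the angiographic gold standard, 0 = absent.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: TPD values for 4 patients with CAD and 4 without.
tpd = [9.0, 5.5, 3.2, 1.0, 4.0, 1.5, 0.8, 0.0]
cad = [1,   1,   1,   1,   0,   0,   0,   0]
print(roc_auc(tpd, cad))  # 0.8125
```

An AUC of 0.5 corresponds to chance-level discrimination and 1.0 to perfect separation, which is why the study's 0.90-0.92 values indicate strong diagnostic performance.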

RESULTS

Agreement Between Automated and Visual Reads

Table 2 compares the diagnostic agreement (total positive and negative percentage agreement) between the 2 readers and between each reader and automated quantification. Overall, there was high agreement between the 2 readers (87%–91%) and between each reader and the automated results (84%–89%). The total agreement significantly improved (by at least 3% for both readers and the software) with the addition of AC data in comparison to NC data. Figure 1 demonstrates the number of cases for which the diagnosis was changed during each of the steps. The addition of AC data changed the diagnosis in more than 8% of cases for both automated and visual reads. The interobserver correlations and κ-agreements are shown in Table 3. Interobserver κ-agreement improved from 0.77 to 0.82 (P = 0.006) with the addition of AC images.

TABLE 2. Diagnostic Agreement Between Automated Analysis and Each Individual Reader and Interobserver Agreement

TABLE 3. Interobserver Agreement Comparison Between 2 Readers at Each Visual Step (V1–V4)

FIGURE 1. Number of cases with changed diagnosis in each subsequent step for both automated and visual analysis. *Significant difference compared with prior step (P < 0.05). Clin. = clinical; comp = computer.

Software Versus Reader: Per-Patient Diagnostic Performance

Figure 2 compares diagnostic performance for stress NC-TPD, stress AC-TPD, and the 2 visual readers for detection of ≥70% stenosis on a per-patient basis. In comparison with the automated analysis, for NC data the specificities were higher for both readers, the sensitivity was lower for 1 reader, and overall accuracy was similar for both readers. The accuracy and specificity for all the steps with AC data (V2–V4) were similar to the AC-TPD analysis, with the exception of the higher accuracy of reader 2 at V4 incorporating AC, computer analysis, and clinical analysis (89% vs. 86%, P < 0.05). For reader 1, the V3 step incorporating AC and computer analysis increased sensitivity (84% vs. 89%, P < 0.05). Similar results were noted when comparing NC-TPD, AC-TPD, and visual reads from both readers for detection of ≥50% CAD on a per-patient basis. The specificity and accuracy of the automated analysis significantly improved (≥4.0%) for detection of ≥70% stenosis with the addition of AC data on a per-patient basis. The accuracy for reader 1 did not improve at step V4; however, the sensitivity and accuracy for reader 2 improved significantly, by 5.4% and 2.5%, respectively, when the clinical information (V4) was incorporated. There were 25 cases of ≥70% stenosis for which both expert readers agreed and were correct whereas the automated analyses were incorrect. On the other hand, there were 8 cases for which the automated analysis was correct, whereas both experts were incorrect.

FIGURE 2. Diagnostic performance of automatic vs. visual analysis for detection of ≥70% coronary artery lesions on a per-patient basis (number of patients with a ≥70% stenotic lesion on cardiac catheterization = 463). Automated analysis was also compared with visual analysis (NC vs. V1 and AC vs. V2–V4).

The ROC curves comparing NC-TPD, AC-TPD, and visual reads are shown in Figure 3. The ROC-AUCs were significantly higher (P < 0.01) for NC-TPD and AC-TPD than for NC (V1) and AC (V2) reads, respectively. The ROC AUCs for both visual readers at the final step (V4) were similar to AC-TPD analysis (0.91 vs. 0.92, P = not significant). Similarly, using a ≥50% stenosis cutoff, the ROC AUC for NC-TPD and AC-TPD was significantly higher (P < 0.01) than the visual NC and AC reads, and the visual ROC AUCs for the final read (V4) for both readers were also similar to automated analysis.

FIGURE 3. ROC curves comparing automated vs. visual reads on a per-patient basis (2 readers, detection of ≥70% stenosis). Automated analysis was also compared with visual analysis (NC vs. V1 and AC vs. V2–V4).

Software Versus Readers: Per-Vessel Diagnostic Performance

On a per-vessel basis, the diagnostic performance of the automated analysis was comparable to the visual analysis of reader 2, but the accuracies of NC-TPD and AC-TPD were superior to the analysis of reader 1 (P < 0.05) (Fig. 4). In individual territories, the diagnostic accuracy of the automated analysis for detection of ≥70% stenosis based on angiographic criteria was higher than reader 1 and equivalent to reader 2 in all steps (V1–V4) for left anterior descending and left circumflex arteries (supplemental data; supplemental materials are available online only at http://jnm.snmjournals.org). In addition, NC-TPD analysis of the right coronary artery was more accurate than both readers using NC data only (82% vs. 77%–78%, P < 0.05). Per-vessel diagnostic accuracy did not improve with the addition of computer and clinical analysis. Neither of the readers had higher diagnostic accuracy than the computer software in any of the territories. The addition of AC information to NC improved the diagnostic accuracy for automated analysis (83% vs. 81%, P < 0.05), which was similar to the pattern seen for per-patient analysis.

FIGURE 4. Diagnostic performance of automatic vs. visual analysis for detection of ≥70% coronary artery lesions on a per-vessel basis in all vessels. Automated analysis was also compared with visual analysis (NC vs. V1 and AC vs. V2–V4). *Statistically significant difference compared with prior step (P < 0.05). Green signifies that visual analysis was better than automated; red signifies that automated analysis was better than visual (P < 0.05).

The ROC curves comparing automated and visual measurements on a per-vessel basis (≥70% stenosis) are shown in Figure 5. The ROC AUC for NC-TPD and AC-TPD was significantly higher (P < 0.01) than the visual NC and AC reads (V1 and V2, respectively) for reader 2 and was comparable to reader 1.

FIGURE 5. ROC curves comparing automated vs. visual reads on a per-vessel basis for both readers (detection of ≥70% stenosis). Automated analysis was also compared with visual analysis (NC vs. V1 and AC vs. V2–V4).

Software Versus Reader: Ischemic Measurements

We also compared the performance in detection of ≥70% stenosis on a per-patient basis using automated (ITPD) and visual (SDS) ischemic measurements, with results generally similar to the stress measurements. The diagnostic accuracy of the automated analysis was 82% for NC-ITPD and 83% for AC-ITPD. The sensitivity of NC-ITPD was superior to visual scoring (87% vs. 78% for reader 1 and 63% for reader 2, P < 0.001), and the sensitivity of AC-ITPD (87%) was superior to that of reader 2 (74%–78%) for all steps (V2–V4) (P < 0.001). The specificities of both readers were superior for NC (86% and 91% vs. 80%, P < 0.001), and the specificity of reader 2 (V2–V4) was superior for AC data (P < 0.001). The accuracies of NC-ITPD and reader 1 were similar (82% vs. 83%, P = 0.56), but the accuracy of NC-ITPD was superior to that of reader 2 (82% vs. 78%, P = 0.03). The accuracy of AC-ITPD (83%) was similar to both readers for steps V2–V4 (83%–86%, P > 0.06). The ROC AUCs for NC-ITPD (0.90) and AC-ITPD (0.91) were better than those for the V1 reads (0.80–0.84, both readers) and V2–V4 reads (0.84–0.89, both readers) (P < 0.02), respectively.

Software Analysis: Impact of Manual Contour Adjustment

LV contours were manually adjusted for shape (localize or mask option) or valve plane (constrain option) in 11% of NC cases and 29% of AC cases (valve plane only in 58% for NC and 48% for AC of these adjustments) by an experienced Cedars-Sinai technologist. The shape or valve plane was manually adjusted in 21% of cases for NC and 34% of cases for AC data by a less experienced technologist (valve plane only in 84% for NC and 71% for AC of these adjustments) (P < 0.05 for both vs. experienced technologist).

The comparison of the diagnostic performance for the unsupervised, less experienced technologist and experienced technologist is shown in Table 4. Overall, sensitivities, specificities, and accuracies for NC and AC were similar for all 3 runs. There was a trend toward improved AC-TPD specificity for the more experienced technologist (P = 0.059). Nevertheless, the ROC AUC for NC and AC analysis was significantly lower for the inexperienced technologist than for the experienced technologist. We also compared these results to our expert visual readings (stages 1 and 2). There were no significant differences in overall diagnostic accuracy between the expert readers and these 3 types of automated analysis. However, the NC and AC ROC AUC for both readers was significantly lower than the ROC AUC for all 3 categories (P < 0.05).

TABLE 4. Comparison of Diagnostic Performance

DISCUSSION

The visual analysis of MPS, currently the recommended standard clinical practice, is dependent on a subjective interpretation of the data and prone to possible bias related to reader experience, which is also a major shortcoming of other noninvasive stress tests such as stress echocardiography. In this study, the overall diagnostic accuracy of the fully automated computer analysis using NC and AC MPS was at least equivalent to the expert visual reads on a per-patient basis. Furthermore, on a per-vessel basis, the automated analysis for detection of ≥70% stenosis based on angiographic criteria was at all times at least comparable and in some cases superior to the visual analysis. To our knowledge, this is the first study comprehensively comparing automated analysis to semiquantitative visual analysis by evaluating the incremental diagnostic value of supplementing NC data by AC images, computer analysis, and clinical information on a per-patient and per-vessel basis.

The reading experts in our study were attending physicians from premier imaging centers with extensive experience in MPS interpretation. It is therefore likely that a fully automatic analysis could play an integral role as a guide for the less experienced reader, who may be less certain about normal variation in uptake (7). Prior studies have demonstrated that less experienced readers have more variability than experienced readers (20). Our study demonstrated that automated analysis was at least comparable to visual reads in per-vessel territories and at times outperformed visual reads on the basis of diagnostic accuracy and ROC analysis for detection of ≥70% stenosis based on angiographic criteria. Therefore, although the American Society of Nuclear Cardiology currently recommends supplementation of quantitative analysis with visual analysis for MPS, it may become feasible for visual analysis to be used to override the quantitative analysis in only a minority of cases.

We also assessed the role of the nuclear technologist in contour verification during automated analysis. Surprisingly, there were no significant differences between sensitivity, specificity, and accuracy for fully unsupervised analysis, less experienced contour adjustments, and more experienced contour adjustments for NC and AC data. However, the ROC AUC for NC-TPD and AC-TPD for unsupervised analysis and analysis by a less experienced technologist were slightly lower than those generated using contours checked by an experienced technologist. Therefore, there could be some potential advantage of contour checking by an experienced technologist, but it is likely small. Nevertheless, the ROC AUC in all 3 cases was higher than the visual reads by both expert readers. Furthermore, the less experienced technologist was adjusting the contours more frequently, especially the valve plane, possibly because the quality control flag (especially valve flag) typically has low specificity and high sensitivity for detection of contour failures and can overindicate the need for adjustment.

Importantly, the visual analysis of stress NC and AC data included gated function information and stress and rest perfusion images, whereas the stress automated analysis was solely based on stress TPD and was therefore a truly stress-only perfusion analysis. Additionally, the comparison of the visual versus automated analysis using ischemic parameters did not reveal any significant differences in this patient population that had suspected CAD but no previous documented history of coronary disease. Therefore, the automatic performance results can be extrapolated to any stress-only studies, promising the reduction of patient imaging time and radiation dose (21).

We also evaluated the benefits of adding AC information to the NC information for both automated and visual analysis. On a per-patient and per-vessel basis, the addition of AC information to the NC data in our study resulted in an improved diagnosis of ≥70% stenosis based on accuracy and ROC evaluation for automated analysis, as is consistent with prior studies (10,22–24). Additionally, the overall agreement between the 2 readers, as well as between readers and TPD, improved with the addition of AC information. Therefore, our findings suggest that if AC is available it should routinely be used when interpreting MPS. It is possible that similar benefits could be obtained using prone–supine analysis instead of the use of AC data (25).

Prior studies have demonstrated that knowledge of clinical information may significantly change the interpretation of MPS studies (26). However, in our analysis the addition of clinical information did not routinely improve diagnosis of CAD from MPS, and the degree of agreement between the 2 readers did not significantly improve when they were provided with clinical information. However, the diagnostic accuracy for reader 2 did improve by 3% (P < 0.05) when clinical information was considered. Although these differences are small, the potential advantages of an experienced visual observer integrating clinical information cannot be easily dismissed.

The overall agreement between the 2 expert readers in our study was good (κ = 0.77–0.83). Although the overall accuracy was similar between the 2 readers, one expert had consistently higher sensitivity whereas the other had higher specificity, highlighting a common difficulty in providing a definite diagnosis from visual analysis of MPS. The data illustrate that different readers operate at different sensitivity and specificity thresholds, and this in part is the cause of the interobserver variability.

This study has several limitations. Visual coronary angiography interpretation was used as the gold standard for this study despite the known limitations of this method. Patients with low-likelihood data were included in the analysis and were considered to have normal angiographic data. However, prior studies have indicated that patients with normal angiographic data can often have abnormal MPS findings due to referral bias, which affects the overall diagnostic accuracy of MPS scans (27). Our comparisons are based solely on diagnostic performance for obstructive CAD; however, the visual analysis and software also provide information on the extent of the myocardium involved, which may have prognostic implications. Further prognostic studies are needed to clearly demonstrate the superiority of the automated method. Finally, the results were obtained on only one particular camera and attenuation-correction system; therefore, further multicenter evaluation will be required to confirm these results.

CONCLUSION

Automated computer analysis of combined NC and AC MPS data, with contours verified by an experienced technologist, is at least equivalent to visual analysis for the detection of ≥70% stenoses on coronary angiography, even when the reader is provided with clinical and LV function information, and can outperform the experienced reader on a per-territory basis. Furthermore, attenuation correction improves both the diagnostic accuracy of automated analysis and the interobserver agreement of visual analysis between readers.

DISCLOSURE

The costs of publication of this article were defrayed in part by the payment of page charges. Therefore, and solely to indicate this fact, this article is hereby marked “advertisement” in accordance with 18 USC section 1734. This research was supported in part by grants R01HL089765 and 5K23HL92299 from the National Heart, Lung, and Blood Institute/National Institutes of Health (NHLBI/NIH). Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the NHLBI. Cedars-Sinai Medical Center receives royalties for the quantitative assessment of function, perfusion, and viability, a portion of which is distributed to some of the authors of this manuscript (Daniel S. Berman, Guido Germano, and Piotr Slomka). No other potential conflict of interest relevant to this article was reported.

Acknowledgments

We thank Caroline Kilian and Arpine Oganyan for proofreading the text.

Footnotes

  • Published online Jan. 11, 2013

  • © 2013 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

REFERENCES

  1. Sharir T, Ben-Haim S, Merzon K, et al. High-speed myocardial perfusion imaging: initial clinical comparison with conventional dual-detector Anger camera imaging. JACC Cardiovasc Imaging. 2008;1:156–163.
  2. Berman DS, Kang X, Van Train KF, et al. Comparative prognostic value of automatic quantitative analysis versus semiquantitative visual analysis of exercise myocardial perfusion single-photon emission computed tomography. J Am Coll Cardiol. 1998;32:1987–1995.
  3. Leslie WD, Tully SA, Yogendran MS, et al. Prognostic value of automated quantification of 99mTc-sestamibi myocardial perfusion imaging. J Nucl Med. 2005;46:204–211.
  4. Tamaki N, Yonekura Y, Mukai T, et al. Stress thallium-201 transaxial emission computed tomography: quantitative versus qualitative analysis for evaluation of coronary artery disease. J Am Coll Cardiol. 1984;4:1213–1221.
  5. Berman DS, Kang X, Gransar H, et al. Quantitative assessment of myocardial perfusion abnormality on SPECT myocardial perfusion imaging is more reproducible than expert visual analysis. J Nucl Cardiol. 2009;16:45–53.
  6. Iskandrian AE, Garcia EV, Faber T, et al. Automated assessment of serial SPECT myocardial perfusion images. J Nucl Cardiol. 2009;16:6–9.
  7. Holly TA, Abbott BG, Al-Mallah M, et al. Single photon-emission computed tomography. J Nucl Cardiol. 2010;17:941–973.
  8. Grossman GB, Garcia EV, Bateman TM, et al. Quantitative Tc-99m sestamibi attenuation-corrected SPECT: development and multicenter trial validation of myocardial perfusion stress gender-independent normal database in an obese population. J Nucl Cardiol. 2004;11:263–272.
  9. Thompson RC, Heller GV, Johnson LL, et al. Value of attenuation correction on ECG-gated SPECT myocardial perfusion imaging related to body mass index. J Nucl Cardiol. 2005;12:195–202.
  10. Xu Y, Fish M, Gerlach J, et al. Combined quantitative analysis of attenuation corrected and non-corrected myocardial perfusion SPECT: method development and clinical validation. J Nucl Cardiol. 2010;17:591–599.
  11. Diamond GA, Forrester JS. Analysis of probability as an aid in the clinical diagnosis of coronary-artery disease. N Engl J Med. 1979;300:1350–1358.
  12. Slomka PJ, Fish MB, Lorenzo S, et al. Simplified normal limits and automated quantitative assessment for attenuation-corrected myocardial perfusion SPECT. J Nucl Cardiol. 2006;13:642–651.
  13. Germano G, Berman DS. On the accuracy and reproducibility of quantitative gated myocardial perfusion SPECT. J Nucl Med. 1999;40:810–813.
  14. Xu Y, Kavanagh P, Fish M, et al. Automated quality control for segmentation of myocardial perfusion SPECT. J Nucl Med. 2009;50:1418–1426.
  15. Slomka PJ, Nishina H, Berman DS, et al. Automated quantification of myocardial perfusion SPECT using simplified normal limits. J Nucl Cardiol. 2005;12:66–77.
  16. Germano G, Kavanagh PB, Slomka PJ, et al. Quantitation in gated perfusion SPECT imaging: the Cedars-Sinai approach. J Nucl Cardiol. 2007;14:433–454.
  17. Wolak A, Slomka PJ, Fish MB, et al. Quantitative myocardial-perfusion SPECT: comparison of three state-of-the-art software packages. J Nucl Cardiol. 2008;15:27–34.
  18. Shoukri MM. Measures of Interobserver Agreement and Reliability. 2nd ed. Boca Raton, FL: CRC Press; 2010.
  19. DeLong ER, DeLong DM, Clarke-Pearson DL. Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach. Biometrics. 1988;44:837–845.
  20. Golub RJ, Ahlberg AW, McClellan JR, et al. Interpretive reproducibility of stress Tc-99m sestamibi tomographic myocardial perfusion imaging. J Nucl Cardiol. 1999;6:257–269.
  21. Heller GV, Bateman TM, Johnson LL, et al. Clinical value of attenuation correction in stress-only Tc-99m sestamibi SPECT imaging. J Nucl Cardiol. 2004;11:273–281.
  22. Ficaro EP, Fessler JA, Shreve PD, et al. Simultaneous transmission/emission myocardial perfusion tomography: diagnostic accuracy of attenuation-corrected 99mTc-sestamibi single-photon emission computed tomography. Circulation. 1996;93:463–473.
  23. Kluge R, Sattler B, Seese A, et al. Attenuation correction by simultaneous emission-transmission myocardial single-photon emission tomography using a technetium-99m-labelled radiotracer: impact on diagnostic accuracy. Eur J Nucl Med. 1997;24:1107–1114.
  24. Shotwell M, Singh BM, Fortman C, et al. Improved coronary disease detection with quantitative attenuation-corrected Tl-201 images. J Nucl Cardiol. 2002;9:52–62.
  25. Nishina H, Slomka PJ, Abidov A, et al. Combined supine and prone quantitative myocardial perfusion SPECT: method development and clinical validation in patients with no known coronary artery disease. J Nucl Med. 2006;47:51–58.
  26. Simons M, Parker JA, Donohoe KJ, et al. The impact of clinical data on interpretation of thallium scintigrams. J Nucl Cardiol. 1994;1:365–371.
  27. Miller TD, Hodge DO, Christian TF, et al. Effects of adjustment for referral bias on the sensitivity and specificity of single photon emission computed tomography for the diagnosis of coronary artery disease. Am J Med. 2002;112:290–297.
  • Received for publication May 18, 2012.
  • Accepted for publication September 17, 2012.
Journal of Nuclear Medicine, February 2013, Vol. 54, Issue 2, pp. 221–228; DOI: 10.2967/jnumed.112.108969
Keywords

  • automated quantification
  • coronary artery disease
  • myocardial perfusion SPECT
  • total perfusion deficit