Robust learning-based parsing and annotation of medical radiographs

IEEE Trans Med Imaging. 2011 Feb;30(2):338-50. doi: 10.1109/TMI.2010.2077740. Epub 2010 Sep 27.

Abstract

In this paper, we propose a learning-based algorithm for automatic medical image annotation based on robust aggregation of learned local appearance cues, achieving high accuracy and robustness against severe diseases, imaging artifacts, occlusion, or missing data. The algorithm starts with a number of landmark detectors that collect local appearance cues throughout the image, which are subsequently verified by a group of learned sparse spatial configuration models. In most cases, a decision can already be made at this stage by simply aggregating the verified detections. For the remaining cases, an additional global appearance filtering step is employed to provide complementary information for the final decision. This approach is evaluated on a large-scale chest radiograph view identification task, demonstrating very high accuracy (>99.9%) for posteroanterior/anteroposterior (PA-AP) versus lateral view position identification, compared with the recently reported large-scale result of only 98.2% (Luo, 2006). Our approach also achieved the best accuracies on a three-class and a multiclass radiograph annotation task when compared with other state-of-the-art algorithms. Our algorithm was used to enhance advanced image visualization workflows by enabling content-sensitive hanging protocols and auto-invocation of a computer-aided detection algorithm for identified PA-AP chest images. Finally, we show that the same methodology can be utilized for several image parsing applications, including anatomy/organ region of interest prediction and optimized image visualization.
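To make the cascade described above more concrete, the sketch below shows one way the three stages (local landmark detection, sparse spatial configuration verification, and the global appearance fallback) could be wired together. All class names, method signatures, and the confidence threshold are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the annotation cascade summarized in the abstract.
# Detector, spatial-model, and filter interfaces are assumed, not from the paper.

def annotate_view(image, landmark_detectors, spatial_model, global_filter,
                  confidence_threshold=0.9):
    """Classify a radiograph view (e.g., PA-AP vs. lateral) by aggregating
    verified local landmark detections, falling back to a global appearance
    filter when the aggregated vote is not decisive."""
    # Stage 1: collect local appearance cues from every landmark detector.
    candidates = [det.detect(image) for det in landmark_detectors]
    candidates = [c for c in candidates if c is not None]

    # Stage 2: keep only detections consistent with the learned sparse
    # spatial configuration model.
    verified = [c for c in candidates
                if spatial_model.is_consistent(c, candidates)]

    # Aggregate verified detections into per-class votes.
    votes = {}
    for det in verified:
        votes[det.label] = votes.get(det.label, 0.0) + det.confidence

    if votes:
        best_label, best_score = max(votes.items(), key=lambda kv: kv[1])
        # Decide immediately when the aggregated vote is confident enough,
        # as is the case for most images.
        if best_score / sum(votes.values()) >= confidence_threshold:
            return best_label

    # Stage 3: ambiguous cases fall through to the global appearance filter,
    # which supplies complementary whole-image information.
    return global_filter.classify(image)
```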

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms*
  • Artifacts
  • Artificial Intelligence*
  • Humans
  • Image Processing, Computer-Assisted / methods*
  • Radiography / methods*