Abstract
Introduction: The rapid advances in supervised Deep Learning (DL) methods have resulted in a variety of automatic tumor segmentation models for single- and multi-modal images such as PET-CT. However, developing such models usually requires a large amount of high-quality labeled data representing all classes of pathological variation. Unsupervised Anomaly Detection (UAD) methods instead aim to learn the appearance of healthy anatomy and identify pathological regions that do not fit the learned distribution. Despite promising results in certain applications, conventional UAD models are trained on an incomplete distribution of healthy anatomies and cannot preserve anatomical constraints well at full resolution. In this study, we propose an inpainting-based UAD model that detects tumoral regions and replaces them with healthy tissue while maintaining anatomical details in full-resolution PET-CT images.
Methods: A robust encoder-decoder model based on the Gated Convolution (GConv) operator [1] was developed to inpaint images and reconstruct realistic-looking images from corrupted ones.
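The core idea of the GConv operator is that each layer learns a soft gating mask alongside its features, so hole (invalid) pixels can be suppressed instead of being treated as valid input, as in a vanilla convolution. The following is a minimal single-channel numpy sketch of that gating mechanism, not the paper's implementation; the kernel shapes, activations, and the naive `conv2d` helper are illustrative assumptions.

```python
import numpy as np

def conv2d(x, w):
    """Naive 'valid' 2D cross-correlation of a single-channel image x with kernel w."""
    kh, kw = w.shape
    H, W = x.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_conv(x, w_feat, w_gate):
    """Gated convolution sketch: a learned soft mask in (0, 1) gates the
    feature response element-wise, letting the layer down-weight pixels
    that belong to holes in the corrupted input."""
    features = np.tanh(conv2d(x, w_feat))  # feature branch
    gate = sigmoid(conv2d(x, w_gate))      # gating branch
    return features * gate                 # element-wise soft gating

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
y = gated_conv(x, rng.standard_normal((3, 3)), rng.standard_normal((3, 3)))
print(y.shape)  # (6, 6)
```

In the real network both branches are learned convolutional layers applied per channel at every spatial location, and the gating is trained end-to-end with the rest of the model.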
The model was trained on healthy images heavily corrupted with irregular random holes and was forced to fill the holes with anatomically meaningful, fine-grained patterns. It was optimized with a multi-term objective function comprising intensity, perceptual, style, total-variation, and Laplacian losses. In the test phase, an auto-inpainting strategy was developed to automatically detect and remove tumoral regions as anomalies. The sub-regions to be inpainted were selected with a sliding-window approach: if the window covers a healthy region, the inpainting network regenerates it from the already learned healthy appearance, so the newly generated image remains essentially intact; if the window encounters a tumoral region, the tumor texture is replaced with the appearance of learned healthy tissue. A synthetic tumor-free image can therefore be generated for each original tumoral image. A set of post-processing steps was applied to the residual between the original and reconstructed images in order to segment the tumors. Figure 1 shows a graphical illustration of the proposed pipeline. Three datasets, comprising the AutoPET challenge, the HECKTOR challenge, and an internal dataset, were used to evaluate segmentation of Lung Cancer (LC) and Head and Neck (HN) tumors.
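Several terms of the multi-term objective can be written down directly. The sketch below shows the intensity (L1), total-variation, and Laplacian terms in numpy under assumed loss weights; the perceptual and style terms require a pretrained feature extractor and are omitted. Function names and weights are hypothetical, not taken from the paper.

```python
import numpy as np

def total_variation_loss(img):
    """Mean absolute difference between neighboring pixels;
    encourages spatial smoothness of the reconstruction."""
    dy = np.abs(np.diff(img, axis=0)).mean()
    dx = np.abs(np.diff(img, axis=1)).mean()
    return dx + dy

def laplacian_loss(pred, target):
    """Compares second-order derivatives, emphasizing fine edge detail."""
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    def lap(im):
        out = np.zeros_like(im)
        H, W = im.shape
        for i in range(1, H - 1):
            for j in range(1, W - 1):
                out[i, j] = np.sum(im[i - 1:i + 2, j - 1:j + 2] * k)
        return out
    return np.abs(lap(pred) - lap(target)).mean()

def combined_loss(pred, target, w_int=1.0, w_tv=0.1, w_lap=0.5):
    """Weighted sum of intensity (L1), total-variation, and Laplacian terms
    (perceptual/style terms omitted; weights are illustrative)."""
    intensity = np.abs(pred - target).mean()
    return (w_int * intensity
            + w_tv * total_variation_loss(pred)
            + w_lap * laplacian_loss(pred, target))

x = np.ones((16, 16))
print(combined_loss(x, x))  # 0.0 for a perfect, constant reconstruction
```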
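The auto-inpainting test-time strategy described above can be sketched as follows. This is a simplified illustration, not the paper's code: `inpaint_fn` stands in for the trained GConv network, and the window size, stride, and residual threshold are assumed values.

```python
import numpy as np

def auto_inpaint(image, inpaint_fn, window=64, stride=32):
    """Slide a window over the image and let the inpainting network
    regenerate each sub-region from its surrounding context.
    Healthy regions are reproduced almost unchanged, while tumoral
    textures are replaced by learned healthy appearance."""
    restored = image.copy()
    H, W = image.shape
    for y in range(0, H - window + 1, stride):
        for x in range(0, W - window + 1, stride):
            mask = np.zeros_like(image, dtype=bool)
            mask[y:y + window, x:x + window] = True   # region to regenerate
            filled = inpaint_fn(restored, mask)
            restored[mask] = filled[mask]
    return restored

def segment_from_residual(original, restored, threshold=0.5):
    """Post-processing: threshold the residual between the original and
    the synthetic tumor-free image to obtain a binary tumor mask."""
    residual = np.abs(original - restored)
    return residual > threshold

# Toy demo with an identity 'network': the residual is zero everywhere,
# so no anomaly is detected.
img = np.zeros((128, 128))
restored = auto_inpaint(img, lambda im, m: im)
mask = segment_from_residual(img, restored)
print(mask.sum())  # 0
```

In practice the residual post-processing would also include morphological cleanup and connected-component filtering before the final tumor mask is produced.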
Results: Table 1 shows a summary of the achieved results over the validation sets in terms of conventional segmentation metrics.
Table 1. Segmentation performance of the proposed model
For objective comparison, eight conventional UAD models were examined on the same datasets. In addition, the powerful supervised nnU-Net model [2] was employed to determine the optimal achievable segmentation performance.
Table 2. Segmentation performance of the baseline models
Conclusions: The proposed model significantly outperformed a variety of state-of-the-art (SOTA) UAD models and achieved results comparable to a robust supervised model.
References
[1] J. Yu, Z. Lin, J. Yang, X. Shen, X. Lu, T. Huang, "Free-form image inpainting with gated convolution," Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2019, pp. 4470-4479. https://doi.org/10.1109/ICCV.2019.00457
[2] F. Isensee, P. F. Jaeger, S. A. A. Kohl, J. Petersen, K. H. Maier-Hein, "nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation," Nature Methods, vol. 18, no. 2, pp. 203-211, 2021. https://doi.org/10.1038/s41592-020-01008-z