AEPH Conferences, Vol. 4: SDIT2024
DeepGI: An Automated Approach for Gastrointestinal Tract Segmentation in MRI Scans
DOI: https://doi.org/10.62381/ACS.SDIT2024.63
Author(s)
Ye Zhang1,*, Yulu Gong2, Dongji Cui3, Xinrui Li4, Xinyu Shen5
Affiliation(s)
1 University of Pittsburgh, Pittsburgh, USA
2 Northern Arizona University, Flagstaff, USA
3 Trine University, Phoenix, USA
4 Cornell University, New York, USA
5 Columbia University, Frisco, USA
* Corresponding author
Abstract
Accurate detection and delineation of gastrointestinal (GI) tract regions remains essential for improved radiotherapy outcomes in GI cancer treatment. This study introduces a deep learning model for automated segmentation of GI regions in MRI scans. Its architecture combines Inception-V4 for classification, a UNet++ with a VGG19 encoder for 2.5D segmentation, and an Edge UNet optimized for grayscale images. Detailed data preprocessing, including 2.5D data handling (stacking adjacent slices as input channels to give the 2D network through-plane context), is employed to enhance segmentation precision. By replacing time-consuming manual segmentation with a streamlined, high-accuracy pipeline, the model captures the complex GI structures crucial for treatment planning.
Keywords
Deep Learning, MRI Segmentation, Inception-V4, UNet++, Edge UNet, Gastrointestinal Imaging
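The 2.5D data handling mentioned in the abstract is a standard preprocessing step: neighboring axial slices are stacked as channels so a 2D segmentation network sees local through-plane context. The sketch below illustrates the idea with NumPy; the function name, the channel depth of 3, and the boundary-clamping policy are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def make_25d_stack(volume, index, depth=3):
    """Stack `depth` adjacent axial slices around `index` into channels.

    2.5D preprocessing (illustrative sketch): each sample becomes a
    multi-channel image whose channels are neighboring slices, giving a
    2D network such as UNet++ some through-plane context. Slice indices
    that fall outside the volume are clamped to the boundary slice
    (one possible policy; zero-padding is another common choice).
    """
    half = depth // 2
    idxs = [min(max(index + o, 0), volume.shape[0] - 1)
            for o in range(-half, half + 1)]
    # Result shape: (H, W, depth) -- channels-last for clarity.
    return np.stack([volume[i] for i in idxs], axis=-1)

# Toy example: a 10-slice "scan" of 64x64 grayscale images.
scan = np.random.rand(10, 64, 64).astype(np.float32)
sample = make_25d_stack(scan, index=0, depth=3)
print(sample.shape)  # (64, 64, 3)
```

At the first slice (index 0), the out-of-range neighbor index -1 is clamped to 0, so the first two channels both contain slice 0 and the third contains slice 1.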