Endoscopic Ultrasound Image Classification Using a Mobilenet-Resnet Distillation Model
Keywords: Endoscopic ultrasound (EUS), Deep learning, Image classification, Knowledge distillation, Convolutional neural network (CNN), ResNet-50, Medical image analysis, Lightweight model, Image preprocessing, Data augmentation

Abstract
Accurate detection of gastrointestinal (GI) diseases such as adenocarcinomas through Endoscopic Ultrasound (EUS) imaging is vital for early diagnosis and improved clinical outcomes. However, the inherent noise, low contrast, and complex textures of EUS images pose major challenges for reliable automated analysis. This study presents AUTOEUS, a lightweight deep learning framework capable of both anatomical region classification and adenocarcinoma detection from EUS images. The proposed pipeline incorporates a two-stage preprocessing step, using median filtering for noise suppression and Y-channel histogram equalization for contrast enhancement, which significantly improves image clarity. A teacher–student knowledge distillation architecture is implemented, in which a ResNet-50–based teacher network guides a compact convolutional student model, retaining high accuracy at reduced computational cost. Experimental evaluation, performed in MATLAB using augmented image datastores with binary classification metrics computed per region, demonstrates strong diagnostic performance, achieving accuracies of 90.70% (cecum), 95.81% (ileum), 80.00% (pylorus), 90.23% (rectum), and 90.23% (stomach). Corresponding F1-scores reached up to 97.06%, validating the model’s strong precision–recall balance across multiple adenocarcinoma types. The visualization results confirm the framework’s ability to classify true positive disease cases with high confidence. Owing to its lightweight architecture and robustness, AUTOEUS shows high potential for real-time EUS-based disease detection and clinical decision support, offering a scalable, low-cost, and reliable tool for gastrointestinal cancer diagnosis.
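The two-stage preprocessing described above (median filtering followed by histogram equalization on the luminance channel) can be sketched as follows. This is an illustrative Python/NumPy sketch, not the authors' MATLAB implementation; all function and variable names here are ours, and the RGB-to-luma conversion uses standard BT.601 weights as an assumption about the Y-channel step.

```python
# Illustrative sketch of the abstract's two-stage preprocessing:
# (1) median filtering for speckle/noise suppression,
# (2) histogram equalization applied to the luminance (Y) channel.
# Not the paper's code; names and parameters are assumptions.
import numpy as np
from scipy.ndimage import median_filter

def equalize_channel(y: np.ndarray) -> np.ndarray:
    """Histogram-equalize a uint8 single-channel image via its CDF."""
    hist = np.bincount(y.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first nonzero bin of the CDF
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[y]

def preprocess_eus(rgb: np.ndarray) -> np.ndarray:
    """Denoise an RGB EUS frame, then equalize contrast on the Y channel."""
    denoised = median_filter(rgb, size=(3, 3, 1))  # per-channel 3x3 median
    # RGB -> Y (BT.601 luma); chroma channels omitted for brevity
    y = (0.299 * denoised[..., 0] + 0.587 * denoised[..., 1]
         + 0.114 * denoised[..., 2]).astype(np.uint8)
    return equalize_channel(y)

# Demo on a synthetic low-contrast, noisy frame
rng = np.random.default_rng(0)
img = rng.integers(100, 140, size=(64, 64, 3), dtype=np.uint8)  # narrow range
out = preprocess_eus(img)
print(out.min(), out.max())  # equalization stretches the dynamic range
```

The median filter removes impulsive noise without blurring edges as strongly as a mean filter would, and restricting equalization to the luminance channel enhances contrast without distorting color, which matches the abstract's rationale for the two stages.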
License
Copyright (c) 2026 International Journal of Scientific Research in Science and Technology

This work is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0).