Rapid Screening of Attention-Deficit/Hyperactivity Disorder using Fundus Photography with Retinal Vessel and Optic Disc Morphology
DOI: https://doi.org/10.47611/jsrhs.v14i1.8494

Keywords: fundus, machine learning, attention-deficit hyperactivity disorder, convolutional neural network

Abstract
Attention-Deficit/Hyperactivity Disorder (ADHD) is a neurodevelopmental disorder that is increasingly prevalent in younger age groups. Because the number of children diagnosed with ADHD continues to rise, effective screening methods and treatment plans are needed. Traditional ADHD screening for children includes electroencephalography to analyze brain activity; behavioral checklists that record patient behaviors resembling those of ADHD; and the red circle green box method, which labels behaviors that may indicate ADHD as red circles and all others as green boxes. These approaches, however, tend to be subjective and are therefore limited in accuracy. This paper presents an alternative means of predicting whether a child has ADHD, using machine learning with convolutional neural network architectures. The proposed model offers rapid and accurate ADHD screening by segmenting specific features of fundus photographs, namely the retinal vessels and optic disc. Experiments across six convolutional neural network architectures yielded a highest accuracy of 88.15% with the DenseNet-201 model. Two different morphological operations were applied in an ablation study to demonstrate these features' contribution to overall model performance. The model thus shows promise as a biomarker for detecting ADHD and assessing its severity; with further development, it has high potential to serve as a screening tool that is both accurate and widely accessible.
Copyright (c) 2025 Jooha Lee; Sherrie Lah

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Copyright holder(s) granted JSR a perpetual, non-exclusive license to distribute and display this article.


