SmartEye: A Machine Learning Approach to Enhance Mobility for the Visually Impaired through Depth Estimation and Object Detection
DOI: https://doi.org/10.47611/jsrhs.v14i1.8507

Keywords: Depth Estimation, Object Detection

Abstract
Visually impaired individuals face significant challenges in their daily lives, particularly when navigating streets and public spaces. Urban environments pose unique difficulties: obstacles, uneven surfaces, and traffic can all create hazardous situations. With the growing prevalence of visual impairment, it is increasingly important to develop effective methods and technologies that help these individuals navigate their surroundings safely and confidently. To address this need, we propose SmartEye, a machine learning-based mobility assistant that combines depth estimation and object detection. The system features a compact camera module mounted on the user’s glasses, which captures the scene in front of them. Using object detection and depth estimation algorithms, SmartEye analyzes the surroundings in real time, identifying obstacles and estimating their distances. The outputs of the two models are then fused to provide a comprehensive understanding of the user’s environment. This information is communicated to the user through a speaker attached to the glasses, offering essential guidance and enhancing mobility and safety in public spaces. The proposed system achieved an absolute relative error of 0.068 for depth estimation and a mean average precision of 57.5 for object detection on a public dataset. We also evaluated SmartEye in real-world street scenarios; the results demonstrated the system’s feasibility and effectiveness in assisting visually impaired individuals in navigating complex environments.
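To make the fusion step concrete, below is a minimal sketch of how per-object detections can be paired with a monocular depth map and announced by voice, in the spirit of the pipeline the abstract describes. The specific model choices (YOLOv5 via torch.hub, MiDaS for relative depth, pyttsx3 for speech) and all file names are our assumptions for illustration; the paper's exact implementation is not published here.

```python
# Hedged sketch: fuse object detection with monocular depth estimation,
# then announce the nearest obstacles via text-to-speech.
import cv2
import torch
import pyttsx3

detector = torch.hub.load("ultralytics/yolov5", "yolov5s")    # object detection
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")      # depth estimation
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform
tts = pyttsx3.init()

frame = cv2.imread("street.jpg")                              # stand-in for the glasses camera feed
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

# Run both models on the same frame.
detections = detector(rgb).xyxy[0]                            # rows: (x1, y1, x2, y2, conf, class)
with torch.no_grad():
    depth = midas(transform(rgb))
    depth = torch.nn.functional.interpolate(
        depth.unsqueeze(1), size=rgb.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze()

# Fuse: take the median depth inside each detection box. MiDaS outputs
# relative inverse depth, so LARGER values mean NEARER objects.
scored = []
for x1, y1, x2, y2, conf, cls in detections.tolist():
    box_depth = depth[int(y1):int(y2), int(x1):int(x2)].median().item()
    scored.append((box_depth, detector.names[int(cls)]))

# Announce the three nearest detected objects through the speaker.
for inv_depth, name in sorted(scored, reverse=True)[:3]:
    tts.say(f"{name} ahead")
tts.runAndWait()
```

A real system would run this loop on live camera frames and convert relative depth to approximate metric distance (for example, via a calibration step), but the box-wise depth pooling shown above captures the core integration idea.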
Copyright (c) 2025 Daniel Chung, Ji Tae Kim; Joyce Pereira

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Copyright holder(s) granted JSR a perpetual, non-exclusive license to distribute and display this article.


