Visually Impaired People Empowered by Deploying CNN-Based System on Low-Power Wearable Platforms
Keywords: CNN, Smart Glasses, Visual Impairment, Object Detection, Assistive Technology, Deep Learning

Abstract
Visual impairment affects tens of millions of people globally, often limiting their ability to perform routine activities independently. Recent developments in deep learning and computer vision have opened new possibilities for building smart assistive devices. This paper discusses the use of Convolutional Neural Networks (CNNs) in designing smart glasses to assist visually impaired people. Through a review of 15 recent studies in this field, we compare and contrast different CNN-based methods for object detection, obstacle avoidance, text reading, and navigation assistance. These approaches show great promise for real-time scene interpretation and user interaction in wearable devices. Our results highlight important design trends, challenges, and performance metrics for deploying CNNs on low-power wearable platforms. The findings of this work provide a basis for developing functional smart glasses capable of offering real-time feedback and enhancing the mobility, safety, and independence of visually impaired individuals.
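To make the deployment scenario concrete, the sketch below shows one common way a lightweight CNN object detector can run on a low-power wearable: a quantized MobileNet-SSD style model executed with the TensorFlow Lite interpreter. This is a minimal illustration, not the system described in the paper; the model file name, output tensor ordering, and the dummy camera frame are assumptions and would differ on a real device.

```python
# Minimal sketch: running a quantized SSD-MobileNet style detector with the
# TensorFlow Lite interpreter on an embedded/wearable device.
# The model path and output ordering are assumptions for illustration only.
import numpy as np
from tflite_runtime.interpreter import Interpreter  # or tf.lite.Interpreter

MODEL_PATH = "ssd_mobilenet_quant.tflite"  # hypothetical model file
SCORE_THRESHOLD = 0.5

interpreter = Interpreter(model_path=MODEL_PATH, num_threads=2)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# On real smart glasses this frame would come from the onboard camera;
# here we use a dummy uint8 image matching the model's expected input shape.
_, height, width, _ = input_details[0]["shape"]
frame = np.zeros((1, height, width, 3), dtype=np.uint8)

interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()

# Typical post-processed SSD outputs: boxes, classes, scores
# (the exact output ordering varies between exported models).
boxes = interpreter.get_tensor(output_details[0]["index"])[0]
classes = interpreter.get_tensor(output_details[1]["index"])[0]
scores = interpreter.get_tensor(output_details[2]["index"])[0]

for box, cls, score in zip(boxes, classes, scores):
    if score >= SCORE_THRESHOLD:
        # Detections above the threshold could be converted into audio
        # or haptic feedback for the wearer.
        print(f"class {int(cls)} score {score:.2f} box {box.tolist()}")
```

On constrained hardware, quantized models and a small number of interpreter threads like this keep latency and power draw low enough for real-time feedback, which is the central deployment constraint the reviewed studies address.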
License
This is an open access article published by the Research Center of Computing & Biomedical Informatics (RCBI), Lahore, Pakistan, under the CC BY 4.0 International License.