Eye Care You: Voice Guidance Application Using Social Robot for Visually Impaired People
- URL: http://arxiv.org/abs/2511.15110v1
- Date: Wed, 19 Nov 2025 04:34:54 GMT
- Title: Eye Care You: Voice Guidance Application Using Social Robot for Visually Impaired People
- Authors: Ting-An Lin, Pei-Lin Tsai, Yi-An Chen, Feng-Yu Chen, Lyn Chao-ling Chen
- Abstract summary: The photo record function allows visually impaired users to capture an image immediately when they encounter dangerous situations. The mood lift function accompanies visually impaired users by asking questions, playing music and reading articles. The greeting guest function answers visitors on behalf of visually impaired users whose physical condition makes responding inconvenient.
- Score: 1.3048920509133806
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this study, a social robot device was designed for visually impaired users, along with a mobile application that provides functions to assist in their daily lives. Both the physical and mental conditions of visually impaired users are considered, and the mobile application provides four functions: photo record, mood lift, greeting guest and today highlight. The application was designed for visually impaired users and uses voice control to provide a friendly interface. The photo record function allows visually impaired users to capture an image immediately when they encounter dangerous situations. The mood lift function accompanies visually impaired users by asking questions, playing music and reading articles. The greeting guest function answers visitors on behalf of visually impaired users whose physical condition makes responding inconvenient. In addition, the today highlight function reads news, including the weather forecast, daily horoscopes and daily reminders, to visually impaired users. Multiple tools were adopted for developing the mobile application, and a website was developed for caregivers to check the status of visually impaired users and for marketing of the application.
Related papers
- An Artificial Intelligence-based Assistant for the Visually Impaired [2.7825760447670955]
This paper describes an artificial intelligence-based assistant application, AIDEN, developed during 2023 and 2024. Visually impaired individuals face challenges in identifying objects, reading text, and navigating unfamiliar environments. This application leverages state-of-the-art machine learning algorithms to identify and describe objects, read text, and answer questions about the environment.
arXiv Detail & Related papers (2025-11-08T17:23:51Z) - Screencast-Based Analysis of User-Perceived GUI Responsiveness [53.53923672866705]
The tool is a technique that measures GUI responsiveness directly from mobile screencasts. It uses computer vision to detect user interactions and analyzes frame-level visual changes to compute two key metrics. The tool has been deployed in an industrial testing pipeline and analyzes thousands of screencasts daily.
arXiv Detail & Related papers (2025-08-02T12:13:50Z) - Casper: Inferring Diverse Intents for Assistive Teleoperation with Vision Language Models [50.19518681574399]
A central challenge in real-world assistive teleoperation is for the robot to infer a wide range of human intentions from user control inputs. We introduce Casper, an assistive teleoperation system that leverages commonsense knowledge embedded in pre-trained visual language models. We show that Casper improves task performance, reduces human cognitive load, and achieves higher user satisfaction than direct teleoperation and assistive teleoperation baselines.
arXiv Detail & Related papers (2025-06-17T17:06:43Z) - Modeling User Preferences via Brain-Computer Interfacing [54.3727087164445]
We use Brain-Computer Interfacing technology to infer users' preferences, their attentional correlates towards visual content, and their associations with affective experience.
We link these to relevant applications, such as information retrieval, personalized steering of generative models, and crowdsourcing population estimates of affective experiences.
arXiv Detail & Related papers (2024-05-15T20:41:46Z) - Real-Time Pill Identification for the Visually Impaired Using Deep Learning [31.747327310138314]
This paper explores the development and implementation of a deep learning-based mobile application designed to assist blind and visually impaired individuals in real-time pill identification.
The application aims to accurately recognize and differentiate between various pill types through real-time image processing on mobile devices.
arXiv Detail & Related papers (2024-05-08T03:18:46Z) - Improve accessibility for Low Vision and Blind people using Machine Learning and Computer Vision [0.0]
This project explores how machine learning and computer vision could be utilized to improve accessibility for people with visual impairments.
This project will concentrate on building a mobile application that helps blind people to orient in space by receiving audio and haptic feedback.
arXiv Detail & Related papers (2024-03-24T21:19:17Z) - MagicEye: An Intelligent Wearable Towards Independent Living of Visually Impaired [0.17499351967216337]
Vision impairment can severely impair a person's ability to work, navigate, and retain independence.
We present MagicEye, a state-of-the-art intelligent wearable device designed to assist visually impaired individuals.
With a total of 35 classes, the neural network employed by MagicEye has been specifically designed to achieve high levels of efficiency and precision in object detection.
arXiv Detail & Related papers (2023-03-24T08:59:35Z) - ColorSense: A Study on Color Vision in Machine Visual Recognition [57.916512479603064]
We collect 110,000 non-trivial human annotations of foreground and background color labels from visual recognition benchmarks. We validate the use of our datasets by demonstrating that the level of color discrimination has a dominating effect on the performance of machine perception models. Our findings suggest that object recognition tasks such as classification and localization are susceptible to color vision bias.
arXiv Detail & Related papers (2022-12-16T18:51:41Z) - VisBuddy -- A Smart Wearable Assistant for the Visually Challenged [0.0]
VisBuddy is a voice-based assistant, where the user can give voice commands to perform specific tasks.
It uses the techniques of image captioning for describing the user's surroundings, optical character recognition (OCR) for reading the text in the user's view, object detection to search and find the objects in a room and web scraping to give the user the latest news.
arXiv Detail & Related papers (2021-08-17T17:15:23Z) - Learning Language and Multimodal Privacy-Preserving Markers of Mood from Mobile Data [74.60507696087966]
Mental health conditions remain underdiagnosed even in countries with common access to advanced medical care.
One promising data source to help monitor human behavior is daily smartphone usage.
We study behavioral markers of daily mood using a recent dataset of mobile behaviors from adolescent populations at high risk of suicidal behaviors.
arXiv Detail & Related papers (2021-06-24T17:46:03Z) - Assisted Perception: Optimizing Observations to Communicate State [112.40598205054994]
We aim to help users estimate the state of the world in tasks like robotic teleoperation and navigation with visual impairments.
We synthesize new observations that lead to more accurate internal state estimates when processed by the user.
arXiv Detail & Related papers (2020-08-06T19:08:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the listed information and is not responsible for any consequences of its use.