MotorEase: Automated Detection of Motor Impairment Accessibility Issues in Mobile App UIs
- URL: http://arxiv.org/abs/2403.13690v1
- Date: Wed, 20 Mar 2024 15:53:07 GMT
- Title: MotorEase: Automated Detection of Motor Impairment Accessibility Issues in Mobile App UIs
- Authors: Arun Krishnavajjala, SM Hasan Mansur, Justin Jose, Kevin Moran
- Abstract summary: MotorEase is capable of identifying accessibility issues in mobile app UIs that impact motor-impaired users.
It adapts computer vision and text processing techniques to enable a semantic understanding of app UI screens.
It is able to identify violations with an average accuracy of ~90% and a false positive rate of less than 9%.
- Score: 8.057618278428494
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent research has begun to examine the potential of automatically finding and fixing accessibility issues that manifest in software. While this work makes important progress, it has generally been skewed toward identifying issues that affect users with certain disabilities, such as those with visual or hearing impairments. Other groups of users with different types of disabilities also need software tooling support to improve their experience. As such, this paper aims to automatically identify accessibility issues that affect users with motor impairments. To move toward this goal, this paper introduces a novel approach, called MotorEase, capable of identifying accessibility issues in mobile app UIs that impact motor-impaired users. Motor-impaired users often have limited ability to interact with touch-based devices, and instead may make use of a switch or other assistive mechanism -- hence UIs must be designed to support both limited touch gestures and the use of assistive devices. MotorEase adapts computer vision and text processing techniques to enable a semantic understanding of app UI screens, enabling the detection of violations related to four popular, previously unexplored UI design guidelines that support motor-impaired users: (i) visual touch target size, (ii) expanding sections, (iii) persisting elements, and (iv) adjacent icon visual distance. We evaluate MotorEase on a newly derived benchmark, called MotorCheck, that contains 555 manually annotated examples of violations of the above accessibility guidelines, across 1599 screens collected from 70 applications via a mobile app testing tool. Our experiments illustrate that MotorEase is able to identify violations with an average accuracy of ~90% and a false positive rate of less than 9%, outperforming baseline techniques.
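To make guideline (i) concrete, below is a minimal sketch of how a visual touch-target-size check could work. This is not MotorEase's actual implementation: the UIElement fields, the 48 dp threshold (taken from common Android accessibility guidance), and the screen density value are all illustrative assumptions.

```python
from dataclasses import dataclass

MIN_TARGET_DP = 48  # commonly recommended minimum touch target side, in dp (assumption)

@dataclass
class UIElement:
    element_id: str
    width_px: int    # visual bounding-box size detected on the screenshot
    height_px: int
    clickable: bool

def px_to_dp(px: int, density_dpi: int) -> float:
    """Convert raw screenshot pixels to density-independent pixels (dp)."""
    return px * 160.0 / density_dpi

def touch_target_violations(elements, density_dpi=420):
    """Flag clickable elements whose visual size falls below the
    minimum touch target on either axis."""
    violations = []
    for e in elements:
        if not e.clickable:
            continue
        w = px_to_dp(e.width_px, density_dpi)
        h = px_to_dp(e.height_px, density_dpi)
        if w < MIN_TARGET_DP or h < MIN_TARGET_DP:
            violations.append((e.element_id, round(w, 1), round(h, 1)))
    return violations

# A 90x90 px icon on a 420 dpi screen is ~34x34 dp, so it is flagged.
print(touch_target_violations([UIElement("share_button", 90, 90, clickable=True)]))
```

The design choice to measure the *visual* bounding box (rather than the declared view bounds) mirrors the paper's emphasis on what motor-impaired users actually see and must hit on screen.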
Related papers
- Improve accessibility for Low Vision and Blind people using Machine Learning and Computer Vision [0.0]
This project explores how machine learning and computer vision could be utilized to improve accessibility for people with visual impairments.
This project will concentrate on building a mobile application that helps blind people orient themselves in space by providing audio and haptic feedback.
arXiv Detail & Related papers (2024-03-24T21:19:17Z)
- Towards Automated Accessibility Report Generation for Mobile Apps [14.908672785900832]
We propose a system to generate whole app accessibility reports.
It combines varied data collection methods (e.g., app crawling, manual recording) with an existing accessibility scanner.
arXiv Detail & Related papers (2023-09-29T19:05:11Z)
- Agile gesture recognition for capacitive sensing devices: adapting on-the-job [55.40855017016652]
We demonstrate a hand gesture recognition system that uses signals from capacitive sensors embedded into the etee hand controller.
The controller generates real-time signals from each of the wearer's five fingers.
We use a machine learning technique to analyse the time-series signals and identify three features that can represent the five fingers within 500 ms (a sketch of this windowed feature extraction appears after this list).
arXiv Detail & Related papers (2023-05-12T17:24:02Z)
- On the Forces of Driver Distraction: Explainable Predictions for the Visual Demand of In-Vehicle Touchscreen Interactions [5.375634674639956]
In-vehicle touchscreen Human-Machine Interfaces (HMIs) must distract the driver as little as possible.
This paper presents a machine learning method that predicts the visual demand of in-vehicle touchscreen interactions.
arXiv Detail & Related papers (2023-01-05T13:50:26Z)
- In-Vehicle Interface Adaptation to Environment-Induced Cognitive Workload [55.41644538483948]
In-vehicle human-machine interfaces (HMIs) have evolved throughout the years, providing more and more functions.
To keep this growing functionality from overloading the driver, we propose using adaptive HMIs that change according to the driver's mental workload.
arXiv Detail & Related papers (2022-10-20T13:42:25Z)
- Learning energy-efficient driving behaviors by imitating experts [75.12960180185105]
This paper examines the role of imitation learning in bridging the gap between control strategies and realistic limitations in communication and sensing.
We show that imitation learning can succeed in deriving policies that, if adopted by 5% of vehicles, may boost the energy-efficiency of networks with varying traffic conditions by 15% using only local observations.
arXiv Detail & Related papers (2022-06-28T17:08:31Z)
- First Contact: Unsupervised Human-Machine Co-Adaptation via Mutual Information Maximization [112.40598205054994]
We formalize this idea as a completely unsupervised objective for optimizing interfaces.
We conduct an observational study on 540K examples of users operating various keyboard and eye gaze interfaces for typing, controlling simulated robots, and playing video games.
The results show that our mutual information scores are predictive of the ground-truth task completion metrics in a variety of domains.
arXiv Detail & Related papers (2022-05-24T21:57:18Z)
- Identification of Driver Phone Usage Violations via State-of-the-Art Object Detection with Tracking [8.147652597876862]
We propose a custom-trained state-of-the-art object detector to work with roadside cameras to capture driver phone usage without the need for human intervention.
The proposed approach also addresses the issues caused by windscreen glare and introduces the steps required to remedy this.
arXiv Detail & Related papers (2021-09-05T16:37:03Z)
- Exploiting Playbacks in Unsupervised Domain Adaptation for 3D Object Detection [55.12894776039135]
State-of-the-art 3D object detectors, based on deep learning, have shown promising accuracy but are prone to over-fit to domain idiosyncrasies.
We propose a novel learning approach that drastically reduces this gap by fine-tuning the detector on pseudo-labels in the target domain.
We show, on five autonomous driving datasets, that fine-tuning the detector on these pseudo-labels substantially reduces the domain gap to new driving environments.
arXiv Detail & Related papers (2021-03-26T01:18:11Z)
- Gaze-contingent decoding of human navigation intention on an autonomous wheelchair platform [6.646253877148766]
We have pioneered the Where-You-Look-Is-Where-You-Go approach to controlling mobility platforms.
We present a new solution whose first component is deep computer vision that understands what object a user is looking at in their field of view.
Our decoding system ultimately determines whether the user wants to drive to, e.g., a door or is merely looking at it.
arXiv Detail & Related papers (2021-03-04T14:52:06Z)
- Assisted Perception: Optimizing Observations to Communicate State [112.40598205054994]
We aim to help users estimate the state of the world in tasks like robotic teleoperation and navigation with visual impairments.
We synthesize new observations that lead to more accurate internal state estimates when processed by the user.
arXiv Detail & Related papers (2020-08-06T19:08:05Z)
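As noted in the gesture-recognition entry above, here is a minimal, hypothetical sketch of extracting a small set of per-finger features from a 500 ms window of five-channel capacitive signals. The three features shown (mean level, peak-to-peak range, mean absolute slope) and the 100 Hz sample rate are illustrative assumptions, not the features or rate used in that paper.

```python
import numpy as np

def window_features(signal: np.ndarray, sample_rate_hz: int = 100) -> np.ndarray:
    """Summarize the most recent 500 ms of a five-channel capacitive
    signal (shape: n_samples x 5, one column per finger) with three
    illustrative features per finger: mean level, peak-to-peak range,
    and mean absolute slope. Returns an array of shape (3, 5)."""
    n = max(1, int(0.5 * sample_rate_hz))  # samples in a 500 ms window
    w = signal[-n:]                        # most recent window
    mean_level = w.mean(axis=0)
    peak_to_peak = w.max(axis=0) - w.min(axis=0)
    mean_abs_slope = np.abs(np.diff(w, axis=0)).mean(axis=0)
    return np.stack([mean_level, peak_to_peak, mean_abs_slope])

# Example: 1 second of synthetic five-finger readings sampled at 100 Hz.
rng = np.random.default_rng(0)
readings = rng.normal(size=(100, 5)).cumsum(axis=0)
print(window_features(readings).shape)  # (3, 5)
```

In a real system, such a compact feature matrix would feed a lightweight classifier, keeping recognition within the 500 ms budget the summary mentions.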