Automated Code Fix Suggestions for Accessibility Issues in Mobile Apps
- URL: http://arxiv.org/abs/2408.03827v1
- Date: Wed, 7 Aug 2024 15:06:07 GMT
- Title: Automated Code Fix Suggestions for Accessibility Issues in Mobile Apps
- Authors: Forough Mehralian, Titus Barik, Jeff Nichols, Amanda Swearngin
- Abstract summary: FixAlly is an automated tool designed to suggest source code fixes for accessibility issues detected by automated accessibility scanners.
Our empirical study demonstrates FixAlly's capability in suggesting fixes that resolve issues found by accessibility scanners.
- Score: 6.015259590468495
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accessibility is crucial for inclusive app usability, yet developers often struggle to identify and fix app accessibility issues due to a lack of awareness, expertise, and inadequate tools. Current accessibility testing tools can identify accessibility issues but may not always provide guidance on how to address them. We introduce FixAlly, an automated tool designed to suggest source code fixes for accessibility issues detected by automated accessibility scanners. FixAlly employs a multi-agent LLM architecture to generate fix strategies, localize issues within the source code, and propose code modification suggestions to fix the accessibility issue. Our empirical study demonstrates FixAlly's capability in suggesting fixes that resolve issues found by accessibility scanners -- with an effectiveness of 77% in generating plausible fix suggestions -- and our survey of 12 iOS developers finds they would be willing to accept 69.4% of evaluated fix suggestions.
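To make the three-stage flow named in the abstract concrete, here is a minimal Swift sketch of a strategy-generation, issue-localization, and code-modification pipeline. It is illustrative only: the type names and the `askModel` placeholder are assumptions, not FixAlly's actual API.

```swift
import Foundation

// Hypothetical scanner finding; FixAlly consumes output from automated
// accessibility scanners of roughly this shape.
struct AccessibilityIssue {
    let element: String      // e.g., an icon-only UIButton
    let description: String  // e.g., "missing accessibility label"
}

// Placeholder for an LLM call; a real system would query a hosted model.
func askModel(_ prompt: String) -> String {
    "stubbed model response for: \(prompt.prefix(40))..."
}

// The three stages the abstract names, chained as separate "agents":
// strategy generation, issue localization, code modification.
func suggestFix(for issue: AccessibilityIssue, in source: String) -> String {
    let strategy = askModel("Propose a fix strategy for: \(issue.description)")
    let location = askModel("Locate code for '\(issue.element)' in:\n\(source)")
    return askModel("Modify code at \(location) applying: \(strategy)")
}

let issue = AccessibilityIssue(element: "shareButton",
                               description: "image button has no accessibility label")
print(suggestFix(for: issue, in: "let shareButton = UIButton()"))
```

In FixAlly the stages are handled by cooperating LLM agents; the simple chain of calls above only mirrors the data flow between them.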
Related papers
- Understanding Code Understandability Improvements in Code Reviews [79.16476505761582]
We analyzed 2,401 code review comments from Java open-source projects on GitHub.
83.9% of suggestions for improvement were accepted and integrated, with fewer than 1% later reverted.
arXiv Detail & Related papers (2024-10-29T12:21:23Z)
- Exploring Automatic Cryptographic API Misuse Detection in the Era of LLMs [60.32717556756674]
This paper introduces a systematic evaluation framework to assess Large Language Models in detecting cryptographic misuses.
Our in-depth analysis of 11,940 LLM-generated reports highlights that the inherent instabilities in LLMs can lead to over half of the reports being false positives.
The optimized approach achieves a remarkable detection rate of nearly 90%, surpassing traditional methods and uncovering previously unknown misuses in established benchmarks.
arXiv Detail & Related papers (2024-07-23T15:31:26Z)
- Agent-Driven Automatic Software Improvement [55.2480439325792]
This research proposal aims to explore innovative solutions by focusing on the deployment of agents powered by Large Language Models (LLMs).
The iterative nature of agents, which allows for continuous learning and adaptation, can help surpass common challenges in code generation.
We aim to use the iterative feedback in these systems to further fine-tune the LLMs underlying the agents, becoming better aligned to the task of automated software improvement.
arXiv Detail & Related papers (2024-06-24T15:45:22Z)
- Learning Task Decomposition to Assist Humans in Competitive Programming [90.4846613669734]
We introduce a novel objective for learning task decomposition, termed assistive value (AssistV).
We collect a dataset of human repair experiences on different decomposed solutions.
In under 177 hours of human study, our method enables non-experts to solve 33.3% more problems, speeds them up by 3.3x, and empowers them to match unassisted experts.
arXiv Detail & Related papers (2024-06-07T03:27:51Z)
- Fixing Smart Contract Vulnerabilities: A Comparative Analysis of Literature and Developer's Practices [6.09162202256218]
We refer to the ways of fixing vulnerabilities found in the literature as guidelines.
It is not clear to what extent developers adhere to these guidelines, nor whether there are other viable common solutions and what they are.
The goal of our research is to fill knowledge gaps related to developers' observance of existing guidelines and to propose new and viable solutions to security vulnerabilities.
arXiv Detail & Related papers (2024-03-12T09:55:54Z)
- Towards Automated Accessibility Report Generation for Mobile Apps [14.908672785900832]
We propose a system to generate whole-app accessibility reports.
It combines varied data collection methods (e.g., app crawling, manual recording) with an existing accessibility scanner.
arXiv Detail & Related papers (2023-09-29T19:05:11Z)
- Automated and Context-Aware Repair of Color-Related Accessibility Issues for Android Apps [28.880881834251227]
We propose Iris, an automated and context-aware repair method to fix color-related accessibility issues for apps.
By leveraging a novel context-aware technique, Iris selects optimal repair colors and localizes the attributes to be repaired.
Our experiments show that Iris achieves a 91.38% repair success rate with high effectiveness and efficiency.
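As a point of reference for what any color-contrast repair must satisfy, here is a minimal Swift sketch of the WCAG 2.x contrast-ratio computation; the function names are hypothetical and this is not Iris's actual implementation.

```swift
import Foundation

// WCAG 2.x relative luminance: linearize each sRGB channel, then weight.
func relativeLuminance(r: Double, g: Double, b: Double) -> Double {
    func linearize(_ c: Double) -> Double {
        c <= 0.03928 ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4)
    }
    return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b)
}

// Contrast ratio between two luminances, ranging from 1:1 up to 21:1.
func contrastRatio(_ l1: Double, _ l2: Double) -> Double {
    (max(l1, l2) + 0.05) / (min(l1, l2) + 0.05)
}

// Example: light gray (#AAAAAA) text on a white background fails the
// 4.5:1 WCAG AA threshold, so a repair tool must pick a darker color.
let text = relativeLuminance(r: 170/255, g: 170/255, b: 170/255)
let background = relativeLuminance(r: 1, g: 1, b: 1)
print(contrastRatio(text, background) >= 4.5 ? "passes AA" : "needs repair")
```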
arXiv Detail & Related papers (2023-08-17T15:03:11Z)
- SUPERNOVA: Automating Test Selection and Defect Prevention in AAA Video Games Using Risk Based Testing and Machine Learning [62.997667081978825]
Testing video games is an increasingly difficult task as traditional methods fail to scale with growing software systems.
We present SUPERNOVA, a system responsible for test selection and defect prevention while also functioning as an automation hub.
The direct impact has been a reduction of 55% or more in testing hours for an undisclosed sports game title.
arXiv Detail & Related papers (2022-03-10T00:47:46Z)
- Enabling Automatic Repair of Source Code Vulnerabilities Using Data-Driven Methods [0.4568777157687961]
Data-driven models of automatic program repair use pairs of buggy and fixed code to learn transformations that fix errors in code.
We propose ways to improve code representations for vulnerability repair from three perspectives.
The expected results of this work are improved code representations for automatic program repair and, specifically, for fixing security vulnerabilities.
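As a rough illustration of the buggy/fixed pairs such data-driven models train on, here is a minimal Swift sketch; the `RepairExample` type and the snippet contents are hypothetical.

```swift
// Hypothetical training pair; data-driven repair models learn a mapping
// from the buggy form to the fixed form over many such examples.
struct RepairExample {
    let buggy: String   // vulnerable code as written
    let fixed: String   // the developer's corrected version
}

let pair = RepairExample(
    buggy: "let q = \"SELECT * FROM users WHERE id = \" + userInput",
    fixed: "let q = \"SELECT * FROM users WHERE id = ?\"  // bind userInput"
)
print("model input:  \(pair.buggy)")
print("model target: \(pair.fixed)")
```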
arXiv Detail & Related papers (2022-02-07T10:47:37Z)
- Adversarial Patch Generation for Automated Program Repair [0.0]
NEVERMORE is a novel learning-based mechanism inspired by the adversarial nature of bugs and fixes.
NEVERMORE is built upon the Generative Adversarial Networks (GAN) architecture and trained on historical bug fixes to generate repairs that closely mimic human-produced fixes.
Our empirical evaluation on 500 real-world bugs demonstrates the effectiveness of NEVERMORE in bug-fixing, generating repairs that match human fixes for 21.2% of the examined bugs.
arXiv Detail & Related papers (2020-12-21T00:34:29Z)
- On the Social and Technical Challenges of Web Search Autosuggestion Moderation [118.47867428272878]
Autosuggestions are typically generated by machine learning (ML) systems trained on a corpus of search logs and document representations.
While current search engines have become increasingly proficient at suppressing such problematic suggestions, there are still persistent issues that remain.
We discuss several dimensions of problematic suggestions, difficult issues along the pipeline, and why our discussion applies to the increasing number of applications beyond web search.
arXiv Detail & Related papers (2020-07-09T19:22:00Z)