Reporting Risks in AI-based Assistive Technology Research: A Systematic Review
- URL: http://arxiv.org/abs/2407.12035v2
- Date: Thu, 18 Jul 2024 19:28:33 GMT
- Title: Reporting Risks in AI-based Assistive Technology Research: A Systematic Review
- Authors: Zahra Ahmadi, Peter R. Lewis, Mahadeo A. Sukhai
- Abstract summary: We conducted a systematic literature review of research into AI-based assistive technology for persons with visual impairments.
Our study shows that most proposed technologies with a testable prototype have not been evaluated in a human study with members of the sight-loss community.
- Score: 2.928964540437144
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial Intelligence (AI) is increasingly employed to enhance assistive technologies, yet it can fail in various ways. We conducted a systematic literature review of research into AI-based assistive technology for persons with visual impairments. Our study shows that most proposed technologies with a testable prototype have not been evaluated in a human study with members of the sight-loss community. Furthermore, many studies did not consider or report failure cases or possible risks. These findings highlight the importance of inclusive system evaluations and the necessity of standardizing methods for presenting and analyzing failure cases and threats when developing AI-based assistive technologies.
Related papers
- Evaluating AI Evaluation: Perils and Prospects [8.086002368038658]
This paper contends that the prevalent evaluation methods for AI systems are fundamentally inadequate.
I argue that a reformation is required in the way we evaluate AI systems and that we should look towards cognitive sciences for inspiration.
arXiv Detail & Related papers (2024-07-12T12:37:13Z) - A Survey on Failure Analysis and Fault Injection in AI Systems [28.30817443151044]
The complexity of AI systems has exposed their vulnerabilities, necessitating robust methods for failure analysis (FA) and fault injection (FI) to ensure resilience and reliability.
This study addresses that need by presenting a detailed survey of existing FA and FI approaches across six layers of AI systems.
Our findings reveal a taxonomy of AI system failures, assess the capabilities of existing FI tools, and highlight discrepancies between real-world and simulated failures.
arXiv Detail & Related papers (2024-06-28T00:32:03Z) - Application of Artificial Intelligence in Schizophrenia Rehabilitation Management: Systematic Literature Review [4.619934969700147]
This review aims to assess the current status and prospects of artificial intelligence (AI) in the rehabilitation management of patients with schizophrenia.
We selected 70 studies from 2012 to the present, focusing on application, technology categories, products, and data types in mental health interventions and management.
The results indicate that AI can be widely used in symptom monitoring, relapse risk prediction, and rehabilitation treatment by analyzing ecological momentary assessment, behavioral, and speech data.
arXiv Detail & Related papers (2024-05-17T16:20:34Z) - Explainable Artificial Intelligence Techniques for Accurate Fault Detection and Diagnosis: A Review [0.0]
We review eXplainable AI (XAI) tools and techniques in this context.
We focus on their role in making AI decision-making transparent, particularly in critical scenarios where humans are involved.
We discuss current limitations and potential future research that aims to balance explainability with model performance.
arXiv Detail & Related papers (2024-04-17T17:49:38Z) - Asset-centric Threat Modeling for AI-based Systems [7.696807063718328]
This paper presents ThreatFinderAI, an approach and tool for modeling AI-related assets, threats, and countermeasures, and for quantifying residual risks.
To evaluate the practicality of the approach, participants were asked to recreate a threat model of an AI-based healthcare platform originally developed by cybersecurity experts.
Overall, the tool's usability was rated positively, and it effectively supported threat identification and risk discussion.
arXiv Detail & Related papers (2024-03-11T08:40:01Z) - Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness [53.91018508439669]
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs).
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv Detail & Related papers (2024-02-21T08:29:42Z) - Control Risk for Potential Misuse of Artificial Intelligence in Science [85.91232985405554]
We aim to raise awareness of the dangers of AI misuse in science.
We highlight real-world examples of misuse in chemical science.
We propose a system called SciGuard to control misuse risks for AI models in science.
arXiv Detail & Related papers (2023-12-11T18:50:57Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z) - The Future of Fundamental Science Led by Generative Closed-Loop Artificial Intelligence [67.70415658080121]
Recent advances in machine learning and AI are disrupting technological innovation, product development, and society as a whole.
AI has contributed less to fundamental science, in part because large, high-quality data sets for scientific practice and model discovery are harder to access.
Here we explore and investigate aspects of an AI-driven, automated, closed-loop approach to scientific discovery.
arXiv Detail & Related papers (2023-07-09T21:16:56Z) - The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z) - An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)