A Survey of Accessible Explainable Artificial Intelligence Research
- URL: http://arxiv.org/abs/2407.17484v1
- Date: Tue, 2 Jul 2024 21:09:46 GMT
- Title: A Survey of Accessible Explainable Artificial Intelligence Research
- Authors: Chukwunonso Henry Nwokoye, Maria J. P. Peixoto, Akriti Pandey, Lauren Pardy, Mahadeo Sukhai, Peter R. Lewis
- Abstract summary: This paper presents a systematic literature review of the research on the accessibility of Explainable Artificial Intelligence (XAI).
Our methodology involved searching several academic databases using terms chosen to capture intersections between XAI and accessibility.
We stress the importance of including the disability community in XAI development to promote digital inclusion and accessibility.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The increasing integration of Artificial Intelligence (AI) into everyday life makes it essential to explain AI-based decision-making in a way that is understandable to all users, including those with disabilities. Accessible explanations are crucial as accessibility in technology promotes digital inclusion and allows everyone, regardless of their physical, sensory, or cognitive abilities, to use these technologies effectively. This paper presents a systematic literature review of the research on the accessibility of Explainable Artificial Intelligence (XAI), specifically considering persons with sight loss. Our methodology includes searching several academic databases with search terms to capture intersections between XAI and accessibility. The results of this survey highlight the lack of research on Accessible XAI (AXAI) and stress the importance of including the disability community in XAI development to promote digital inclusion and accessibility and remove barriers. Most XAI techniques rely on visual explanations, such as heatmaps or graphs, which are not accessible to persons who are blind or have low vision. Therefore, it is necessary to develop explanation methods through non-visual modalities, such as auditory and tactile feedback, visual modalities accessible to persons with low vision, and personalized solutions that meet the needs of individuals, including those with multiple disabilities. We further emphasize the importance of integrating universal design principles into AI development practices to ensure that AI technologies are usable by everyone.
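As a concrete illustration of the non-visual modalities the abstract calls for, here is a minimal sketch (ours, not the paper's; all feature names and scores are hypothetical) that renders a feature-importance explanation as ranked, screen-reader-friendly text rather than a heatmap:

```python
# Illustrative sketch (not from the paper): render a feature-importance
# explanation as screen-reader-friendly text instead of a visual heatmap.
# All feature names and scores below are hypothetical placeholders.

def describe_explanation(importances, prediction, top_k=3):
    """Turn a {feature: importance} mapping into a ranked textual explanation.

    Positive scores push toward the prediction, negative scores against it.
    """
    ranked = sorted(importances.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = [f"The model predicted: {prediction}."]
    for rank, (feature, score) in enumerate(ranked[:top_k], start=1):
        direction = "supported" if score >= 0 else "weighed against"
        lines.append(
            f"{rank}. {feature} {direction} this prediction "
            f"(relative strength {abs(score):.0%})."
        )
    return " ".join(lines)

if __name__ == "__main__":
    # Hypothetical loan-approval explanation.
    importances = {"income": 0.45, "credit history length": 0.30,
                   "recent missed payment": -0.15}
    print(describe_explanation(importances, "loan approved"))
```

Because the explanation payload here is plain data, the same structure could drive a text-to-speech engine or a refreshable braille display, with the rendering modality chosen per user.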
Related papers
- Artificial intelligence techniques in inherited retinal diseases: A review
Inherited retinal diseases (IRDs) are a diverse group of genetic disorders that lead to progressive vision loss and are a major cause of blindness in working-age adults.
Recent advancements in artificial intelligence (AI) offer promising solutions to these challenges.
This review consolidates existing studies, identifies gaps, and provides an overview of AI's potential in diagnosing and managing IRDs.
arXiv Detail & Related papers (2024-10-10T03:14:51Z)
- Applications of Explainable artificial intelligence in Earth system science
This review aims to provide a foundational understanding of explainable AI (XAI).
XAI offers a set of powerful tools that make the models more transparent.
We identify four significant challenges that XAI faces within Earth system science (ESS).
A visionary outlook for ESS envisions a harmonious blend where process-based models govern the known, AI models explore the unknown, and XAI bridges the gap by providing explanations.
arXiv Detail & Related papers (2024-06-12T15:05:29Z)
- How Human-Centered Explainable AI Interfaces Are Designed and Evaluated: A Systematic Survey
The emerging area of Explainable Interfaces (EIs) focuses on the user interface and user experience design aspects of XAI.
This paper presents a systematic survey of 53 publications to identify current trends in human-XAI interaction and promising directions for EI design and development.
arXiv Detail & Related papers (2024-03-21T15:44:56Z)
- Machine Unlearning: A Survey
A special need has arisen where, due to privacy, usability, and/or the right to be forgotten, information about specific samples must be removed from a trained model; this process is called machine unlearning.
This emerging technology has drawn significant interest from both academics and industry due to its innovation and practicality.
No study has analyzed this complex topic or compared the feasibility of existing unlearning solutions in different kinds of scenarios.
The survey concludes by highlighting some of the outstanding issues with unlearning techniques, along with some feasible directions for new research opportunities.
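To make that problem statement concrete, below is a minimal sketch of the naive exact-unlearning baseline: retraining from scratch without the forgotten samples, the reference point that efficient unlearning methods try to beat. This is our illustration assuming a scikit-learn-style workflow, not a method from the survey.

```python
# Naive exact-unlearning baseline (illustrative, not a method from the survey):
# retrain from scratch on the dataset minus the samples to be forgotten.
import numpy as np
from sklearn.linear_model import LogisticRegression

def unlearn_by_retraining(X, y, forget_idx):
    """Return a model trained as if the forgotten samples never existed."""
    keep = np.setdiff1d(np.arange(len(X)), forget_idx)
    model = LogisticRegression(max_iter=1000)
    model.fit(X[keep], y[keep])
    return model

# Synthetic demonstration data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = unlearn_by_retraining(X, y, forget_idx=np.array([3, 17, 42]))
print(model.score(X, y))
```

Efficient unlearning techniques aim to approximate this retrained model at a fraction of the compute cost.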
arXiv Detail & Related papers (2023-06-06T10:18:36Z)
- Invisible Users: Uncovering End-Users' Requirements for Explainable AI via Explanation Forms and Goals
Non-technical end-users are silent and invisible users of the state-of-the-art explainable artificial intelligence (XAI) technologies.
Their demands and requirements for AI explainability are not incorporated into the design and evaluation of XAI techniques.
This makes XAI techniques ineffective or even harmful in high-stakes applications, such as healthcare, criminal justice, finance, and autonomous driving systems.
arXiv Detail & Related papers (2023-02-10T19:35:57Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- What Do End-Users Really Want? Investigation of Human-Centered XAI for Mobile Health Apps
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z)
- Alternative models: Critical examination of disability definitions in the development of artificial intelligence technologies
This article presents a framework for critically examining AI data analytics technologies through a disability lens.
We consider three conceptual models of disability: the medical model, the social model, and the relational model.
We show how AI technologies designed under each of these models differ so significantly as to be incompatible with and contradictory to one another.
arXiv Detail & Related papers (2022-06-16T16:41:23Z)
- Human-Centered Explainable AI (XAI): From Algorithms to User Experiences
Explainable AI (XAI) has produced a vast collection of algorithms in recent years.
The field is starting to embrace inter-disciplinary perspectives and human-centered approaches.
arXiv Detail & Related papers (2021-10-20T21:33:46Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Counterfactual Explanations as Interventions in Latent Space
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
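To ground the idea, the sketch below implements a generic greedy counterfactual search (our illustration, not the CEILS method): perturb one feature at a time until the classifier's decision flips, then report the changes.

```python
# Generic counterfactual search (illustrative; NOT the CEILS method):
# greedily nudge one feature at a time until the classifier's decision flips.
import numpy as np
from sklearn.linear_model import LogisticRegression

def find_counterfactual(model, x, target, step=0.25, max_iter=100):
    """Return a perturbed copy of x that the model labels `target`."""
    cf = x.copy()
    for _ in range(max_iter):
        if model.predict(cf.reshape(1, -1))[0] == target:
            return cf
        # Try every single-feature nudge; keep the one that most raises
        # the predicted probability of the target class.
        candidates = []
        for j in range(len(cf)):
            for delta in (step, -step):
                trial = cf.copy()
                trial[j] += delta
                p = model.predict_proba(trial.reshape(1, -1))[0, target]
                candidates.append((p, trial))
        cf = max(candidates, key=lambda c: c[0])[1]
    return None  # no counterfactual found within the search budget

# Synthetic demonstration data.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = (X @ np.array([1.0, -0.5, 0.2]) > 0).astype(int)
model = LogisticRegression().fit(X, y)
x = X[0]
cf = find_counterfactual(model, x, target=1 - model.predict(x.reshape(1, -1))[0])
print("original:", x, "counterfactual:", cf)
```

Reporting `cf - x` gives the set of feature changes; per the summary above, CEILS goes further by constraining such a search so the proposed changes correspond to feasible actions.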
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.