Alternative models: Critical examination of disability definitions in
the development of artificial intelligence technologies
- URL: http://arxiv.org/abs/2206.08287v1
- Date: Thu, 16 Jun 2022 16:41:23 GMT
- Title: Alternative models: Critical examination of disability definitions in
the development of artificial intelligence technologies
- Authors: Denis Newman-Griffis, Jessica Sage Rauchberg, Rahaf Alharbi, Louise
Hickman, Harry Hochheiser
- Abstract summary: This article presents a framework for critically examining AI data analytics technologies through a disability lens.
We consider three conceptual models of disability: the medical model, the social model, and the relational model.
We show how AI technologies designed under each of these models differ so significantly as to be incompatible with and contradictory to one another.
- Score: 6.9884176767901005
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Disabled people are subject to a wide variety of complex decision-making
processes in diverse areas such as healthcare, employment, and government
policy. These contexts, which are already often opaque to the people they
affect and lack adequate representation of disabled perspectives, are rapidly
adopting artificial intelligence (AI) technologies for data analytics to inform
decision making, creating an increased risk of harm due to inappropriate or
inequitable algorithms. This article presents a framework for critically
examining AI data analytics technologies through a disability lens and
investigates how the definition of disability chosen by the designers of an AI
technology affects its impact on disabled subjects of analysis. We consider
three conceptual models of disability: the medical model, the social model, and
the relational model; and show how AI technologies designed under each of these
models differ so significantly as to be incompatible with and contradictory to
one another. Through a discussion of common use cases for AI analytics in
healthcare and government disability benefits, we illustrate specific
considerations and decision points in the technology design process that affect
power dynamics and inclusion in these settings and help determine their
orientation towards marginalisation or support. The framework we present can
serve as a foundation for in-depth critical examination of AI technologies and
the development of a design praxis for disability-related AI analytics.
Related papers
- Revisiting Technical Bias Mitigation Strategies [0.11510009152620666]
Efforts to mitigate bias and enhance fairness in the artificial intelligence (AI) community have predominantly focused on technical solutions.
While numerous reviews have addressed bias in AI, this review uniquely focuses on the practical limitations of technical solutions in healthcare settings.
We illustrate each limitation with empirical studies focusing on healthcare and biomedical applications.
arXiv Detail & Related papers (2024-10-22T21:17:19Z)
- Artificial intelligence techniques in inherited retinal diseases: A review [19.107474958408847]
Inherited retinal diseases (IRDs) are a diverse group of genetic disorders that lead to progressive vision loss and are a major cause of blindness in working-age adults.
Recent advancements in artificial intelligence (AI) offer promising solutions to these challenges.
This review consolidates existing studies, identifies gaps, and provides an overview of AI's potential in diagnosing and managing IRDs.
arXiv Detail & Related papers (2024-10-10T03:14:51Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- A Survey of Accessible Explainable Artificial Intelligence Research [0.0]
This paper presents a systematic literature review of the research on the accessibility of Explainable Artificial Intelligence (XAI).
Our methodology includes searching several academic databases with search terms to capture intersections between XAI and accessibility.
We stress the importance of including the disability community in XAI development to promote digital inclusion and accessibility.
arXiv Detail & Related papers (2024-07-02T21:09:46Z)
- A Brief Review of Explainable Artificial Intelligence in Healthcare [7.844015105790313]
XAI refers to the techniques and methods for building AI applications whose outputs humans can understand and interpret.
Model explainability and interpretability are vital for the successful deployment of AI models in healthcare practice.
arXiv Detail & Related papers (2023-04-04T05:41:57Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Adaptive cognitive fit: Artificial intelligence augmented management of information facets and representations [62.997667081978825]
Explosive growth in big data technologies and artificial intelligence (AI) applications has led to the increasing pervasiveness of information facets.
Information facets, such as equivocality and veracity, can dominate and significantly influence human perceptions of information.
We suggest that artificially intelligent technologies that can adapt information representations to overcome cognitive limitations are necessary.
arXiv Detail & Related papers (2022-04-25T02:47:25Z)
- On Heuristic Models, Assumptions, and Parameters [0.76146285961466]
We argue that the social effects of computing can depend just as much on obscure technical caveats, choices, and qualifiers as on more visible design decisions.
We describe three classes of objects used to encode these choices and qualifiers: models, assumptions, and parameters.
We raise six reasons these objects may be hazardous to comprehensive analysis of computing and argue they deserve deliberate consideration as researchers explain scientific work.
arXiv Detail & Related papers (2022-01-19T04:32:11Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide to end users a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
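The abstract does not specify how CEILS is implemented; as a rough, hypothetical sketch of the underlying idea (searching for a counterfactual by perturbing a latent representation rather than the raw features), one might write something like the following, where a fixed linear decoder and a logistic classifier stand in for learned models:

```python
import numpy as np

# Toy stand-ins for learned models: a linear "decoder" maps a 2-D latent
# vector to 4 observed features, and a logistic classifier scores features.
decoder = np.array([[1.0, 0.0],
                    [0.0, 1.0],
                    [0.5, 0.5],
                    [0.2, -0.3]])
weights = np.array([1.0, 1.0, 1.0, 0.0])

def predict(x):
    """Classifier score in (0, 1) via a sigmoid over a linear function."""
    return 1.0 / (1.0 + np.exp(-(x @ weights)))

def counterfactual_in_latent(z0, target=0.5, lr=0.1, steps=500):
    """Gradient ascent in latent space: nudge z until the decoded point
    crosses the decision threshold, stopping as soon as it does so the
    change stays small."""
    z = z0.copy()
    for _ in range(steps):
        p = predict(decoder @ z)
        if p >= target:
            break
        # Gradient of the sigmoid score w.r.t. z (chain rule through decoder).
        grad_z = (decoder.T @ weights) * p * (1.0 - p)
        z = z + lr * grad_z
    return z

z0 = np.array([-1.0, -1.0])          # a point classified below threshold
z_cf = counterfactual_in_latent(z0)  # its latent-space counterfactual
x_cf = decoder @ z_cf                # decoded counterfactual features
```

Here the decoder is a fixed matrix for determinism; in a CEILS-style approach it would be a trained generative model capturing causal relations among features, and the feasibility of the implied feature changes would additionally constrain the search.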
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- Intelligent interactive technologies for mental health and well-being [70.1586005070678]
The paper critically analyzes existing solutions with the outlooks for their future.
In particular, we: give an overview of the technology for mental health, critically analyze the technology against the proposed criteria, and provide the design outlooks for these technologies.
arXiv Detail & Related papers (2021-05-11T19:04:21Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.