Alternative models: Critical examination of disability definitions in
the development of artificial intelligence technologies
- URL: http://arxiv.org/abs/2206.08287v1
- Date: Thu, 16 Jun 2022 16:41:23 GMT
- Title: Alternative models: Critical examination of disability definitions in
the development of artificial intelligence technologies
- Authors: Denis Newman-Griffis, Jessica Sage Rauchberg, Rahaf Alharbi, Louise
Hickman, Harry Hochheiser
- Abstract summary: This article presents a framework for critically examining AI data analytics technologies through a disability lens.
We consider three conceptual models of disability: the medical model, the social model, and the relational model.
We show how AI technologies designed under each of these models differ so significantly as to be incompatible with and contradictory to one another.
- Score: 6.9884176767901005
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Disabled people are subject to a wide variety of complex decision-making
processes in diverse areas such as healthcare, employment, and government
policy. These contexts, which are already often opaque to the people they
affect and lack adequate representation of disabled perspectives, are rapidly
adopting artificial intelligence (AI) technologies for data analytics to inform
decision making, creating an increased risk of harm due to inappropriate or
inequitable algorithms. This article presents a framework for critically
examining AI data analytics technologies through a disability lens and
investigates how the definition of disability chosen by the designers of an AI
technology affects its impact on disabled subjects of analysis. We consider
three conceptual models of disability: the medical model, the social model, and
the relational model; and show how AI technologies designed under each of these
models differ so significantly as to be incompatible with and contradictory to
one another. Through a discussion of common use cases for AI analytics in
healthcare and government disability benefits, we illustrate specific
considerations and decision points in the technology design process that affect
power dynamics and inclusion in these settings and help determine their
orientation towards marginalisation or support. The framework we present can
serve as a foundation for in-depth critical examination of AI technologies and
the development of a design praxis for disability-related AI analytics.
Related papers
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- What Does Evaluation of Explainable Artificial Intelligence Actually Tell Us? A Case for Compositional and Contextual Validation of XAI Building Blocks [16.795332276080888]
We propose a fine-grained validation framework for explainable artificial intelligence systems.
We recognise their inherent modular structure: technical building blocks, user-facing explanatory artefacts and social communication protocols.
arXiv Detail & Related papers (2024-03-19T13:45:34Z)
- Emotional Intelligence Through Artificial Intelligence: NLP and Deep Learning in the Analysis of Healthcare Texts [1.9374282535132377]
This manuscript presents a methodical examination of the utilization of Artificial Intelligence in the assessment of emotions in texts related to healthcare.
We scrutinize numerous research studies that employ AI to augment sentiment analysis, categorize emotions, and forecast patient outcomes.
There persist challenges, which encompass ensuring the ethical application of AI, safeguarding patient confidentiality, and addressing potential biases in algorithmic procedures.
arXiv Detail & Related papers (2024-03-14T15:58:13Z)
- A Brief Review of Explainable Artificial Intelligence in Healthcare [7.844015105790313]
XAI refers to the techniques and methods for building AI applications whose decisions humans can understand.
Model explainability and interpretability are vital for the successful deployment of AI models in healthcare practices.
arXiv Detail & Related papers (2023-04-04T05:41:57Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Who Goes First? Influences of Human-AI Workflow on Decision Making in Clinical Imaging [24.911186503082465]
This study explores the effects of providing AI assistance at the start of a diagnostic session in radiology versus after the radiologist has made a provisional decision.
We found that participants asked to register provisional responses before reviewing AI inferences are less likely to agree with the AI, regardless of whether its advice is accurate, and, when they disagree with the AI, are less likely to seek the second opinion of a colleague.
arXiv Detail & Related papers (2022-05-19T16:59:25Z)
- Adaptive cognitive fit: Artificial intelligence augmented management of information facets and representations [62.997667081978825]
Explosive growth in big data technologies and artificial intelligence (AI) applications has led to the increasing pervasiveness of information facets.
Information facets, such as equivocality and veracity, can dominate and significantly influence human perceptions of information.
We suggest that artificially intelligent technologies that can adapt information representations to overcome cognitive limitations are necessary.
arXiv Detail & Related papers (2022-04-25T02:47:25Z)
- On Heuristic Models, Assumptions, and Parameters [0.76146285961466]
We argue that the social effects of computing can depend just as much on obscure technical caveats, choices, and qualifiers as on a system's more visible design decisions.
We describe three classes of objects used to encode these choices and qualifiers: models, assumptions, and parameters.
We raise six reasons these objects may be hazardous to comprehensive analysis of computing and argue they deserve deliberate consideration as researchers explain scientific work.
arXiv Detail & Related papers (2022-01-19T04:32:11Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide to end users a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
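The core idea of a counterfactual explanation can be illustrated with a minimal sketch. For a linear classifier with decision boundary w·x + b = 0, the closest point on the other side of the boundary has a closed form: move x along w just past the boundary. This is only an illustrative toy, not the CEILS method itself, which generates counterfactuals in a latent space that respects causal relations among features; the weights and feature names below are hypothetical.

```python
def counterfactual(w, b, x, margin=1e-6):
    """Return the minimal-L2 change to x that flips the sign of w.x + b.

    Illustrative sketch of a counterfactual explanation for a linear
    classifier; CEILS instead intervenes in a learned latent space.
    """
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    norm_sq = sum(wi * wi for wi in w)
    # Project x onto the boundary along w, then step slightly past it.
    step = -(score / norm_sq) * (1 + margin)
    return [xi + step * wi for wi, xi in zip(w, x)]

# Hypothetical example: a loan applicant scored below the threshold.
w, b = [0.5, 1.0], -2.0   # assumed weights for two features
x = [1.0, 1.0]            # current profile: score = 0.5*1 + 1*1 - 2 = -0.5
x_cf = counterfactual(w, b, x)  # closest profile with a positive score
```

As the CEILS abstract notes, such nearest-boundary counterfactuals ignore whether the suggested changes are feasible actions for the user, which is the gap the latent-space intervention approach addresses.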
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- Intelligent interactive technologies for mental health and well-being [70.1586005070678]
The paper critically analyzes existing solutions and offers outlooks for their future. In particular, we give an overview of technologies for mental health, critically analyze them against the proposed criteria, and provide design outlooks for these technologies.
arXiv Detail & Related papers (2021-05-11T19:04:21Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.