Disability data futures: Achievable imaginaries for AI and disability data justice
- URL: http://arxiv.org/abs/2411.03885v1
- Date: Wed, 06 Nov 2024 13:04:29 GMT
- Title: Disability data futures: Achievable imaginaries for AI and disability data justice
- Authors: Denis Newman-Griffis, Bonnielin Swenor, Rupa Valdez, Gillian Mason
- Abstract summary: Data are the medium through which individuals' identities are filtered in contemporary states and systems.
The history of data and AI is often one of disability exclusion, oppression, and the reduction of disabled experience.
This chapter brings together four academics and disability advocates to describe achievable imaginaries for artificial intelligence and disability data justice.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data are the medium through which individuals' identities and experiences are filtered in contemporary states and systems, and AI is increasingly the layer mediating between people, data, and decisions. The history of data and AI is often one of disability exclusion, oppression, and the reduction of disabled experience; left unchallenged, the current proliferation of AI and data systems thus risks further automating ableism behind the veneer of algorithmic neutrality. However, exclusionary histories do not preclude inclusive futures, and disability-led visions can chart new paths for collective action to achieve futures founded in disability justice. This chapter brings together four academics and disability advocates working at the nexus of disability, data, and AI, to describe achievable imaginaries for artificial intelligence and disability data justice. Reflecting diverse contexts, disciplinary perspectives, and personal experiences, we draw out the shape, actors, and goals of imagined future systems where data and AI support movement towards disability justice.
Related papers
- Accessibility Considerations in the Development of an AI Action Plan [10.467658828071057]
We argue that there is a need for accessibility to be represented in several important domains.
Data security and privacy risks, including data collected by AI-based accessibility technologies.
Disability-specific AI risks and biases, including both direct bias (during AI use by the disabled person) and indirect bias (when AI is used by someone else on data relating to a disabled person).
arXiv Detail & Related papers (2025-03-14T21:57:23Z) - A Democratic Platform for Engaging with Disabled Community in Generative AI Development [0.9065034043031664]
The growing impact and popularity of generative AI tools have prompted us to examine their relevance within the disabled community.
This workshop paper proposes a platform to involve the disabled community while building generative AI systems.
arXiv Detail & Related papers (2023-09-26T13:30:57Z) - Towards FATE in AI for Social Media and Healthcare: A Systematic Review [0.0]
This survey focuses on the concepts of fairness, accountability, transparency, and ethics (FATE) within the context of AI.
We found that statistical and intersectional fairness can support fairness in healthcare on social media platforms.
While solutions like simulation, data analytics, and automated systems are widely used, their effectiveness can vary.
arXiv Detail & Related papers (2023-06-05T17:25:42Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can lead to a deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z) - Data Representativeness in Accessibility Datasets: A Meta-Analysis [7.6597163467929805]
We review datasets sourced by people with disabilities and older adults.
We find that accessibility datasets represent diverse ages, but have gender and race representation gaps.
We hope our effort expands the space of possibility for greater inclusion of marginalized communities in AI-infused systems.
arXiv Detail & Related papers (2022-07-16T23:32:19Z) - Adaptive cognitive fit: Artificial intelligence augmented management of information facets and representations [62.997667081978825]
Explosive growth in big data technologies and artificial intelligence (AI) applications has led to the increasing pervasiveness of information facets.
Information facets, such as equivocality and veracity, can dominate and significantly influence human perceptions of information.
We suggest that artificially intelligent technologies that can adapt information representations to overcome cognitive limitations are necessary.
arXiv Detail & Related papers (2022-04-25T02:47:25Z) - How Could Equality and Data Protection Law Shape AI Fairness for People with Disabilities? [14.694420183754332]
This article examines the concept of 'AI fairness' for people with disabilities from the perspective of data protection and equality law.
We argue that a distinctive approach to AI fairness is needed, due to the different ways in which discrimination and data protection law apply in respect of disability.
arXiv Detail & Related papers (2021-07-12T19:41:01Z) - Representative & Fair Synthetic Data [68.8204255655161]
We present a framework to incorporate fairness constraints into the self-supervised learning process.
We generate a representative as well as fair version of the UCI Adult census data set.
We consider representative & fair synthetic data a promising future building block to teach algorithms not on historic worlds, but rather on the worlds that we strive to live in.
arXiv Detail & Related papers (2021-04-07T09:19:46Z) - Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, limited ability to explain decisions, and bias in training data are some of the most prominent limitations of AI systems.
We propose the tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z) - Data, Power and Bias in Artificial Intelligence [5.124256074746721]
Artificial Intelligence has the potential to exacerbate societal bias and set back decades of advances in equal rights and civil liberty.
Data used to train machine learning algorithms may capture social injustices, inequality or discriminatory attitudes that may be learned and perpetuated in society.
This paper reviews ongoing work to ensure data justice, fairness and bias mitigation in AI systems from different domains.
arXiv Detail & Related papers (2020-07-28T16:17:40Z) - Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.