A Democratic Platform for Engaging with Disabled Community in Generative AI Development
- URL: http://arxiv.org/abs/2309.14921v1
- Date: Tue, 26 Sep 2023 13:30:57 GMT
- Title: A Democratic Platform for Engaging with Disabled Community in Generative AI Development
- Authors: Deepak Giri, Erin Brady
- Abstract summary: The growing impact and popularity of generative AI tools have prompted us to examine their relevance within the disabled community.
This workshop paper proposes a platform to involve the disabled community while building generative AI systems.
- Score: 0.9065034043031664
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Artificial Intelligence (AI) systems, especially generative AI
technologies, are becoming more relevant in our society. Tools like ChatGPT are
being used by members of the disabled community; for example, Autistic people
may use them to help
compose emails. The growing impact and popularity of generative AI tools have
prompted us to examine their relevance within the disabled community. The
design and development phases often neglect this marginalized group, leading to
inaccurate predictions and unfair discrimination directed towards them. This
could result from bias in data sets, algorithms, and systems at various phases
of creation and implementation. This workshop paper proposes a platform to
involve the disabled community while building generative AI systems. With this
platform, our aim is to gain insight into the factors that contribute to bias
in the outputs generated by generative AI when used by the disabled community.
Furthermore, we expect to identify which algorithmic factors contribute most
to incorrect or irrelevant outputs. The proposed
platform calls on both disabled and non-disabled people from various
geographical and cultural backgrounds to collaborate asynchronously and
remotely in a democratic approach to decision-making.
Related papers
- SYMBIOSIS: Systems Thinking and Machine Intelligence for Better Outcomes in Society [0.0]
SYMBIOSIS is an AI-powered framework and platform designed to make Systems Thinking accessible for addressing societal challenges.
To address this, we developed a generative co-pilot that translates complex systems representations into natural language.
SYMBIOSIS aims to serve as a foundational step to unlock future research into responsible and society-centered AI.
arXiv Detail & Related papers (2025-03-07T17:07:26Z)
- AI Automatons: AI Systems Intended to Imitate Humans [54.19152688545896]
There is a growing proliferation of AI systems designed to mimic people's behavior, work, abilities, likenesses, or humanness.
The research, design, deployment, and availability of such AI systems have prompted growing concerns about a wide range of possible legal, ethical, and other social impacts.
arXiv Detail & Related papers (2025-03-04T03:55:38Z)
- AI in Support of Diversity and Inclusion [5.415339913320849]
We look at the challenges and progress in making large language models (LLMs) more transparent, inclusive, and aware of social biases.
We highlight AI's role in identifying biased content in media, which is important for improving representation.
We stress that AI systems need diverse and inclusive training data.
arXiv Detail & Related papers (2025-01-16T13:36:24Z)
- Disability data futures: Achievable imaginaries for AI and disability data justice [2.0549239024359762]
Data are the medium through which individuals' identities are filtered in contemporary states and systems.
The history of data and AI is often one of disability exclusion, oppression, and the reduction of disabled experience.
This chapter brings together four academics and disability advocates to describe achievable imaginaries for artificial intelligence and disability data justice.
arXiv Detail & Related papers (2024-11-06T13:04:29Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- Generative AI trial for nonviolent communication mediation [0.0]
ChatGPT was used in place of the traditional certified trainer to test the possibility of mediating input sentences.
Results indicate that there is potential for the application of generative AI, although not yet at a practical level.
It is hoped that the widespread use of NVC mediation using generative AI will lead to the early realization of a mixbiotic society.
arXiv Detail & Related papers (2023-08-07T06:19:29Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- FATE in AI: Towards Algorithmic Inclusivity and Accessibility [0.0]
To prevent algorithmic disparities, fairness, accountability, transparency, and ethics (FATE) principles are being implemented in AI.
This study examines FATE-related desiderata, particularly transparency and ethics, in areas of the global South that are underserved by AI.
To promote inclusivity, a community-led strategy is proposed to collect and curate representative data for responsible AI design.
arXiv Detail & Related papers (2023-01-03T15:08:10Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Empowering Local Communities Using Artificial Intelligence [70.17085406202368]
Exploring the impact of AI on society from a people-centered perspective has become an important topic.
Previous works in citizen science have identified methods of using AI to engage the public in research.
This article discusses the challenges of applying AI in Community Citizen Science.
arXiv Detail & Related papers (2021-10-05T12:51:11Z)
- Unpacking the Interdependent Systems of Discrimination: Ableist Bias in NLP Systems through an Intersectional Lens [20.35460711907179]
We report on various analyses based on word predictions of a large-scale BERT language model.
Statistically significant results demonstrate that people with disabilities can be disadvantaged.
Findings also explore overlapping forms of discrimination related to interconnected gender and race identities.
arXiv Detail & Related papers (2021-10-01T16:40:58Z)
- Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z)
- Bias in Data-driven AI Systems -- An Introductory Survey [37.34717604783343]
This survey focuses on data-driven AI, as a large part of AI is powered nowadays by (big) data and powerful Machine Learning (ML) algorithms.
Unless otherwise specified, we use the general term bias to describe problems related to the gathering or processing of data that might result in prejudiced decisions on the basis of demographic features like race, sex, etc.
arXiv Detail & Related papers (2020-01-14T09:39:09Z)