AI and Blackness: Towards moving beyond bias and representation
- URL: http://arxiv.org/abs/2111.03687v1
- Date: Fri, 5 Nov 2021 18:24:54 GMT
- Title: AI and Blackness: Towards moving beyond bias and representation
- Authors: Christopher L. Dancy and P. Khalil Saucier
- Abstract summary: We argue that AI ethics must move beyond the concepts of race-based representation and bias.
Antiblackness in AI requires more of an examination of the ontological space that provides a foundation for the design, development, and deployment of AI systems.
- Score: 0.8223798883838329
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we argue that AI ethics must move beyond the concepts of
race-based representation and bias, and towards those that probe the deeper
relations that impact how these systems are designed, developed, and deployed.
Many recent discussions on ethical considerations of bias in AI systems have
centered on racial bias. We contend that antiblackness in AI requires more of
an examination of the ontological space that provides a foundation for the
design, development, and deployment of AI systems. We examine what this
contention means from the perspective of the sociocultural context in which AI
systems are designed, developed, and deployed and focus on intersections with
anti-Black racism (antiblackness). To bring these multiple perspectives
together and show an example of antiblackness in the face of attempts at
de-biasing, we discuss results from auditing an existing open-source semantic
network (ConceptNet). We use this discussion to further contextualize
antiblackness in design, development, and deployment of AI systems and suggest
questions one may ask when attempting to combat antiblackness in AI systems.
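The paper reports results from auditing ConceptNet, but the audit procedure itself is not reproduced in this summary. As a minimal sketch of what probing ConceptNet for associations can look like, the snippet below queries the public ConceptNet 5 REST API (api.conceptnet.io) for the strongest-weighted edges around a concept; the helper name `top_edges` and the specific parameters are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch (not the paper's audit protocol): list the top-weighted
# edges ConceptNet returns for a concept, so its associations can be inspected.
# Assumes network access to the public API at api.conceptnet.io.
import requests

API = "http://api.conceptnet.io/c/en/{term}"

def top_edges(term: str, limit: int = 20):
    """Fetch up to `limit` edges for an English-language concept node."""
    resp = requests.get(API.format(term=term), params={"limit": limit})
    resp.raise_for_status()
    edges = resp.json().get("edges", [])
    # Sort by edge weight so the strongest associations surface first.
    edges.sort(key=lambda e: e.get("weight", 0.0), reverse=True)
    return [
        (e["rel"]["label"], e["start"]["label"], e["end"]["label"],
         e.get("weight", 0.0))
        for e in edges
    ]

if __name__ == "__main__":
    for rel, start, end, weight in top_edges("person"):
        print(f"{start} --{rel}--> {end}  (weight={weight:.2f})")
```

Running this for contrasting concept terms and comparing the returned relations and weights is one rough way to surface the kinds of differential associations that the paper's audit of the semantic network examines.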
Related papers
- AI Automatons: AI Systems Intended to Imitate Humans [54.19152688545896]
There is a growing proliferation of AI systems designed to mimic people's behavior, work, abilities, likenesses, or humanness.
The research, design, deployment, and availability of such AI systems have prompted growing concerns about a wide range of possible legal, ethical, and other social impacts.
arXiv Detail & Related papers (2025-03-04T03:55:38Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act), using insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Towards Bidirectional Human-AI Alignment: A Systematic Review for Clarifications, Framework, and Future Directions [101.67121669727354]
Recent advancements in AI have highlighted the importance of guiding AI systems towards the intended goals, ethical principles, and values of individuals and groups, a concept broadly recognized as alignment.
The lack of clarified definitions and scopes of human-AI alignment poses a significant obstacle, hampering collaborative efforts across research domains to achieve this alignment.
We introduce a systematic review of over 400 papers published between 2019 and January 2024, spanning multiple domains such as Human-Computer Interaction (HCI), Natural Language Processing (NLP), and Machine Learning (ML).
arXiv Detail & Related papers (2024-06-13T16:03:25Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate on and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- Antagonistic AI [11.25562632407588]
We explore the shadow of the sycophantic paradigm, a design space we term antagonistic AI.
We consider whether antagonistic AI systems may sometimes have benefits to users, such as forcing users to confront their assumptions.
We lay out a design space for antagonistic AI, articulating potential benefits, design techniques, and methods of embedding antagonistic elements into the user experience.
arXiv Detail & Related papers (2024-02-12T00:44:37Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Painting the black box white: experimental findings from applying XAI to an ECG reading setting [0.13124513975412253]
The shift from symbolic AI systems to black-box, sub-symbolic, and statistical ones has motivated a rapid increase in interest toward explainable AI (XAI).
We focus on the cognitive dimension of users' perception of explanations and XAI systems.
arXiv Detail & Related papers (2022-10-27T07:47:50Z)
- Using a Cognitive Architecture to consider antiblackness in design and development of AI systems [0.548253258922555]
How might we use cognitive modeling to consider the ways in which antiblackness, and racism more broadly, impact the design and development of AI systems?
We use the ACT-R/Phi cognitive architecture and an existing knowledge graph system, ConceptNet, to consider this question.
arXiv Detail & Related papers (2022-07-01T19:39:13Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Relational Artificial Intelligence [5.5586788751870175]
Even though AI is traditionally associated with rational decision making, understanding and shaping the societal impact of AI in all its facets requires a relational perspective.
A rational approach to AI, where computational algorithms drive decision making independent of human intervention, has been shown to result in bias and exclusion.
A relational approach, which focuses on the relational nature of things, is needed to deal with the ethical, legal, societal, cultural, and environmental implications of AI.
arXiv Detail & Related papers (2022-02-04T15:29:57Z)
- LioNets: A Neural-Specific Local Interpretation Technique Exploiting Penultimate Layer Information [6.570220157893279]
Interpretable machine learning (IML) is an urgent topic of research.
This paper focuses on a local-based, neural-specific interpretation process applied to textual and time-series data.
arXiv Detail & Related papers (2021-04-13T09:39:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.