Views on AI aren't binary -- they're plural
- URL: http://arxiv.org/abs/2312.14230v2
- Date: Mon, 23 Sep 2024 17:23:47 GMT
- Title: Views on AI aren't binary -- they're plural
- Authors: Thorin Bristow, Luke Thorburn, Diana Acosta-Navas
- Abstract summary: We argue that a simple binary is not an accurate model of AI discourse.
We provide concrete suggestions for how individuals can help avoid the emergence of us-vs-them conflict in the broad community of people working on AI development and governance.
- Score: 0.10241134756773229
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent developments in AI have brought broader attention to tensions between two overlapping communities, "AI Ethics" and "AI Safety." In this article we (i) characterize this false binary, (ii) argue that a simple binary is not an accurate model of AI discourse, and (iii) provide concrete suggestions for how individuals can help avoid the emergence of us-vs-them conflict in the broad community of people working on AI development and governance. While we focus on "AI Ethics" and "AI Safety," the general lessons apply to related tensions, including those between accelerationist ("e/acc") and cautious stances on AI development.
Related papers
- The AI Alignment Paradox [10.674155943520729]
The better we align AI models with our values, the easier we may make it for adversaries to misalign the models.
With AI's increasing real-world impact, it is imperative that a broad community of researchers be aware of the AI alignment paradox.
arXiv Detail & Related papers (2024-05-31T14:06:24Z)
- AI Safety: Necessary, but insufficient and possibly problematic [1.6797508081737678]
This article critically examines the recent hype around AI safety.
We consider what 'AI safety' actually means and outline the dominant concepts with which the digital footprint of AI safety aligns.
We share our concerns on how AI safety may normalize AI that advances structural harm through providing exploitative and harmful AI with a veneer of safety.
arXiv Detail & Related papers (2024-03-26T06:18:42Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General-purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate on and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Acceleration AI Ethics, the Debate between Innovation and Safety, and Stability AI's Diffusion versus OpenAI's Dall-E [0.0]
This presentation responds to the innovation-versus-safety debate by reconfiguring ethics as an innovation accelerator.
In this view, the work of ethics is embedded in AI development and application rather than functioning from the outside.
arXiv Detail & Related papers (2022-12-04T14:54:13Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- The social dilemma in AI development and why we have to solve it [2.707154152696381]
We argue that AI developers face a social dilemma in AI development ethics, preventing the widespread adoption of ethical best practices.
We argue that AI development must be professionalised to overcome the social dilemma, and discuss how medicine can be used as a template in this process.
arXiv Detail & Related papers (2021-07-27T17:43:48Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Socially Responsible AI Algorithms: Issues, Purposes, and Challenges [31.382000425295885]
Technologists and AI researchers have a responsibility to develop trustworthy AI systems.
To build long-lasting trust between AI and human beings, we argue that the key is to think beyond algorithmic fairness.
arXiv Detail & Related papers (2021-01-01T17:34:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.