Antagonistic AI
- URL: http://arxiv.org/abs/2402.07350v1
- Date: Mon, 12 Feb 2024 00:44:37 GMT
- Title: Antagonistic AI
- Authors: Alice Cai, Ian Arawjo, Elena L. Glassman
- Abstract summary: We explore the shadow of the sycophantic paradigm, a design space we term antagonistic AI.
We consider whether antagonistic AI systems may sometimes have benefits to users, such as forcing users to confront their assumptions.
We lay out a design space for antagonistic AI, articulating potential benefits, design techniques, and methods of embedding antagonistic elements into user experience.
- Score: 11.25562632407588
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The vast majority of discourse around AI development assumes that
subservient, "moral" models aligned with "human values" are universally
beneficial -- in short, that good AI is sycophantic AI. We explore the shadow
of the sycophantic paradigm, a design space we term antagonistic AI: AI systems
that are disagreeable, rude, interrupting, confrontational, challenging, etc.
-- embedding opposite behaviors or values. Far from being "bad" or "immoral,"
we consider whether antagonistic AI systems may sometimes have benefits to
users, such as forcing users to confront their assumptions, build resilience,
or develop healthier relational boundaries. Drawing from formative explorations
and a speculative design workshop where participants designed fictional AI
technologies that employ antagonism, we lay out a design space for antagonistic
AI, articulating potential benefits, design techniques, and methods of
embedding antagonistic elements into user experience. Finally, we discuss the
many ethical challenges of this space and identify three dimensions for the
responsible design of antagonistic AI -- consent, context, and framing.
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that the shortcomings of current AI systems stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Rolling in the deep of cognitive and AI biases [1.556153237434314]
We argue that there is urgent need to understand AI as a sociotechnical system, inseparable from the conditions in which it is designed, developed and deployed.
We address this critical issue by following a radical new methodology under which human cognitive biases become core entities in our AI fairness overview.
We introduce a new mapping that connects human cognitive biases to AI biases, and we detect the relevant fairness intensities and interdependencies.
arXiv Detail & Related papers (2024-07-30T21:34:04Z)
- There and Back Again: The AI Alignment Paradox [10.674155943520729]
The better we align AI models with our values, the easier we make it for adversaries to misalign the models.
With AI's increasing real-world impact, it is imperative that a broad community of researchers be aware of the AI alignment paradox.
arXiv Detail & Related papers (2024-05-31T14:06:24Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow users to examine and test AI system predictions, establishing a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Dynamic Models Applied to Value Learning in Artificial Intelligence (original title in Portuguese: "Modelos dinâmicos aplicados à aprendizagem de valores em inteligência artificial") [0.0]
Several researchers in the area have worked toward a robust, beneficial, and safe concept of AI for the preservation of humanity and the environment.
It is of the utmost importance that artificially intelligent agents have their values aligned with human values.
Perhaps the difficulty of value alignment stems from the way we address the problem of expressing values using cognitive methods.
arXiv Detail & Related papers (2020-07-30T00:56:11Z)
- Dynamic Cognition Applied to Value Learning in Artificial Intelligence [0.0]
Several researchers in the area are trying to develop a robust, beneficial, and safe concept of artificial intelligence.
It is of the utmost importance that artificially intelligent agents have their values aligned with human values.
A possible approach to this problem would be to use theoretical models such as SED.
arXiv Detail & Related papers (2020-05-12T03:58:52Z)