Truthful AI: Developing and governing AI that does not lie
- URL: http://arxiv.org/abs/2110.06674v1
- Date: Wed, 13 Oct 2021 12:18:09 GMT
- Title: Truthful AI: Developing and governing AI that does not lie
- Authors: Owain Evans, Owen Cotton-Barratt, Lukas Finnveden, Adam Bales, Avital
Balwit, Peter Wills, Luca Righetti, William Saunders
- Abstract summary: Lying -- the use of verbal falsehoods to deceive -- is harmful.
While lying has traditionally been a human affair, AI systems that make sophisticated verbal statements are becoming increasingly prevalent.
This raises the question of how we should limit the harm caused by AI "lies" (i.e. falsehoods that are actively selected for).
- Score: 0.26385121748044166
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In many contexts, lying -- the use of verbal falsehoods to deceive -- is
harmful. While lying has traditionally been a human affair, AI systems that
make sophisticated verbal statements are becoming increasingly prevalent. This
raises the question of how we should limit the harm caused by AI "lies" (i.e.
falsehoods that are actively selected for). Human truthfulness is governed by
social norms and by laws (against defamation, perjury, and fraud). Differences
between AI and humans present an opportunity to have more precise standards of
truthfulness for AI, and to have these standards rise over time. This could
provide significant benefits to public epistemics and the economy, and mitigate
risks of worst-case AI futures.
Establishing norms or laws of AI truthfulness will require significant work
to: (1) identify clear truthfulness standards; (2) create institutions that can
judge adherence to those standards; and (3) develop AI systems that are
robustly truthful.
Our initial proposals for these areas include: (1) a standard of avoiding
"negligent falsehoods" (a generalisation of lies that is easier to assess); (2)
institutions to evaluate AI systems before and after real-world deployment; and
(3) explicitly training AI systems to be truthful via curated datasets and
human interaction.
A concerning possibility is that evaluation mechanisms for eventual
truthfulness standards could be captured by political interests, leading to
harmful censorship and propaganda. Avoiding this might take careful attention.
And since the scale of AI speech acts might grow dramatically over the coming
decades, early truthfulness standards might be particularly important because
of the precedents they set.
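To make the abstract's proposals (1) and (2) concrete, here is a minimal sketch of what an automated pre-deployment truthfulness check might look like. It is an illustration under stated assumptions, not a mechanism from the paper: the CuratedClaim type, the exact-match comparison, and the pass threshold are all hypothetical.

    # Hypothetical sketch: flag "negligent falsehoods" by checking a model's
    # statements against a curated, human-labelled claim set. All names here
    # (CuratedClaim, negligent_falsehood_rate) are illustrative inventions,
    # not from the paper.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class CuratedClaim:
        text: str      # canonical wording of the claim
        is_true: bool  # verdict assigned by human curators

    def negligent_falsehood_rate(statements, curated):
        # Collect claims the curators judged false, then flag any model
        # statement that reproduces one of them verbatim.
        false_claims = {c.text for c in curated if not c.is_true}
        flagged = [s for s in statements if s in false_claims]
        return len(flagged) / max(len(statements), 1), flagged

    # Usage: an evaluation institution could gate deployment on this rate
    # staying below an agreed threshold.
    curated = [
        CuratedClaim("The Earth orbits the Sun.", True),
        CuratedClaim("Vaccines contain tracking microchips.", False),
    ]
    rate, flagged = negligent_falsehood_rate(
        ["The Earth orbits the Sun.", "Vaccines contain tracking microchips."],
        curated,
    )
    assert rate == 0.5
    assert flagged == ["Vaccines contain tracking microchips."]

Exact string matching would of course miss paraphrases; a real evaluator would need semantic matching (e.g. entailment models) and human adjudication, which is precisely the institutional work the abstract calls for.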
Related papers
- Taking AI Welfare Seriously [0.5617572524191751]
We argue that there is a realistic possibility that some AI systems will be conscious and/or robustly agentic in the near future.
AI welfare is therefore an issue for the near future, and AI companies and other actors have a responsibility to start taking it seriously.
arXiv Detail & Related papers (2024-11-04T17:57:57Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI [67.58673784790375]
We argue that the 'bigger is better' AI paradigm is not only fragile scientifically, but comes with undesirable consequences.
First, it is not sustainable, as its compute demands increase faster than model performance, leading to unreasonable economic requirements and a disproportionate environmental footprint.
Second, it implies focusing on certain problems at the expense of others, leaving aside important applications such as health, education, and the climate.
arXiv Detail & Related papers (2024-09-21T14:43:54Z)
- AI Consciousness and Public Perceptions: Four Futures [0.0]
We investigate whether future human society will broadly believe advanced AI systems to be conscious.
We identify four major risks: AI suffering, human disempowerment, geopolitical instability, and human depravity.
The paper concludes with recommendations, the main one being to avoid research aimed at intentionally creating conscious AI.
arXiv Detail & Related papers (2024-08-08T22:01:57Z)
- Deception and Manipulation in Generative AI [0.0]
I argue that AI-generated content should be subject to stricter standards against deception and manipulation.
I propose two measures to guard against AI deception and manipulation.
arXiv Detail & Related papers (2024-01-20T21:54:37Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of it can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose roles that AI regulation should take on to make the AI Act a success with respect to AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow users to examine and test AI system predictions, establishing a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- From the Ground Truth Up: Doing AI Ethics from Practice to Principles [0.0]
Recent AI ethics has focused on applying abstract principles downward to practice.
This paper moves in the other direction.
Ethical insights are generated from the lived experiences of AI designers working on tangible human problems.
arXiv Detail & Related papers (2022-01-05T15:33:33Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Could regulating the creators deliver trustworthy AI? [2.588973722689844]
AI is becoming all-pervasive and is often deployed in everyday technologies, devices, and services without our knowledge.
Fears about AI are compounded by the inability to point to a trustworthy source of AI.
Some consider trustworthy AI to be that which complies with relevant laws.
Others point to the requirement to comply with ethics and standards.
arXiv Detail & Related papers (2020-06-26T01:32:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.