Low impact agency: review and discussion
- URL: http://arxiv.org/abs/2303.03139v1
- Date: Mon, 6 Mar 2023 13:55:42 GMT
- Title: Low impact agency: review and discussion
- Authors: Danilo Naiff, Shashwat Goel
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Powerful artificial intelligence poses an existential threat if the AI decides to drastically change the world in pursuit of its goals. The hope of low-impact artificial intelligence is to incentivize the AI not to do so, precisely because doing so would have a large impact on the world. In this work, we first review the concept of low-impact agency and previous proposals for approaching the problem, and then propose future research directions on the topic, with the goal of ensuring that low impact is useful in making AI safe.
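Concretely, the proposals reviewed under this heading typically score an action by its task reward minus a penalty on how much the action changes the world or the agent's ability to affect it, in the spirit of attainable utility preservation. A minimal sketch of such an impact-penalized reward, where all names (`task_reward`, `aux_values`, `noop`, `lam`) are illustrative assumptions rather than the paper's own notation:

```python
# Minimal sketch of an impact-penalized reward in the spirit of
# attainable-utility-preservation-style proposals. All names here
# (task_reward, aux_values, noop, lam) are illustrative assumptions,
# not notation from the paper.

from typing import Callable, List

def impact_penalty(state, action, noop, aux_values: List[Callable]) -> float:
    """Average shift, relative to doing nothing, in the agent's ability
    to pursue a set of auxiliary goals (one value function per goal)."""
    shifts = [abs(q(state, action) - q(state, noop)) for q in aux_values]
    return sum(shifts) / max(len(shifts), 1)

def low_impact_reward(
    state,
    action,
    noop,
    task_reward: Callable,
    aux_values: List[Callable],
    lam: float = 1.0,
) -> float:
    # Task reward minus a scaled impact penalty: drastic side effects
    # are discouraged simply because they constitute large impact.
    return task_reward(state, action) - lam * impact_penalty(
        state, action, noop, aux_values
    )
```

Tuning `lam` trades task performance against conservatism: with `lam = 0` the agent is an ordinary reward maximizer, while a large `lam` pushes it toward actions whose consequences stay close to those of doing nothing.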
Related papers
- Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making [57.53469908423318]
We study the effects of performance pressure on laypeople's reliance on AI advice as they complete a common AI-assisted task.
We find that when the stakes are high, people use AI advice more appropriately than when stakes are lower, regardless of the presence of an AI explanation.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI [67.58673784790375]
We argue that the 'bigger is better' AI paradigm is not only scientifically fragile but also comes with undesirable consequences.
First, it is not sustainable, as its compute demands increase faster than model performance, leading to unreasonable economic requirements and a disproportionate environmental footprint.
Second, it implies focusing on certain problems at the expense of others, leaving aside important applications, e.g. health, education, or the climate.
arXiv Detail & Related papers (2024-09-21T14:43:54Z)
- Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z)
- Artificial Intelligence: Arguments for Catastrophic Risk [0.0]
We review two influential arguments purporting to show how AI could pose catastrophic risks.
The first argument -- the Problem of Power-Seeking -- claims that advanced AI systems are likely to engage in dangerous power-seeking behavior.
The second argument claims that the development of human-level AI will unlock rapid further progress.
arXiv Detail & Related papers (2024-01-27T19:34:13Z)
- Intent-aligned AI systems deplete human agency: the need for agency foundations research in AI safety [2.3572498744567127]
We argue that alignment to human intent is insufficient for safe AI systems.
We argue that preservation of long-term agency of humans may be a more robust standard.
arXiv Detail & Related papers (2023-05-30T17:14:01Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Positive AI: Key Challenges in Designing Artificial Intelligence for Wellbeing [0.5461938536945723]
Many people are increasingly worried about AI's impact on their lives.
To ensure AI progresses beneficially, some researchers have proposed "wellbeing" as a key objective to govern AI.
This article addresses key challenges in designing AI for wellbeing.
arXiv Detail & Related papers (2023-04-12T12:43:00Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Ethical Considerations for AI Researchers [0.0]
Use of artificial intelligence is growing and expanding into applications that impact people's lives.
There is potential for harm, and we are already seeing examples of it in the world.
While the ethics of AI is not clear-cut, there are guidelines we can consider to minimize the harm we might introduce.
arXiv Detail & Related papers (2020-06-13T04:31:42Z)