Expected Utilitarianism
- URL: http://arxiv.org/abs/2008.07321v1
- Date: Sun, 19 Jul 2020 15:44:04 GMT
- Title: Expected Utilitarianism
- Authors: Heather M. Roff
- Abstract summary: We want artificial intelligence (AI) to be beneficial. This is the grounding assumption of most of the attitudes towards AI research.
We want AI to help, not hinder, humans. Yet what exactly this entails in theory and in practice is not immediately apparent.
We draw two conclusions from this. First, if one believes that a beneficial AI is an ethical AI, then one is committed to a framework that posits that 'benefit' is tantamount to the greatest good for the greatest number.
Second, if the AI relies on RL, then the way it reasons about itself, the environment, and other agents will be through an act utilitarian morality.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We want artificial intelligence (AI) to be beneficial. This is the grounding
assumption of most of the attitudes towards AI research. We want AI to be
"good" for humanity. We want it to help, not hinder, humans. Yet what exactly
this entails in theory and in practice is not immediately apparent.
Theoretically, this declarative statement subtly implies a commitment to a
consequentialist ethics. Practically, some of the more promising machine
learning techniques for creating a robust AI, and perhaps even an artificial
general intelligence (AGI), also commit one to a form of utilitarianism. In both
dimensions, the logic of the beneficial AI movement may not in fact create
"beneficial AI" in either narrow applications or in the form of AGI if the
ethical assumptions are not made explicit and clear.
Additionally, as it is likely that reinforcement learning (RL) will be an
important technique for machine learning in this area, it is also important to
interrogate how RL smuggles a particular type of consequentialist reasoning
into the AI: in particular, a brute form of hedonistic act utilitarianism. Since
the mathematical logic commits one to a maximization function, the result is
that an AI will inevitably seek more and more rewards. Two conclusions
arise from this. First, if one believes that a beneficial AI is an ethical AI,
then one is committed to a framework that posits that 'benefit' is tantamount
to the greatest good for the greatest number.
Second, if the AI relies on RL, then the way it reasons about itself, the
environment, and other agents, will be through an act utilitarian morality.
This proposition may or may not, in fact, be beneficial for humanity.
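The abstract's claim that "the mathematical logic commits one to a maximization function" refers to the standard RL objective, in which an agent selects the policy that maximizes expected cumulative discounted reward. As a sketch in standard notation (this formulation is textbook RL, not drawn from the paper itself):

```latex
% Standard RL objective (textbook notation, not the paper's own formalism):
% \pi is the agent's policy, r_t the reward received at time step t,
% and \gamma \in [0,1) the discount factor weighting future rewards.
\pi^{*} \;=\; \arg\max_{\pi}\; \mathbb{E}_{\pi}\!\left[\, \sum_{t=0}^{\infty} \gamma^{t}\, r_{t} \,\right]
```

Under this objective, "benefit" is whatever the reward function encodes, and every action is evaluated solely by the expected reward it yields; this is the sense in which the paper reads RL as a brute form of hedonistic act utilitarianism.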
Related papers
- On the consistent reasoning paradox of intelligence and optimal trust in AI: The power of 'I don't know' [79.69412622010249]
Consistent reasoning, which lies at the core of human intelligence, is the ability to handle tasks that are equivalent.
The Consistent Reasoning Paradox (CRP) asserts that consistent reasoning implies fallibility; in particular, human-like intelligence in AI necessarily comes with human-like fallibility.
arXiv Detail & Related papers (2024-08-05T10:06:53Z) - Making AI Intelligible: Philosophical Foundations [0.0]
'Making AI Intelligible' shows that philosophical work on the metaphysics of meaning can help answer these questions.
The questions addressed in the book are not only theoretically interesting; the answers also have pressing practical implications.
arXiv Detail & Related papers (2024-06-12T12:25:04Z) - A Bibliometric View of AI Ethics Development [4.0998481751764]
We perform a bibliometric analysis of AI Ethics literature for the last 20 years based on keyword search.
We conjecture that the next phase of AI ethics is likely to focus on making AI more machine-like as AI matches or surpasses humans intellectually.
arXiv Detail & Related papers (2024-02-08T16:36:55Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor commonly accepted way for measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - Some Critical and Ethical Perspectives on the Empirical Turn of AI Interpretability [0.0]
We consider two issues currently faced by Artificial Intelligence development: the lack of ethics and interpretability of AI decisions.
We experimentally show that the empirical and liberal turn of the production of explanations tends to select AI explanations with a low denunciatory power.
We propose two scenarios for the future development of ethical AI: more external regulation or more liberalization of AI explanations.
arXiv Detail & Related papers (2021-09-20T14:41:50Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z) - "Weak AI" is Likely to Never Become "Strong AI", So What is its Greatest Value for us? [4.497097230665825]
Many researchers argue that little substantial progress has been made for AI in recent decades.
The author (1) explains why controversies about AI exist and (2) distinguishes two paradigms of AI research, termed "weak AI" and "strong AI".
arXiv Detail & Related papers (2021-03-29T02:57:48Z) - Ethical Considerations for AI Researchers [0.0]
Use of artificial intelligence is growing and expanding into applications that impact people's lives.
There is the potential for harm and we are already seeing examples of that in the world.
While the ethics of AI is not clear-cut, there are guidelines we can consider to minimize the harm we might introduce.
arXiv Detail & Related papers (2020-06-13T04:31:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.