On the Computational Complexity of Ethics: Moral Tractability for Minds and Machines
- URL: http://arxiv.org/abs/2302.04218v1
- Date: Wed, 8 Feb 2023 17:39:58 GMT
- Title: On the Computational Complexity of Ethics: Moral Tractability for Minds and Machines
- Authors: Jakob Stenseke
- Abstract summary: Debates on whether artificial intelligence (AI) can or should be used to solve problems in ethical domains have mainly been driven by what AI can or cannot do in terms of human capacities.
This paper explores what kind of moral machines are possible based on what computational systems can or cannot do.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Why should moral philosophers, moral psychologists, and machine ethicists
care about computational complexity? Debates on whether artificial intelligence
(AI) can or should be used to solve problems in ethical domains have mainly
been driven by what AI can or cannot do in terms of human capacities. In this
paper, we tackle the problem from the other end by exploring what kind of moral
machines are possible based on what computational systems can or cannot do. To
do so, we analyze normative ethics through the lens of computational
complexity. First, we introduce computational complexity for the uninitiated
reader and discuss how the complexity of ethical problems can be framed within
Marr's three levels of analysis. We then study a range of ethical problems
based on consequentialism, deontology, and virtue ethics, with the aim of
elucidating the complexity associated with the problems themselves (e.g., due
to combinatorics, uncertainty, strategic dynamics), the computational methods
employed (e.g., probability, logic, learning), and the available resources
(e.g., time, knowledge, learning). The results indicate that most problems the
normative frameworks pose lead to tractability issues in every category
analyzed. Our investigation also provides several insights about the
computational nature of normative ethics, including the differences between
rule- and outcome-based moral strategies, and the implementation-variance with
regard to moral resources. We then discuss the consequences that these
complexity results have for the prospect of moral machines, given the trade-off
between optimality and efficiency. Finally, we elucidate how computational complexity
can be used to inform both philosophical and cognitive-psychological research
on human morality by advancing the Moral Tractability Thesis (MTT).
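To make the tractability point concrete: under act consequentialism, an agent that scores every possible course of action faces a plan space that grows exponentially with the planning horizon. The following Python sketch is a hypothetical illustration of this combinatorial blow-up; the action set, utilities, and probabilities are invented for the example, not taken from the paper.

```python
import itertools

# Hypothetical toy model of exhaustive consequentialist evaluation.
# Each action has invented (utility, probability) outcome pairs; the
# point is only that the number of plans grows as |ACTIONS|**horizon.
ACTIONS = ["help", "wait", "warn"]
OUTCOMES = {
    "help": [(10, 0.6), (-5, 0.4)],
    "wait": [(0, 1.0)],
    "warn": [(4, 0.8), (-1, 0.2)],
}

def expected_utility(plan):
    """Expected utility of a plan, assuming independent, additive outcomes."""
    return sum(u * p for action in plan for (u, p) in OUTCOMES[action])

def best_plan(horizon):
    """Brute-force optimum: scores all |ACTIONS|**horizon plans."""
    return max(itertools.product(ACTIONS, repeat=horizon), key=expected_utility)

for horizon in (2, 5, 10):
    n_plans = len(ACTIONS) ** horizon
    print(f"horizon={horizon}: {n_plans} plans, best={best_plan(horizon)}")
```

Heuristics or dynamic programming can tame particular instances, but the general pattern, with optimal moral evaluation scaling exponentially in the decision horizon, is exactly the tension between optimality and efficiency the paper discusses.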
Related papers
- Can Artificial Intelligence Embody Moral Values? [0.0]
The neutrality thesis holds that technology cannot be laden with values.
In this paper, we argue that artificial intelligence, particularly artificial agents that autonomously make decisions to pursue their goals, challenges the neutrality thesis.
Our central claim is that the computational models underlying artificial agents can integrate representations of moral values such as fairness, honesty and avoiding harm.
arXiv Detail & Related papers (2024-08-22T09:39:16Z)
- Why should we ever automate moral decision making? [30.428729272730727]
Concerns arise when AI is involved in decisions with significant moral implications.
Because moral reasoning lacks a broadly accepted framework, an alternative approach is to have AI learn from human moral decisions.
arXiv Detail & Related papers (2024-07-10T13:59:22Z)
- Does Explainable AI Have Moral Value? [0.0]
Explainable AI (XAI) aims to bridge the gap between complex algorithmic systems and human stakeholders.
Current discourse often examines XAI in isolation as either a technological tool, user interface, or policy mechanism.
This paper proposes a unifying ethical framework grounded in moral duties and the concept of reciprocity.
arXiv Detail & Related papers (2023-11-05T15:59:27Z)
- Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? [78.3738172874685]
Making moral judgments is an essential step toward developing ethical AI systems.
Prevalent approaches are mostly bottom-up, using large sets of annotated data to train models on crowd-sourced opinions about morality.
This work proposes a flexible top-down framework to steer (Large) Language Models (LMs) to perform moral reasoning with well-established moral theories from interdisciplinary research.
arXiv Detail & Related papers (2023-08-29T15:57:32Z)
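The top-down steering described in the entry above can be prototyped with a simple prompt template. The sketch below is a hedged illustration, assuming invented theory summaries and a placeholder query_lm function; it is not the paper's actual framework.

```python
# Hypothetical sketch of top-down prompt steering: inject a moral
# theory's decision criterion before asking a language model to judge
# a scenario. The theory summaries and query_lm() are assumptions,
# not the framework from the paper.
THEORIES = {
    "consequentialism": "Judge the action solely by the goodness of its outcomes.",
    "deontology": "Judge the action by whether it conforms to moral duties and rules.",
    "virtue_ethics": "Judge the action by whether a virtuous person would perform it.",
}

def build_prompt(theory: str, scenario: str) -> str:
    return (
        f"Moral theory: {THEORIES[theory]}\n"
        f"Scenario: {scenario}\n"
        "Question: Is the action morally permissible? Answer 'yes' or 'no' "
        "and give a one-sentence justification grounded in the theory."
    )

def query_lm(prompt: str) -> str:
    # Placeholder: swap in a real model call (e.g., an API client) here.
    return "yes: ..."

scenario = "Lying to a friend to spare their feelings."
for theory in THEORIES:
    print(theory, "->", query_lm(build_prompt(theory, scenario)))
```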
- Ethical Frameworks and Computer Security Trolley Problems: Foundations for Conversations [14.120888473204907]
We make and explore connections between moral questions in computer security research and ethics / moral philosophy.
We do not seek to define what is morally right or wrong, nor do we argue for one framework over another.
arXiv Detail & Related papers (2023-02-28T05:39:17Z)
- Modeling Moral Choices in Social Dilemmas with Multi-Agent Reinforcement Learning [4.2050490361120465]
A bottom-up learning approach may be more appropriate for studying and developing ethical behavior in AI agents.
We present a systematic analysis of the choices made by intrinsically-motivated RL agents whose rewards are based on moral theories.
We analyze the impact of different types of morality on the emergence of cooperation, defection or exploitation.
arXiv Detail & Related papers (2023-01-20T09:36:42Z)
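One way to read "rewards based on moral theories" in the entry above is as intrinsic reward shaping. The sketch below is a hypothetical Python illustration using the standard Prisoner's Dilemma payoffs; the moral reward terms and the weight are assumptions for the example, not the paper's formulation.

```python
# Minimal sketch of intrinsically motivated moral rewards in an
# iterated Prisoner's Dilemma. The payoff matrix is the standard one;
# the moral reward terms and the weight are illustrative assumptions.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def moral_reward(theory, my_act, other_act):
    """Intrinsic reward derived from a moral theory."""
    mine, theirs = PAYOFF[(my_act, other_act)]
    if theory == "utilitarian":    # value the collective payoff
        return mine + theirs
    if theory == "deontological":  # penalize defecting on a cooperator
        return -4 if (my_act == "D" and other_act == "C") else 0
    return 0                       # amoral baseline

def shaped_reward(theory, my_act, other_act, weight=3.0):
    """Extrinsic (selfish) payoff plus weighted intrinsic moral reward."""
    mine, _ = PAYOFF[(my_act, other_act)]
    return mine + weight * moral_reward(theory, my_act, other_act)

# With a strong enough moral weight, mutual cooperation outscores
# unilateral defection for the utilitarian agent:
print(shaped_reward("utilitarian", "C", "C"))  # 3 + 3*6 = 21.0
print(shaped_reward("utilitarian", "D", "C"))  # 5 + 3*5 = 20.0
```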
- When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment [96.77970239683475]
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set of rule-breaking question-answering cases.
arXiv Detail & Related papers (2022-10-04T09:04:27Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI regulation should play in making the AI Act a success with respect to AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor a commonly accepted way of measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- Scruples: A Corpus of Community Ethical Judgments on 32,000 Real-Life Anecdotes [72.64975113835018]
Motivated by descriptive ethics, we investigate a novel, data-driven approach to machine ethics.
We introduce Scruples, the first large-scale dataset with 625,000 ethical judgments over 32,000 real-life anecdotes.
Our dataset presents a major challenge to state-of-the-art neural language models, leaving significant room for improvement.
arXiv Detail & Related papers (2020-08-20T17:34:15Z)
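A typical use of judgment-distribution data like that in the entry above is to score a model against the community majority. The sketch below is a hedged illustration with an invented record schema and a placeholder classifier; the actual Scruples fields and labels may differ.

```python
from collections import Counter

# Hypothetical sketch of evaluating a model against community
# judgments on real-life anecdotes. The record schema, labels, and
# classifier are illustrative assumptions, not the dataset's actual API.
records = [
    {"anecdote": "I read my roommate's diary...",
     "judgments": ["wrong", "wrong", "not_wrong", "wrong"]},
    {"anecdote": "I skipped a friend's wedding for work...",
     "judgments": ["not_wrong", "wrong", "not_wrong"]},
]

def model_verdict(anecdote: str) -> str:
    # Placeholder for a real classifier; always predicts one class.
    return "wrong"

correct = 0
for rec in records:
    majority = Counter(rec["judgments"]).most_common(1)[0][0]
    correct += (model_verdict(rec["anecdote"]) == majority)

print(f"agreement with community majority: {correct}/{len(records)}")
```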
- Learning from Learning Machines: Optimisation, Rules, and Social Norms [91.3755431537592]
It appears that the area of AI that is most analogous to the behaviour of economic entities is that of morally good decision-making.
Recent successes of deep learning for AI suggest that more implicit specifications work better than explicit ones for solving such problems.
arXiv Detail & Related papers (2019-12-29T17:42:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.