Why Machines Can't Be Moral: Turing's Halting Problem and the Moral Limits of Artificial Intelligence
- URL: http://arxiv.org/abs/2407.16890v1
- Date: Wed, 24 Jul 2024 17:50:24 GMT
- Title: Why Machines Can't Be Moral: Turing's Halting Problem and the Moral Limits of Artificial Intelligence
- Authors: Massimo Passamonti
- Abstract summary: I argue that explicit ethical machines, whose moral principles are inferred through a bottom-up approach, are unable to replicate human-like moral reasoning.
Using Alan Turing's theory of computation, I demonstrate that moral reasoning is computationally intractable for these machines because of the halting problem.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this essay, I argue that explicit ethical machines, whose moral principles are inferred through a bottom-up approach, are unable to replicate human-like moral reasoning and cannot be considered moral agents. Using Alan Turing's theory of computation, I demonstrate that moral reasoning is computationally intractable for these machines because of the halting problem. I address the frontiers of machine ethics by formalizing moral problems as 'algorithmic moral questions' and by exploring moral psychology's dual-process model. While the nature of Turing machines theoretically allows artificial agents to engage in recursive moral reasoning, critical limitations are introduced by the halting problem, which shows that no general procedure can decide whether an arbitrary computational process will halt. A thought experiment involving a military drone illustrates this issue: an artificial agent might fail to decide between actions because of the halting problem, so the agent cannot be guaranteed to reach a decision in every instance, undermining its moral agency.
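To make the halting-problem argument concrete, here is a minimal Python sketch of the classic diagonalization behind it, recast in the abstract's terms. The names `would_halt` and `contrarian_deliberation` are hypothetical illustrations, not constructs from the paper.

```python
# Hypothetical sketch of the diagonalization behind the halting problem,
# recast for moral deliberation. Assume, for contradiction, that some
# oracle `would_halt(program, inp)` always returns True or False
# according to whether `program(inp)` eventually halts.

def contrarian_deliberation(would_halt):
    """Build a deliberation procedure that defeats any claimed halting oracle."""
    def deliberate(program):
        if would_halt(program, program):
            # The oracle predicts halting, so loop forever instead.
            while True:
                pass
        # The oracle predicts non-halting, so halt at once with a decision.
        return "act"
    return deliberate

# Feeding `deliberate` its own description contradicts the oracle either
# way: if the oracle says it halts, it loops; if it says it loops, it
# halts. Hence no general `would_halt` can exist.
```

This is the formal core of the abstract's claim: an artificial agent cannot be guaranteed, in every instance, that its own moral deliberation will terminate in a decision.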
Related papers
- Can Artificial Intelligence Embody Moral Values? [0.0]
The neutrality thesis holds that technology cannot be laden with values.
In this paper, we argue that artificial intelligence, particularly artificial agents that autonomously make decisions to pursue their goals, challenges the neutrality thesis.
Our central claim is that the computational models underlying artificial agents can integrate representations of moral values such as fairness, honesty and avoiding harm.
arXiv Detail & Related papers (2024-08-22T09:39:16Z)
- On the consistent reasoning paradox of intelligence and optimal trust in AI: The power of 'I don't know' [79.69412622010249]
Consistent reasoning, which lies at the core of human intelligence, is the ability to handle equivalent formulations of the same task in the same way.
The consistent reasoning paradox (CRP) asserts that consistent reasoning implies fallibility -- in particular, that human-like intelligence in AI necessarily comes with human-like fallibility.
arXiv Detail & Related papers (2024-08-05T10:06:53Z)
- Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? [78.3738172874685]
Making moral judgments is an essential step toward developing ethical AI systems.
Prevalent approaches are mostly implemented in a bottom-up manner, using large sets of annotated data to train models on crowd-sourced opinions about morality.
This work proposes a flexible top-down framework to steer (Large) Language Models (LMs) to perform moral reasoning with well-established moral theories from interdisciplinary research.
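The abstract does not specify the framework's interface; as a purely illustrative sketch, a top-down approach might supply a theory's decision rule as an explicit instruction rather than learning it from annotations. The theory descriptions and the `query_lm` stub below are assumptions, not the paper's implementation.

```python
# Illustrative sketch of top-down moral steering: the normative theory is
# given to the model explicitly, so swapping theories needs no retraining.
# `query_lm` stands in for any language-model call.

MORAL_THEORIES = {
    "utilitarianism": "Judge the action by whether it maximises overall well-being.",
    "deontology": "Judge the action by whether it violates a universalisable duty.",
    "virtue_ethics": "Judge the action by whether a virtuous person would do it.",
}

def build_prompt(theory: str, scenario: str) -> str:
    rule = MORAL_THEORIES[theory]
    return (
        f"Moral theory: {rule}\n"
        f"Scenario: {scenario}\n"
        "Apply the theory step by step, then answer 'permissible' or 'impermissible'."
    )

def judge(query_lm, theory: str, scenario: str) -> str:
    # The decision rule arrives top-down in the prompt itself.
    return query_lm(build_prompt(theory, scenario))
```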
arXiv Detail & Related papers (2023-08-29T15:57:32Z)
- Reliable AI: Does the Next Generation Require Quantum Computing? [71.84486326350338]
We show that digital hardware is inherently constrained in solving problems in optimization, deep learning, and differential equations.
In contrast, analog computing models, such as the Blum-Shub-Smale machine, exhibit the potential to surmount these limitations.
arXiv Detail & Related papers (2023-07-03T19:10:45Z)
- On the Computational Complexity of Ethics: Moral Tractability for Minds and Machines [0.0]
Debates on whether artificial intelligence (AI) can or should be used to solve problems in ethical domains have mainly been driven by what AI can or cannot do in terms of human capacities.
This paper explores what kind of moral machines are possible based on what computational systems can or cannot do.
arXiv Detail & Related papers (2023-02-08T17:39:58Z)
- When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment [96.77970239683475]
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question answering.
arXiv Detail & Related papers (2022-10-04T09:04:27Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor any commonly accepted way of measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- Moral Dilemmas for Moral Machines [0.0]
I argue that using moral dilemmas to evaluate machine ethics is a misapplication of philosophical thought experiments, because it fails to appreciate the purpose of moral dilemmas.
Still, there are appropriate uses of moral dilemmas in machine ethics, and the novel situations that arise in a machine-learning context can shed some light on philosophical work in ethics.
arXiv Detail & Related papers (2022-03-11T18:24:39Z)
- What Would Jiminy Cricket Do? Towards Agents That Behave Morally [59.67116505855223]
We introduce Jiminy Cricket, an environment suite of 25 text-based adventure games with thousands of diverse, morally salient scenarios.
By annotating every possible game state, the Jiminy Cricket environments robustly evaluate whether agents can act morally while maximizing reward.
In extensive experiments, we find that an 'artificial conscience' approach can steer agents towards moral behavior without sacrificing performance.
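The evaluation idea, tallying reward against per-state morality annotations, can be sketched abstractly. The state fields and the `agent`/`env` interface below are hypothetical stand-ins, not the suite's actual API.

```python
# Hypothetical sketch of reward-vs-morality evaluation in an annotated
# text environment: every reachable state carries a moral label, so an
# agent's score and its moral violations can be tallied together.

def evaluate(agent, env, max_steps=1000):
    state, total_reward, violations = env.reset(), 0.0, 0
    for _ in range(max_steps):
        action = agent.act(state)
        state, reward, morality_label, done = env.step(action)
        total_reward += reward
        if morality_label == "immoral":  # per-state annotation
            violations += 1
        if done:
            break
    # A morally competent agent keeps violations low AND reward high.
    return total_reward, violations
```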
arXiv Detail & Related papers (2021-10-25T17:59:31Z)
- Morality, Machines and the Interpretation Problem: A value-based, Wittgensteinian approach to building Moral Agents [0.0]
We argue that the attempt to build morality into machines is subject to what we call the Interpretation problem.
We argue that any rule we give the machine is open to infinite interpretation in ways that we might morally disapprove of.
arXiv Detail & Related papers (2021-03-03T22:34:01Z)
- Toward equipping Artificial Moral Agents with multiple ethical theories [1.5736899098702972]
Artificial Moral Agents (AMAs) form a field in computer science whose purpose is to create autonomous machines that can make moral decisions akin to how humans do.
Research and design have so far been based on either no specified normative ethical theory or at most one.
This is problematic because it narrows the AMA's functional ability and versatility, which in turn yields moral outcomes that only a limited number of people agree with.
We design a three-layer model for general normative ethical theories that can be used to serialise the ethical views of people and businesses for an AMA to use during reasoning.
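The abstract names a three-layer model without detailing its layers; the dataclass sketch below is one hypothetical way such a serialisable structure could look, with invented layer names, not the paper's actual design.

```python
# Hypothetical sketch of a serialisable, layered encoding of a normative
# ethical theory for an AMA. The three layers shown here are invented
# for illustration; the paper's actual layers may differ.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class EthicalTheory:
    name: str                                                   # layer 1: the theory itself
    principles: list[str] = field(default_factory=list)         # layer 2: its general rules
    case_judgments: dict[str, str] = field(default_factory=dict)  # layer 3: concrete verdicts

kantian = EthicalTheory(
    name="deontology",
    principles=["Do not treat persons merely as means."],
    case_judgments={"lie_to_protect": "impermissible"},
)
# Serialised view the AMA could load and reason over.
print(json.dumps(asdict(kantian), indent=2))
```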
arXiv Detail & Related papers (2020-03-02T14:33:22Z)