How Should AI Interpret Rules? A Defense of Minimally Defeasible Interpretive Argumentation
- URL: http://arxiv.org/abs/2110.13341v1
- Date: Tue, 26 Oct 2021 00:58:05 GMT
- Title: How Should AI Interpret Rules? A Defense of Minimally Defeasible Interpretive Argumentation
- Authors: John Licato
- Abstract summary: Real-world rules are unavoidably rife with open-textured terms.
The ability to follow such rules, and to reason about them, is not nearly as clear-cut as it seems on first analysis.
I defend the following answer: Rule-following AI should act in accordance with the interpretation best supported by minimally defeasible interpretive arguments.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Can artificially intelligent systems follow rules? The answer might seem an
obvious 'yes', in the sense that all (current) AI strictly acts in accordance
with programming code constructed from highly formalized and well-defined
rulesets. But here I refer to the kinds of rules expressed in human language
that are the basis of laws, regulations, codes of conduct, ethical guidelines,
and so on. The ability to follow such rules, and to reason about them, is not
nearly as clear-cut as it seems on first analysis. Real-world rules are
unavoidably rife with open-textured terms, which imbue rules with a possibly
infinite set of interpretations. Narrowing down this set requires a
complex reasoning process that is not yet within the scope of contemporary AI.
This poses a serious problem for autonomous AI: If one cannot reason about
open-textured terms, then one cannot reason about (or in accordance with)
real-world rules. And if one cannot reason about real-world rules, then one
cannot: follow human laws, comply with regulations, act in accordance with
written agreements, or even obey mission-specific commands that are anything
more than trivial. But before tackling these problems, we must first answer a
more fundamental question: Given an open-textured rule, what is its correct
interpretation? Or more precisely: How should our artificially intelligent
systems determine which interpretation to consider correct? In this essay, I
defend the following answer: Rule-following AI should act in accordance with
the interpretation best supported by minimally defeasible interpretive
arguments (MDIA).
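The essay defends a selection criterion rather than an algorithm, but the criterion has a natural computational reading: among the interpretive arguments supporting each candidate interpretation, prefer the interpretation whose best supporting argument is least defeasible. Purely as an illustration, here is a minimal sketch in Python; the class and function names are hypothetical, and counting known defeaters is a crude stand-in for the essay's qualitative notion of defeasibility.

```python
from dataclasses import dataclass, field

# Hypothetical toy model of MDIA-style selection. An interpretive argument
# supports one candidate reading of a rule and carries the set of known
# defeaters (counterarguments that would rebut or undercut it).
@dataclass
class InterpretiveArgument:
    interpretation: str                           # the reading this argument supports
    defeaters: set = field(default_factory=set)   # known counterarguments

    def defeasibility(self) -> int:
        # Crude proxy: the fewer viable defeaters, the closer the argument
        # is to being "minimally defeasible".
        return len(self.defeaters)

def select_interpretation(arguments: list[InterpretiveArgument]) -> str:
    """Pick the interpretation whose best supporting argument is least defeasible."""
    best: dict[str, int] = {}
    for arg in arguments:
        score = arg.defeasibility()
        if arg.interpretation not in best or score < best[arg.interpretation]:
            best[arg.interpretation] = score
    return min(best, key=best.get)

# Example: two readings of the classic open-textured rule "no vehicles in the park".
candidates = [
    InterpretiveArgument("bans all wheeled conveyances",
                         {"absurd for bicycles", "absurd for ambulances"}),
    InterpretiveArgument("bans motorized transport",
                         {"absurd for ambulances"}),
]
print(select_interpretation(candidates))  # -> "bans motorized transport"
```

In the essay's terms, the hard work lies in generating and weighing the interpretive arguments themselves; the sketch only fixes the final selection step, and a faithful implementation would compare argument quality rather than count defeaters.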
Related papers
- Developing a Grounded View of AI [26.688384331221343]
The paper examines the behavior of artificial intelligence from an engineering point of view to clarify its nature and limits. It proposes a methodology that makes it possible and practical to identify distinctions in the behavior of AI models across three types of decisions.
arXiv Detail & Related papers (2025-11-18T00:39:52Z)
- Statutory Construction and Interpretation for Artificial Intelligence [19.65776192762091]
We show how different interpretations of the same rule can lead to inconsistent or unstable model behavior. We propose a computational framework that mirrors two legal mechanisms. Our approach offers a first step toward systematically managing interpretive ambiguity.
arXiv Detail & Related papers (2025-09-01T07:10:22Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act) using insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- On the consistent reasoning paradox of intelligence and optimal trust in AI: The power of 'I don't know' [79.69412622010249]
Consistent reasoning, which lies at the core of human intelligence, is the ability to handle tasks that are equivalent.
The consistent reasoning paradox (CRP) asserts that consistent reasoning implies fallibility -- in particular, human-like intelligence in AI necessarily comes with human-like fallibility.
arXiv Detail & Related papers (2024-08-05T10:06:53Z)
- Can LLMs Reason with Rules? Logic Scaffolding for Stress-Testing and Improving LLMs [87.34281749422756]
Large language models (LLMs) have achieved impressive human-like performance across various reasoning tasks.
However, their mastery of underlying inferential rules still falls short of human capabilities.
We propose a logic scaffolding inferential rule generation framework to construct an inferential rule base, ULogic.
arXiv Detail & Related papers (2024-02-18T03:38:51Z)
- Le Nozze di Giustizia. Interactions between Artificial Intelligence, Law, Logic, Language and Computation with some case studies in Traffic Regulations and Health Care [0.0]
An important aim of this paper is to convey some basics of mathematical logic to the legal community working with Artificial Intelligence.
After analysing what AI is, we restrict our scope to rule-based AI, leaving Neural Networks and Machine Learning aside.
We will see how mathematical logic interacts with legal rule-based AI practice.
arXiv Detail & Related papers (2024-02-09T15:43:31Z)
- Social, Legal, Ethical, Empathetic, and Cultural Rules: Compilation and Reasoning (Extended Version) [8.425874385897831]
SLEEC (social, legal, ethical, empathetic, or cultural) rules aim to facilitate the formulation, verification, and enforcement of rules AI-based and autonomous systems should obey.
To enable their effective use in AI systems, it is necessary to translate these rules systematically into a formal language that supports automated reasoning.
In this study, we first conduct a linguistic analysis of the SLEEC rules pattern, which justifies the translation of SLEEC rules into classical logic.
arXiv Detail & Related papers (2023-12-15T11:23:49Z)
- Abstracting Concept-Changing Rules for Solving Raven's Progressive Matrix Problems [54.26307134687171]
Raven's Progressive Matrices (RPM) is a classic test of abstract reasoning in machine intelligence, in which the solver selects an answer from a set of candidates.
Recent studies suggest that solving RPM in an answer-generation way boosts a more in-depth understanding of rules.
We propose a deep latent variable model for Concept-changing Rule ABstraction (CRAB) by learning interpretable concepts and parsing concept-changing rules in the latent space.
arXiv Detail & Related papers (2023-07-15T07:16:38Z)
- When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment [96.77970239683475]
AI systems need to be able to understand, interpret and predict human moral judgments and decisions.
A central challenge for AI safety is capturing the flexibility of the human moral mind.
We present a novel challenge set consisting of rule-breaking question answering.
arXiv Detail & Related papers (2022-10-04T09:04:27Z)
- Automating Defeasible Reasoning in Law [0.0]
We study defeasible reasoning in rule-based systems, in particular about legal norms and contracts.
We identify rule modifiers that specify how rules interact and how they can be overridden.
We then define rule transformations that eliminate these modifiers, leading to a translation of rules to formulas.
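As a purely illustrative sketch of the idea in the blurb above (not the paper's actual transformation, whose target is logical formulas), one can compile away an override modifier by letting each defeasible rule fire only when no higher-priority conflicting rule also fires. The rule set, priorities, and example facts below are hypothetical.

```python
# Hypothetical toy rules: a general norm and an exception that overrides it.
rules = [
    {"name": "r1", "if": {"vehicle"}, "then": "prohibited", "priority": 1},
    {"name": "r2", "if": {"vehicle", "ambulance"}, "then": "permitted", "priority": 2},
]

def conclusions(facts: set) -> set:
    # A rule is overridden when a conflicting rule of strictly higher
    # priority also has all of its conditions satisfied by the facts.
    out = set()
    for r in rules:
        if not r["if"] <= facts:          # rule's conditions not met
            continue
        overridden = any(
            s["priority"] > r["priority"]
            and s["then"] != r["then"]
            and s["if"] <= facts
            for s in rules
        )
        if not overridden:
            out.add(r["then"])
    return out

print(conclusions({"vehicle"}))               # -> {'prohibited'}
print(conclusions({"vehicle", "ambulance"}))  # -> {'permitted'}
```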
arXiv Detail & Related papers (2022-05-15T17:14:15Z)
- Learning Symbolic Rules for Reasoning in Quasi-Natural Language [74.96601852906328]
We build a rule-based system that can reason with natural language input but without the manual construction of rules.
We propose MetaQNL, a "Quasi-Natural" language that can express both formal logic and natural language sentences.
Our approach achieves state-of-the-art accuracy on multiple reasoning benchmarks.
arXiv Detail & Related papers (2021-11-23T17:49:00Z)
- Morality, Machines and the Interpretation Problem: A value-based, Wittgensteinian approach to building Moral Agents [0.0]
We argue that the attempt to build morality into machines is subject to what we call the Interpretation problem.
We argue that any rule we give the machine is open to infinite interpretation in ways that we might morally disapprove of.
arXiv Detail & Related papers (2021-03-03T22:34:01Z)