Deontic Paradoxes in ASP with Weak Constraints
- URL: http://arxiv.org/abs/2308.15870v1
- Date: Wed, 30 Aug 2023 08:56:54 GMT
- Title: Deontic Paradoxes in ASP with Weak Constraints
- Authors: Christian Hatschka (TU Vienna), Agata Ciabattoni (TU Vienna), Thomas
Eiter (TU Vienna)
- Abstract summary: We show how to encode and resolve several well-known deontic paradoxes utilizing weak constraints.
We present a methodology for translating normative systems into ASP with weak constraints.
This methodology is applied to "ethical" versions of Pac-man, where we obtain performance comparable to that of related works, but with ethically preferable results.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rise of powerful AI technology for a range of applications that are
sensitive to legal, social, and ethical norms demands decision-making support
in the presence of norms and regulations. Normative reasoning is the realm of
deontic logics, which are challenged by well-known benchmark problems (deontic
paradoxes) and lack efficient computational tools. In this paper, we use
Answer Set Programming (ASP) to address these shortcomings and showcase how
to encode and resolve several well-known deontic paradoxes utilizing weak
constraints. By abstracting and generalizing this encoding, we present a
methodology for translating normative systems into ASP with weak constraints.
This methodology is applied to "ethical" versions of Pac-man, where we obtain
performance comparable to that of related works, but with ethically preferable results.
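The core mechanism can be illustrated briefly. In ASP, a hard constraint eliminates answer sets outright, while a weak constraint (written in clingo syntax as, e.g., `:~ not help. [2@1]`) only attaches a penalty, so the solver returns the answer set with minimal total penalty. The following Python sketch mimics that behavior on a hypothetical toy norm conflict; the scenario and atom names are invented for illustration and are not the paper's encoding:

```python
# Toy sketch of weak-constraint semantics (hypothetical scenario, not the
# paper's encoding): two propositions the agent may make true.
from itertools import product

atoms = ["help", "tell_truth"]

# Hard constraint (assumed for the example): the two obligations conflict,
# so no admissible world satisfies both.
def admissible(world):
    return not (world["help"] and world["tell_truth"])

# Weak constraints: (violation condition, penalty weight).
weak = [
    (lambda w: not w["help"], 2),        # obligation to help, weight 2
    (lambda w: not w["tell_truth"], 1),  # obligation to tell the truth, weight 1
]

def cost(world):
    """Total penalty of a world, summed over violated weak constraints."""
    return sum(weight for violated, weight in weak if violated(world))

# Enumerate candidate worlds, keep the admissible ones, minimize penalty --
# analogous to how an ASP solver selects optimal answer sets.
worlds = [dict(zip(atoms, vals)) for vals in product([True, False], repeat=len(atoms))]
best = min((w for w in worlds if admissible(w)), key=cost)
print(best, cost(best))  # -> {'help': True, 'tell_truth': False} 1
```

Because violating the weaker obligation costs less, the optimal "answer set" keeps `help` and sacrifices `tell_truth`, which is the kind of graded conflict resolution that hard constraints alone cannot express.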
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Ethical and Scalable Automation: A Governance and Compliance Framework for Business Applications [0.0]
This paper introduces a framework for ensuring that AI is ethical, controllable, viable, and desirable.
Different case studies validate this framework by integrating AI in both academic and practical environments.
arXiv Detail & Related papers (2024-09-25T12:39:28Z)
- Normative Requirements Operationalization with Large Language Models [3.456725053685842]
Normative non-functional requirements specify constraints that a system must observe in order to avoid violations of social, legal, ethical, empathetic, and cultural norms.
Recent research has tackled this challenge using a domain-specific language to specify normative requirements.
We propose a complementary approach that uses Large Language Models to extract semantic relationships between abstract representations of system capabilities.
arXiv Detail & Related papers (2024-04-18T17:01:34Z)
- Automated legal reasoning with discretion to act using s(LAW) [0.294944680995069]
Ethical and legal concerns make it necessary for automated reasoners to justify their conclusions in human-understandable terms.
We propose to use s(CASP), a top-down execution model for predicate ASP, to model vague concepts following a set of patterns.
We have implemented a framework, called s(LAW), to model, reason, and justify the applicable legislation and validate it by translating (and benchmarking) a representative use case.
arXiv Detail & Related papers (2024-01-25T21:11:08Z)
- Social, Legal, Ethical, Empathetic, and Cultural Rules: Compilation and Reasoning (Extended Version) [8.425874385897831]
SLEEC (social, legal, ethical, empathetic, or cultural) rules aim to facilitate the formulation, verification, and enforcement of rules that AI-based and autonomous systems should obey.
To enable their effective use in AI systems, it is necessary to translate these rules systematically into a formal language that supports automated reasoning.
In this study, we first conduct a linguistic analysis of the SLEEC rules pattern, which justifies the translation of SLEEC rules into classical logic.
arXiv Detail & Related papers (2023-12-15T11:23:49Z)
- Large Language Models as Analogical Reasoners [155.9617224350088]
Chain-of-thought (CoT) prompting for language models demonstrates impressive performance across reasoning tasks.
We introduce a new prompting approach, analogical prompting, designed to automatically guide the reasoning process of large language models.
arXiv Detail & Related papers (2023-10-03T00:57:26Z)
- Toward Unified Controllable Text Generation via Regular Expression Instruction [56.68753672187368]
Our paper introduces Regular Expression Instruction (REI), which utilizes an instruction-based mechanism to fully exploit regular expressions' advantages to uniformly model diverse constraints.
Our method only requires fine-tuning on medium-scale language models or few-shot, in-context learning on large language models, and requires no further adjustment when applied to various constraint combinations.
arXiv Detail & Related papers (2023-09-19T09:05:14Z)
- On Regularization and Inference with Label Constraints [62.60903248392479]
We compare two strategies for encoding label constraints in a machine learning pipeline, regularization with constraints and constrained inference.
For regularization, we show that it narrows the generalization gap by precluding models that are inconsistent with the constraints.
For constrained inference, we show that it reduces the population risk by correcting a model's violation, and hence turns the violation into an advantage.
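The two strategies can be sketched in miniature. The toy setup below, with a hypothetical "exactly one positive label" constraint, is an illustration of the general idea rather than that paper's formal framework:

```python
# Two ways to use a label constraint ("exactly one positive label"):
#   1) regularization: penalize constraint violations in the training loss;
#   2) constrained inference: search only over outputs satisfying the constraint.

def violation(output):
    """Degree to which a 0/1 labeling violates 'exactly one positive label'."""
    return abs(sum(output) - 1)

def regularized_loss(base_loss, output, lam=1.0):
    """Training-time strategy: add a weighted violation penalty to the loss."""
    return base_loss + lam * violation(output)

def constrained_inference(scores):
    """Inference-time strategy: highest-scoring labeling among feasible ones.

    Here the feasible set is just the one-hot labelings.
    """
    n = len(scores)
    candidates = [[1 if i == j else 0 for i in range(n)] for j in range(n)]
    return max(candidates, key=lambda out: sum(s * o for s, o in zip(scores, out)))

scores = [0.2, 0.9, 0.4]
print(constrained_inference(scores))     # -> [0, 1, 0]
print(regularized_loss(0.5, [1, 1, 0]))  # -> 1.5 (base 0.5 + penalty 1.0)
```

Regularization shapes which models are learned, while constrained inference repairs individual predictions; the paper's comparison concerns exactly this division of labor.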
arXiv Detail & Related papers (2023-07-08T03:39:22Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Reinforcement Learning Guided by Provable Normative Compliance [0.0]
Reinforcement learning (RL) has shown promise as a tool for engineering safe, ethical, or legal behaviour in autonomous agents.
We use multi-objective RL (MORL) to balance the ethical objective of avoiding violations with a non-ethical objective.
We show that our approach works for a multiplicity of MORL techniques, and show that it is effective regardless of the magnitude of the punishment we assign.
arXiv Detail & Related papers (2022-03-30T13:10:55Z)
- Probably Approximately Correct Constrained Learning [135.48447120228658]
We develop a generalization theory based on the probably approximately correct (PAC) learning framework.
We show that imposing constraints on a learner does not make a learning problem harder, in the sense that any PAC learnable class remains learnable under constraints.
We analyze the properties of this solution and use it to illustrate how constrained learning can address problems in fair and robust classification.
arXiv Detail & Related papers (2020-06-09T19:59:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the list (including the information it contains) and is not responsible for any consequences of its use.