A Defeasible Deontic Calculus for Resolving Norm Conflicts
- URL: http://arxiv.org/abs/2407.04869v1
- Date: Fri, 5 Jul 2024 21:16:16 GMT
- Authors: Taylor Olson, Roberto Salas-Damian, Kenneth D. Forbus
- Abstract summary: We contribute a defeasible deontic calculus with inheritance and prove that it resolves norm conflicts.
We also reveal a common resolution strategy as a red herring.
This paper thus contributes a theoretically justified axiomatization of norm conflict detection and resolution.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When deciding how to act, we must consider other agents' norms and values. However, our norms are ever-evolving. We often add exceptions or change our minds, and thus norms can conflict over time. Therefore, to maintain an accurate mental model of others' norms, and thus to avoid social friction, such conflicts must be detected and resolved quickly. Formalizing this process has been the focus of various deontic logics and normative multi-agent systems. We aim to bridge the gap between these two fields here. We contribute a defeasible deontic calculus with inheritance and prove that it resolves norm conflicts. Through this analysis, we also reveal a common resolution strategy as a red herring. This paper thus contributes a theoretically justified axiomatization of norm conflict detection and resolution.
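The abstract's core scenario, a general norm that later acquires a conflicting exception, can be illustrated with a minimal sketch. This is not the paper's calculus: the specificity-based defeat rule and the names `Norm`, `applicable`, and `resolve`, along with the example norms, are all illustrative assumptions.

```python
# Minimal sketch of defeasible norm-conflict resolution via specificity:
# when two applicable norms assign conflicting deontic statuses to the
# same action, the norm with the strictly more specific antecedent wins.
from dataclasses import dataclass

@dataclass(frozen=True)
class Norm:
    antecedent: frozenset  # conditions under which the norm applies
    action: str
    status: str            # "obligatory" or "forbidden"

def applicable(norm, situation):
    """A norm applies when all of its conditions hold in the situation."""
    return norm.antecedent <= situation

def resolve(norms, situation, action):
    """Deontic status of `action`: more specific norms defeat general ones."""
    live = [n for n in norms if applicable(n, situation) and n.action == action]
    # A norm is undefeated if no conflicting norm has a strictly more
    # specific (superset) antecedent.
    undefeated = [
        n for n in live
        if not any(m.status != n.status and n.antecedent < m.antecedent
                   for m in live)
    ]
    statuses = {n.status for n in undefeated}
    return statuses.pop() if len(statuses) == 1 else "conflict"

general = Norm(frozenset({"borrowed"}), "return_item", "obligatory")
exception = Norm(frozenset({"borrowed", "owner_enraged"}),
                 "return_item", "forbidden")

# With only the general norm applicable, returning the item is obligatory;
# once the exception's extra condition holds, it defeats the general norm.
print(resolve([general, exception], {"borrowed"}, "return_item"))
print(resolve([general, exception],
              {"borrowed", "owner_enraged"}, "return_item"))
```

The strict-subset test is one simple way to encode "exception beats rule"; the paper's calculus with inheritance is richer than this toy ordering.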
Related papers
- Policy-Adaptable Methods For Resolving Normative Conflicts Through Argumentation and Graph Colouring
In a multi-agent system, one may choose to govern the behaviour of an agent by imposing norms.
However, imposing multiple norms on one or more agents may result in situations where these norms conflict over how the agent should behave.
We introduce a new method for resolving normative conflicts through argumentation and graph colouring.
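As a rough illustration of the graph-colouring idea (not that paper's actual method), norms can be modelled as vertices and pairwise conflicts as edges, so each colour class of a proper colouring is a mutually conflict-free set of norms that could be imposed together. The function name and the toy norms below are hypothetical.

```python
# Greedy graph colouring over a hypothetical norm-conflict graph:
# vertices are norms, edges are conflicts, and norms sharing a colour
# are mutually conflict-free.
def greedy_colouring(vertices, edges):
    neighbours = {v: set() for v in vertices}
    for a, b in edges:
        neighbours[a].add(b)
        neighbours[b].add(a)
    colour = {}
    for v in vertices:
        used = {colour[n] for n in neighbours[v] if n in colour}
        # Assign the smallest colour not used by an already-coloured neighbour.
        colour[v] = next(c for c in range(len(vertices)) if c not in used)
    return colour

norms = ["keep_promises", "protect_privacy", "report_crimes", "obey_orders"]
conflicts = [("protect_privacy", "report_crimes"),
             ("keep_promises", "report_crimes")]
colours = greedy_colouring(norms, conflicts)
# "report_crimes" lands in its own colour class, separate from the
# two norms it conflicts with.
```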
arXiv Detail & Related papers (2025-01-21T00:32:49Z)
- Are language models rational? The case of coherence norms and belief revision
We consider logical coherence norms as well as coherence norms tied to the strength of belief in language models.
We argue that rational norms tied to coherence do apply to some language models, but not to others.
arXiv Detail & Related papers (2024-06-05T16:36:21Z)
- Strengthening Consistency Results in Modal Logic
A fundamental question in modal logic is whether a given theory is consistent, but consistent with what?
A typical way to address this question identifies a choice of background knowledge axioms (say, S4, D, etc.) and then shows the assumptions codified by the theory in question to be consistent with those background axioms.
This paper introduces generic theories for propositional modal logic to address consistency results in a more robust way.
arXiv Detail & Related papers (2023-07-11T07:05:37Z)
- On Regularization and Inference with Label Constraints
We compare two strategies for encoding label constraints in a machine learning pipeline, regularization with constraints and constrained inference.
For regularization, we show that it narrows the generalization gap by precluding models that are inconsistent with the constraints.
For constrained inference, we show that it reduces the population risk by correcting a model's violation, and hence turns the violation into an advantage.
arXiv Detail & Related papers (2023-07-08T03:39:22Z)
- On the Importance of Gradient Norm in PAC-Bayesian Bounds
We propose a new generalization bound that exploits the contractivity of the log-Sobolev inequalities.
We empirically analyze the effect of this new loss-gradient norm term on different neural architectures.
arXiv Detail & Related papers (2022-10-12T12:49:20Z)
- Deontic Meta-Rules
This work extends a defeasible rule-based logical framework by considering the deontic aspect.
The resulting logic will not just be able to model policies but also tackle well-known aspects that occur in numerous legal systems.
arXiv Detail & Related papers (2022-09-23T07:48:29Z)
- Fairness and robustness in anti-causal prediction
Robustness to distribution shift and fairness have independently emerged as two important desiderata required of machine learning models.
While these two desiderata seem related, the connection between them is often unclear in practice.
By taking an anti-causal perspective, we draw explicit connections between a common fairness criterion, separation, and a common notion of robustness.
arXiv Detail & Related papers (2022-09-20T02:41:17Z)
- Normative Disagreement as a Challenge for Cooperative AI
We argue that typical cooperation-inducing learning algorithms fail to cooperate in bargaining problems.
We develop a class of norm-adaptive policies and show in experiments that these significantly increase cooperation.
arXiv Detail & Related papers (2021-11-27T11:37:42Z)
- Fundamental Limits and Tradeoffs in Invariant Representation Learning
Many machine learning applications involve learning representations that achieve two competing goals.
A minimax game-theoretic formulation represents a fundamental tradeoff between accuracy and invariance.
We provide an information-theoretic analysis of this general and important problem under both classification and regression settings.
arXiv Detail & Related papers (2020-12-19T15:24:04Z)
- Diverse Rule Sets
Rule-based systems are experiencing a renaissance owing to their intuitive if-then representation.
We propose a novel approach of inferring diverse rule sets, by optimizing small overlap among decision rules.
We then devise an efficient randomized algorithm, which samples rules that are highly discriminative and have small overlap.
arXiv Detail & Related papers (2020-06-17T14:15:25Z)
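The rule-sampling idea in the last entry can be sketched as a simpler greedy variant: repeatedly pick the rule whose discriminative quality, minus a penalty for overlap with rules already chosen, is highest. This is a stand-in for that paper's randomized algorithm, not a reproduction of it; `select_diverse_rules`, the toy predicates, and the quality scores are illustrative assumptions.

```python
# Greedy selection of discriminative, low-overlap rules (illustrative).
def overlap(r1, r2, examples):
    """Fraction of examples covered by both rules."""
    both = sum(1 for x in examples if r1(x) and r2(x))
    return both / max(1, len(examples))

def select_diverse_rules(rules, quality, examples, k, lam=1.0):
    """Pick up to k rules maximizing quality minus lam * total overlap."""
    chosen, candidates = [], list(rules)
    while candidates and len(chosen) < k:
        best = max(candidates,
                   key=lambda r: quality[r]
                   - lam * sum(overlap(r, c, examples) for c in chosen))
        chosen.append(best)
        candidates.remove(best)
    return chosen

examples = list(range(10))
r_even = lambda x: x % 2 == 0
r_even2 = lambda x: x % 2 == 0   # near-duplicate rule: heavy overlap
r_small = lambda x: x < 5
quality = {r_even: 0.9, r_even2: 0.85, r_small: 0.8}

picked = select_diverse_rules([r_even, r_even2, r_small], quality,
                              examples, k=2)
# The duplicate rule is skipped despite its high quality, because its
# overlap penalty against the already-chosen r_even outweighs it.
```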
This list is automatically generated from the titles and abstracts of the papers in this site.