Automating Defeasible Reasoning in Law
- URL: http://arxiv.org/abs/2205.07335v1
- Date: Sun, 15 May 2022 17:14:15 GMT
- Title: Automating Defeasible Reasoning in Law
- Authors: How Khang Lim, Avishkar Mahajan, Martin Strecker, Meng Weng Wong
- Abstract summary: We study defeasible reasoning in rule-based systems, in particular about legal norms and contracts.
We identify rule modifiers that specify how rules interact and how they can be overridden.
We then define rule transformations that eliminate these modifiers, leading to a translation of rules to formulas.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The paper studies defeasible reasoning in rule-based systems, in particular
about legal norms and contracts. We identify rule modifiers that specify how
rules interact and how they can be overridden. We then define rule
transformations that eliminate these modifiers, ultimately leading to a
translation of rules to formulas. For reasoning with and about rules, we
contrast two approaches: one in a classical logic with SMT solvers as proof
engines, the other in a non-monotonic logic with Answer Set Programming solvers.
Related papers
- Symbolic Working Memory Enhances Language Models for Complex Rule Application [87.34281749422756]
Large Language Models (LLMs) have shown remarkable reasoning performance but struggle with multi-step deductive reasoning.
We propose augmenting LLMs with external working memory and introduce a neurosymbolic framework for rule application.
Our framework iteratively performs symbolic rule grounding and LLM-based rule implementation.
arXiv Detail & Related papers (2024-08-24T19:11:54Z)
- Logicbreaks: A Framework for Understanding Subversion of Rule-based Inference [20.057611113206324]
We study how to subvert large language models (LLMs) from following prompt-specified rules.
We prove that although LLMs can faithfully follow such rules, maliciously crafted prompts can mislead even idealized, theoretically constructed models.
arXiv Detail & Related papers (2024-06-21T19:18:16Z)
- Case-Based or Rule-Based: How Do Transformers Do the Math? [24.17722967327729]
We study whether transformers use rule-based or case-based reasoning for math problems.
We provide explicit rules in the input and then instruct transformers to recite and follow the rules step by step.
The significant improvement demonstrates that teaching LLMs to use rules explicitly helps them learn rule-based reasoning and generalize better to longer problems.
arXiv Detail & Related papers (2024-02-27T17:41:58Z)
- Can LLMs Reason with Rules? Logic Scaffolding for Stress-Testing and Improving LLMs [87.34281749422756]
Large language models (LLMs) have achieved impressive human-like performance across various reasoning tasks.
However, their mastery of underlying inferential rules still falls short of human capabilities.
We propose a logic scaffolding inferential rule generation framework to construct an inferential rule base, ULogic.
arXiv Detail & Related papers (2024-02-18T03:38:51Z)
- Chain of Logic: Rule-Based Reasoning with Large Language Models [10.017812995997753]
Rule-based reasoning enables us to draw conclusions by accurately applying a rule to a set of facts.
We introduce a new prompting method, Chain of Logic, which elicits rule-based reasoning through decomposition and recomposition.
We evaluate Chain of Logic across eight rule-based reasoning tasks involving three distinct compositional rules from the LegalBench benchmark.
arXiv Detail & Related papers (2024-02-16T01:54:43Z)
- Large Language Models can Learn Rules [106.40747309894236]
We present Hypotheses-to-Theories (HtT), a framework that learns a rule library for reasoning with large language models (LLMs).
Experiments on relational reasoning, numerical reasoning and concept learning problems show that HtT improves existing prompting methods.
The learned rules are also transferable to different models and to different forms of the same problem.
arXiv Detail & Related papers (2023-10-10T23:07:01Z)
- ChatRule: Mining Logical Rules with Large Language Models for Knowledge Graph Reasoning [107.61997887260056]
We propose a novel framework, ChatRule, unleashing the power of large language models for mining logical rules over knowledge graphs.
Specifically, the framework is initiated with an LLM-based rule generator, leveraging both the semantic and structural information of KGs.
To refine the generated rules, a rule ranking module estimates the rule quality by incorporating facts from existing KGs.
arXiv Detail & Related papers (2023-09-04T11:38:02Z)
- Deontic Meta-Rules [2.241042010144441]
This work extends a defeasible-logic framework by considering the deontic aspect.
The resulting logic can not only model policies but also capture well-known aspects that occur in numerous legal systems.
arXiv Detail & Related papers (2022-09-23T07:48:29Z)
- Learning Symbolic Rules for Reasoning in Quasi-Natural Language [74.96601852906328]
We build a rule-based system that can reason with natural language input but without the manual construction of rules.
We propose MetaQNL, a "Quasi-Natural" language that can express both formal logic and natural language sentences.
Our approach achieves state-of-the-art accuracy on multiple reasoning benchmarks.
arXiv Detail & Related papers (2021-11-23T17:49:00Z)
- Open Rule Induction [2.1248439796866228]
Language model (LM)-based rule generation methods have been proposed to enhance the expressive power of rules.
We argue that, while KB-based methods induce rules by discovering data commonalities, current LM-based methods are "learning rules from rules".
In this paper, we propose the open rule induction problem, which aims to induce open rules utilizing the knowledge in LMs.
arXiv Detail & Related papers (2021-10-26T11:20:24Z)
- RNNLogic: Learning Logic Rules for Reasoning on Knowledge Graphs [91.71504177786792]
This paper studies learning logic rules for reasoning on knowledge graphs.
Logic rules provide interpretable explanations when used for prediction and can generalize to other tasks.
Existing methods suffer either from searching in a large search space or from ineffective optimization due to sparse rewards.
arXiv Detail & Related papers (2020-10-08T14:47:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.