Factoring Statutory Reasoning as Language Understanding Challenges
- URL: http://arxiv.org/abs/2105.07903v1
- Date: Mon, 17 May 2021 14:33:02 GMT
- Title: Factoring Statutory Reasoning as Language Understanding Challenges
- Authors: Nils Holzenberger and Benjamin Van Durme
- Abstract summary: We decompose statutory reasoning into four types of language-understanding challenge problems.
We introduce concepts and structure found in Prolog programs.
Models for statutory reasoning are shown to benefit from the additional structure.
- Score: 48.13180364616141
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Statutory reasoning is the task of determining whether a legal statute,
stated in natural language, applies to the text description of a case. Prior
work introduced a resource that approached statutory reasoning as a monolithic
textual entailment problem, with neural baselines performing nearly at-chance.
To address this challenge, we decompose statutory reasoning into four types of
language-understanding challenge problems, through the introduction of concepts
and structure found in Prolog programs. Augmenting an existing benchmark, we
provide annotations for the four tasks, and baselines for three of them. Models
for statutory reasoning are shown to benefit from the additional structure,
improving on prior baselines. Further, the decomposition into subtasks
facilitates finer-grained model diagnostics and clearer incremental progress.
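To make the kind of structure concrete, the sketch below (in Python rather than Prolog, with hypothetical predicate, argument, and case names; not the paper's actual annotation schema) shows a statute subsection treated as a predicate with argument slots, and a case as a set of extracted facts that may or may not instantiate those slots.

```python
# Illustrative sketch only: a toy encoding of the Prolog-style structure the
# paper layers onto statutory reasoning. Predicate and argument names are
# hypothetical and do not reproduce the paper's annotation schema.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Predicate:
    """A statute subsection viewed as a predicate with named argument slots."""
    name: str                        # e.g. a subsection identifier
    args: dict[str, Optional[str]]   # slot name -> filler (None if unfilled)


@dataclass
class Case:
    """A natural-language case paired with facts extracted from its text."""
    text: str
    facts: list[Predicate] = field(default_factory=list)


def applies(statute: Predicate, case: Case) -> bool:
    """Toy applicability test: some fact in the case instantiates every
    argument slot the statute's predicate requires."""
    return any(
        fact.name == statute.name
        and all(fact.args.get(slot) is not None for slot in statute.args)
        for fact in case.facts
    )


# Hypothetical usage: a subsection requiring a taxpayer and a tax year.
section = Predicate("s63_taxable_income", {"taxpayer": None, "year": None})
case = Case(
    text="Alice earned $40,000 in 2017.",
    facts=[Predicate("s63_taxable_income", {"taxpayer": "Alice", "year": "2017"})],
)
print(applies(section, case))  # True
```

Decomposing statutory reasoning then amounts to separating steps such as extracting this structure from the statute, locating argument fillers in the case text, and evaluating the resulting query, rather than treating the whole problem as one entailment decision.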
Related papers
- Scaling Synthetic Logical Reasoning Datasets with Context-Sensitive Declarative Grammars [0.6537995248511139]
We present a declarative framework with flexible context-sensitive rules binding multiple languages.
We construct first-order logic problems by selecting up to 32 premises and one hypothesis.
We demonstrate that using semantic constraints during generation and careful English verbalization of predicates enhances logical reasoning without hurting natural English tasks.
arXiv Detail & Related papers (2024-06-16T18:10:49Z)
- Reframing Tax Law Entailment as Analogical Reasoning [38.50170507450238]
We re-frame statutory reasoning as an analogy task, where each instance of the analogy task involves a combination of two instances of statutory reasoning.
This increases the dataset size by two orders of magnitude, and introduces an element of interpretability.
We show that this task is roughly as difficult for natural language processing models as the original task.
arXiv Detail & Related papers (2024-01-12T17:37:07Z)
- LogicAsker: Evaluating and Improving the Logical Reasoning Ability of Large Language Models [63.14196038655506]
We introduce LogicAsker, a novel approach for evaluating and enhancing the logical reasoning capabilities of large language models (LLMs).
Our methodology reveals significant gaps in LLMs' learning of logical rules, with identified reasoning failures ranging from 29% to 90% across different models.
We leverage these findings to construct targeted demonstration examples and fine-tuning data, notably enhancing logical reasoning in models like GPT-4o by up to 5%.
arXiv Detail & Related papers (2024-01-01T13:53:53Z)
- Language Models can be Logical Solvers [99.40649402395725]
We introduce LoGiPT, a novel language model that directly emulates the reasoning processes of logical solvers.
LoGiPT is fine-tuned on a newly constructed instruction-tuning dataset derived from revealing and refining the invisible reasoning process of deductive solvers.
arXiv Detail & Related papers (2023-11-10T16:23:50Z)
- Explainable Verbal Reasoner Plus (EVR+): A Natural Language Reasoning Framework that Supports Diverse Compositional Reasoning [41.99368317059466]
We present Explainable Verbal Reasoner Plus (EVR+), a reasoning framework that enhances language models' compositional reasoning ability.
Our framework supports more diverse types of reasoning such as nested loops and different types of recursion.
Results show that our reasoning framework can enhance the language model's compositional generalization performance on five evaluation tasks.
arXiv Detail & Related papers (2023-04-28T19:27:26Z)
- STREET: A Multi-Task Structured Reasoning and Explanation Benchmark [56.555662318619135]
We introduce a unified multi-task and multi-domain natural language reasoning and explanation benchmark.
We expect models to not only answer questions, but also produce step-by-step structured explanations describing how premises in the question are used to produce intermediate conclusions that can prove the correctness of a certain answer.
arXiv Detail & Related papers (2023-02-13T22:34:02Z)
- Unifying Structure Reasoning and Language Model Pre-training for Complex Reasoning [26.811507121199323]
This paper proposes a unified learning framework that combines explicit structure reasoning and language pre-training to endow PLMs with the structure reasoning skill.
It first identifies several elementary structures within contexts to construct structured queries and performs step-by-step reasoning along the queries to identify the answer entity.
Experimental results on four datasets demonstrate that the proposed model achieves significant improvements in complex reasoning tasks involving diverse structures.
arXiv Detail & Related papers (2023-01-21T08:18:11Z)
- From LSAT: The Progress and Challenges of Complex Reasoning [56.07448735248901]
We study three challenging and domain-general tasks from the Law School Admission Test (LSAT): analytical reasoning, logical reasoning, and reading comprehension.
We propose a hybrid reasoning system to integrate these three tasks and achieve impressive overall performance on the LSAT tests.
arXiv Detail & Related papers (2021-08-02T05:43:03Z)
- Towards Interpretable Reasoning over Paragraph Effects in Situation [126.65672196760345]
We focus on the task of reasoning over paragraph effects in a situation, which requires a model to understand cause and effect.
We propose a sequential approach for this task which explicitly models each step of the reasoning process with neural network modules.
In particular, five reasoning modules are designed and learned in an end-to-end manner, which leads to a more interpretable model.
arXiv Detail & Related papers (2020-10-03T04:03:52Z)
- A Dataset for Statutory Reasoning in Tax Law Entailment and Question Answering [37.66486350122862]
This paper investigates the performance of natural language understanding approaches on statutory reasoning.
We introduce a dataset, together with a legal-domain text corpus.
We contrast this with a hand-constructed Prolog-based system, designed to fully solve the task; the entailment framing is sketched below.
arXiv Detail & Related papers (2020-05-11T16:54:42Z)
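For contrast with the structured view above, here is a minimal sketch of the monolithic entailment framing used by the last entry, the resource the main paper builds on; the field names, claim wording, and label set are hypothetical, and no particular model is assumed.

```python
# Minimal sketch of statutory reasoning posed as a single textual-entailment
# query: statute and case text form the premise, the legal claim the
# hypothesis. Field names and the claim wording are hypothetical.
def as_entailment_instance(statute_text: str, case_text: str, claim: str) -> dict:
    """Bundle statute + case into one premise; the claim is the hypothesis."""
    return {
        "premise": statute_text + "\n" + case_text,
        "hypothesis": claim,
        "labels": ["entailment", "contradiction"],  # applies / does not apply
    }


instance = as_entailment_instance(
    statute_text="Section 2(a)(1): ... (statute stated in natural language)",
    case_text="Alice and Bob were married on April 5th, 2015. ...",
    claim="Section 2(a)(1) applies to Alice for the year 2017.",
)
print(instance["hypothesis"])
```

The near-chance neural baselines reported for this monolithic framing are what motivate the decomposition described in the main abstract.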
This list is automatically generated from the titles and abstracts of the papers on this site.