AR-LSAT: Investigating Analytical Reasoning of Text
- URL: http://arxiv.org/abs/2104.06598v2
- Date: Thu, 15 Apr 2021 02:21:45 GMT
- Title: AR-LSAT: Investigating Analytical Reasoning of Text
- Authors: Wanjun Zhong, Siyuan Wang, Duyu Tang, Zenan Xu, Daya Guo, Jiahai Wang,
Jian Yin, Ming Zhou, Nan Duan
- Abstract summary: We study the challenge of analytical reasoning of text and introduce a new dataset consisting of questions from the Law School Admission Test from 1991 to 2016.
We analyze what knowledge understanding and reasoning abilities are required to do well on this task.
- Score: 57.1542673852013
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Analytical reasoning is an essential and challenging task that requires a
system to analyze a scenario involving a set of particular circumstances and
perform reasoning over it to draw conclusions. In this paper, we study the
challenge of analytical reasoning of text and introduce a new dataset
consisting of questions from the Law School Admission Test from 1991 to 2016.
We analyze what knowledge understanding and reasoning abilities are required to
do well on this task. Furthermore, to address this reasoning challenge, we
design two different baselines: (1) a Transformer-based method which leverages
the state-of-the-art pre-trained language models and (2) Analytical Reasoning
Machine (ARM), a logical-level reasoning framework extracting symbolic
knowledge (e.g., participants, facts, logical functions) to deduce legitimate
solutions. In our experiments, we find that the Transformer-based models
struggle to solve this task, with performance close to random guessing, while
ARM achieves better performance by leveraging symbolic knowledge and
interpretable reasoning steps. Results show that both methods still lag far
behind human performance, leaving ample room for future research.
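The deduction step that ARM performs over extracted symbolic knowledge can be pictured as constraint satisfaction: enumerate candidate assignments of participants and keep those consistent with every extracted rule. The mini-scenario, the rule functions, and the `legitimate_assignments` helper below are hypothetical illustrations written for this summary, not the authors' implementation; in ARM the participants, facts, and logical functions are extracted from the passage automatically.

```python
from itertools import permutations

# Hypothetical mini "logic game" in the style of an AR-LSAT scenario:
# five participants must be ordered into five slots subject to extracted rules.
participants = ["F", "G", "H", "J", "K"]

# Each rule is a logical function over a candidate ordering (slot -> participant).
rules = [
    lambda order: order.index("F") < order.index("G"),             # F comes before G
    lambda order: abs(order.index("H") - order.index("J")) == 1,   # H and J are adjacent
    lambda order: order[0] != "K",                                 # K is not first
]

def legitimate_assignments(participants, rules):
    """Enumerate every ordering that satisfies all extracted rules."""
    return [order for order in permutations(participants)
            if all(rule(order) for rule in rules)]

solutions = legitimate_assignments(participants, rules)

# A question such as "Which one of the following could be the order?" is then
# answered by checking each answer option against the deduced legitimate solutions.
option = ("F", "H", "J", "G", "K")
print(option in set(solutions))  # True if the option is a legitimate solution
```

Checking each answer option against the deduced set of legitimate solutions is what makes the intermediate reasoning steps inspectable, in contrast to an end-to-end Transformer classifier.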
Related papers
- Inference-Time Computations for LLM Reasoning and Planning: A Benchmark and Insights [49.42133807824413]
We examine the reasoning and planning capabilities of large language models (LLMs) in solving complex tasks.
Recent advances in inference-time techniques demonstrate the potential to enhance LLM reasoning without additional training.
OpenAI's o1 model shows promising performance through its novel use of multi-step reasoning and verification.
arXiv Detail & Related papers (2025-02-18T04:11:29Z) - LogiDynamics: Unraveling the Dynamics of Logical Inference in Large Language Model Reasoning [49.58786377307728]
This paper adopts an exploratory approach by introducing a controlled evaluation environment for analogical reasoning.
We analyze the comparative dynamics of inductive, abductive, and deductive inference pipelines.
We investigate advanced paradigms such as hypothesis selection, verification, and refinement, revealing their potential to scale up logical inference.
arXiv Detail & Related papers (2025-02-16T15:54:53Z) - How Transformers Solve Propositional Logic Problems: A Mechanistic Analysis [16.65073455206535]
Large language models (LLMs) have shown amazing performance on tasks that require planning and reasoning.
Motivated by this, we investigate the internal mechanisms that underpin a network's ability to perform complex logical reasoning.
arXiv Detail & Related papers (2024-11-06T18:35:32Z) - Towards LogiGLUE: A Brief Survey and A Benchmark for Analyzing Logical Reasoning Capabilities of Language Models [56.34029644009297]
Large language models (LLMs) have demonstrated the ability to overcome various limitations of formal Knowledge Representation (KR) systems.
LLMs excel most in abductive reasoning, followed by deductive reasoning, while they are least effective at inductive reasoning.
We study single-task training, multi-task training, and "chain-of-thought" knowledge distillation fine-tuning techniques to assess model performance.
arXiv Detail & Related papers (2023-10-02T01:00:50Z) - MERIt: Meta-Path Guided Contrastive Learning for Logical Reasoning [63.50909998372667]
We propose MERIt, a MEta-path guided contrastive learning method for logical ReasonIng of text.
Two novel strategies serve as indispensable components of our method.
arXiv Detail & Related papers (2022-03-01T11:13:00Z) - From LSAT: The Progress and Challenges of Complex Reasoning [56.07448735248901]
We study the three challenging and domain-general tasks of the Law School Admission Test (LSAT), including analytical reasoning, logical reasoning and reading comprehension.
We propose a hybrid reasoning system to integrate these three tasks and achieve impressive overall performance on the LSAT tests.
arXiv Detail & Related papers (2021-08-02T05:43:03Z) - Social Commonsense Reasoning with Multi-Head Knowledge Attention [24.70946979449572]
Social Commonsense Reasoning requires understanding of text, knowledge about social events and their pragmatic implications, as well as commonsense reasoning skills.
We propose a novel multi-head knowledge attention model that encodes semi-structured commonsense inference rules and learns to incorporate them in a transformer-based reasoning cell (an illustrative sketch of this attention pattern follows this list).
arXiv Detail & Related papers (2020-10-12T10:24:40Z)
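As a rough illustration of the multi-head knowledge attention idea referenced in the last entry above, the sketch below lets transformer-encoded context tokens attend over vectors standing in for encoded inference rules. The `KnowledgeAttentionCell` module, its dimensions, and the residual fusion are assumptions made for illustration, not the architecture from that paper.

```python
import torch
import torch.nn as nn

class KnowledgeAttentionCell(nn.Module):
    """Hedged sketch: context tokens attend over encoded inference-rule vectors."""

    def __init__(self, hidden_size: int = 256, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(hidden_size)

    def forward(self, context: torch.Tensor, rule_encodings: torch.Tensor) -> torch.Tensor:
        # context:        (batch, seq_len, hidden)  -- transformer-encoded text
        # rule_encodings: (batch, n_rules, hidden)  -- encoded commonsense rules
        attended, _ = self.attn(query=context, key=rule_encodings, value=rule_encodings)
        return self.norm(context + attended)  # residual fusion of rule knowledge

# Toy usage with random tensors standing in for real encodings.
cell = KnowledgeAttentionCell()
context = torch.randn(2, 16, 256)
rules = torch.randn(2, 8, 256)
print(cell(context, rules).shape)  # torch.Size([2, 16, 256])
```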
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.