AR-LSAT: Investigating Analytical Reasoning of Text
- URL: http://arxiv.org/abs/2104.06598v2
- Date: Thu, 15 Apr 2021 02:21:45 GMT
- Title: AR-LSAT: Investigating Analytical Reasoning of Text
- Authors: Wanjun Zhong, Siyuan Wang, Duyu Tang, Zenan Xu, Daya Guo, Jiahai Wang,
Jian Yin, Ming Zhou, Nan Duan
- Abstract summary: We study the challenge of analytical reasoning of text and introduce a new dataset consisting of questions from the Law School Admission Test from 1991 to 2016.
We analyze what knowledge, understanding, and reasoning abilities are required to do well on this task.
- Score: 57.1542673852013
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Analytical reasoning is an essential and challenging task that requires a
system to analyze a scenario involving a set of particular circumstances and
perform reasoning over it to make conclusions. In this paper, we study the
challenge of analytical reasoning of text and introduce a new dataset
consisting of questions from the Law School Admission Test from 1991 to 2016.
We analyze what knowledge, understanding, and reasoning abilities are required
to do well on this task. Furthermore, to address this reasoning challenge, we
design two different baselines: (1) a Transformer-based method which leverages
the state-of-the-art pre-trained language models and (2) Analytical Reasoning
Machine (ARM), a logical-level reasoning framework extracting symbolic
knowledge (e.g., participants, facts, logical functions) to deduce legitimate
solutions. In our experiments, we find that the Transformer-based models
struggle to solve this task, with performance close to random guessing, while
ARM achieves better performance by leveraging symbolic knowledge and
interpretable reasoning steps. Results show that both methods still lag far
behind human performance, leaving ample room for future research.
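The abstract describes ARM as extracting symbolic knowledge (participants, facts, logical functions) and then deducing which answers are legitimate. As a minimal illustration of this style of symbolic deduction (this is not the authors' ARM implementation; the puzzle, names, and constraints below are invented for the example), one can encode an LSAT-style ordering game as predicates and enumerate the assignments that satisfy every constraint:

```python
# Toy sketch of symbolic analytical reasoning in the spirit of ARM:
# extract participants and constraints, then enumerate legitimate solutions.
# The puzzle itself is hypothetical, chosen only to illustrate the idea.
from itertools import permutations

participants = ["A", "B", "C"]  # three people to place in slots 0-2

# Constraints as predicates over a candidate ordering (a tuple of names):
# (1) A must come before B; (2) C must not occupy the middle slot.
constraints = [
    lambda order: order.index("A") < order.index("B"),
    lambda order: order[1] != "C",
]

def legitimate_solutions(names, rules):
    """Return every ordering of `names` that satisfies all `rules`."""
    return [p for p in permutations(names) if all(r(p) for r in rules)]

print(legitimate_solutions(participants, constraints))
# -> [('A', 'B', 'C'), ('C', 'A', 'B')]
```

Exhaustive enumeration only works for tiny scenarios like this one; the point is that once constraints are symbolic, each deduction step is explicit and interpretable, which is the property the abstract credits for ARM's advantage over end-to-end Transformers.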
Related papers
- H-STAR: LLM-driven Hybrid SQL-Text Adaptive Reasoning on Tables [56.73919743039263]
This paper introduces a novel algorithm that integrates both symbolic and semantic (textual) approaches in a two-stage process to address limitations.
Our experiments demonstrate that H-STAR significantly outperforms state-of-the-art methods across three question-answering (QA) and fact-verification datasets.
arXiv Detail & Related papers (2024-06-29T21:24:19Z)
- Generation of Explanations for Logic Reasoning [0.0]
The research is centred on employing GPT-3.5-turbo to automate the analysis of a fortiori arguments.
This thesis makes significant contributions to the fields of artificial intelligence and logical reasoning.
arXiv Detail & Related papers (2023-11-22T15:22:04Z)
- From Heuristic to Analytic: Cognitively Motivated Strategies for Coherent Physical Commonsense Reasoning [66.98861219674039]
Heuristic-Analytic Reasoning (HAR) strategies drastically improve the coherence of rationalizations for model decisions.
Our findings suggest that human-like reasoning strategies can effectively improve the coherence and reliability of PLM reasoning.
arXiv Detail & Related papers (2023-10-24T19:46:04Z)
- Towards LogiGLUE: A Brief Survey and A Benchmark for Analyzing Logical Reasoning Capabilities of Language Models [56.34029644009297]
Large language models (LLMs) have demonstrated the ability to overcome various limitations of formal Knowledge Representation (KR) systems.
LLMs excel most in abductive reasoning, followed by deductive reasoning, while they are least effective at inductive reasoning.
We study single-task training, multi-task training, and a "chain-of-thought" knowledge distillation fine-tuning technique to assess model performance.
arXiv Detail & Related papers (2023-10-02T01:00:50Z)
- MERIt: Meta-Path Guided Contrastive Learning for Logical Reasoning [63.50909998372667]
We propose MERIt, a MEta-path guided contrastive learning method for logical ReasonIng of text.
Two novel strategies serve as indispensable components of our method.
arXiv Detail & Related papers (2022-03-01T11:13:00Z)
- From LSAT: The Progress and Challenges of Complex Reasoning [56.07448735248901]
We study the three challenging and domain-general tasks of the Law School Admission Test (LSAT): analytical reasoning, logical reasoning, and reading comprehension.
We propose a hybrid reasoning system to integrate these three tasks and achieve impressive overall performance on the LSAT tests.
arXiv Detail & Related papers (2021-08-02T05:43:03Z)
- Social Commonsense Reasoning with Multi-Head Knowledge Attention [24.70946979449572]
Social Commonsense Reasoning requires understanding of text, knowledge about social events and their pragmatic implications, as well as commonsense reasoning skills.
We propose a novel multi-head knowledge attention model that encodes semi-structured commonsense inference rules and learns to incorporate them in a transformer-based reasoning cell.
arXiv Detail & Related papers (2020-10-12T10:24:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.