From LSAT: The Progress and Challenges of Complex Reasoning
- URL: http://arxiv.org/abs/2108.00648v1
- Date: Mon, 2 Aug 2021 05:43:03 GMT
- Title: From LSAT: The Progress and Challenges of Complex Reasoning
- Authors: Siyuan Wang, Zhongkun Liu, Wanjun Zhong, Ming Zhou, Zhongyu Wei,
Zhumin Chen and Nan Duan
- Abstract summary: We study the three challenging and domain-general tasks of the Law School Admission Test (LSAT), including analytical reasoning, logical reasoning and reading comprehension.
We propose a hybrid reasoning system to integrate these three tasks and achieve impressive overall performance on the LSAT tests.
- Score: 56.07448735248901
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Complex reasoning aims to draw a correct inference based on complex rules. As
a hallmark of human intelligence, it involves a degree of explicit reading
comprehension, interpretation of logical knowledge and complex rule
application. In this paper, we take a step forward in complex reasoning by
systematically studying the three challenging and domain-general tasks of the
Law School Admission Test (LSAT), including analytical reasoning, logical
reasoning and reading comprehension. We propose a hybrid reasoning system to
integrate these three tasks and achieve impressive overall performance on the
LSAT tests. The experimental results demonstrate that our system attains a
certain degree of complex reasoning ability, especially in fundamental reading
comprehension and challenging logical reasoning. Further analysis also shows
the effectiveness of combining pre-trained models with task-specific reasoning
modules, and of integrating symbolic knowledge into discrete, interpretable
reasoning steps for complex reasoning. We further shed light on potential
future directions, such as unsupervised symbolic knowledge extraction, model
interpretability, few-shot learning and a comprehensive benchmark for complex
reasoning.
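To make "integrating symbolic knowledge into discrete, interpretable reasoning steps" more concrete, the sketch below shows one way such step-by-step symbolic checking can look for an LSAT-style analytical reasoning (ordering) puzzle. The puzzle, the rule names and the brute-force enumeration are illustrative assumptions only; they are not taken from the paper's hybrid reasoning system.

# Minimal sketch: symbolic rules applied as discrete, interpretable steps
# to an invented LSAT-style ordering puzzle (not the paper's actual system).
from itertools import permutations

ENTITIES = ["F", "G", "H", "J"]  # four speakers to be put in order

# Each rule is a named, human-readable predicate over a candidate ordering.
RULES = [
    ("F speaks before G", lambda o: o.index("F") < o.index("G")),
    ("H speaks immediately after J", lambda o: o.index("H") == o.index("J") + 1),
    ("G does not speak last", lambda o: o.index("G") != len(o) - 1),
]

def check(ordering):
    # Apply each symbolic rule as one discrete reasoning step.
    for name, rule in RULES:
        if not rule(ordering):
            return False, name  # the violated rule explains the rejection
    return True, None

solutions = []
for ordering in permutations(ENTITIES):
    ok, violated = check(ordering)
    if ok:
        solutions.append(ordering)

print(solutions)  # orderings consistent with every rule, here ('F', 'G', 'J', 'H')

Exhaustive enumeration is only for illustration; the point is that each rule application is a discrete, inspectable step rather than an opaque model prediction.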
Related papers
- LogiDynamics: Unraveling the Dynamics of Logical Inference in Large Language Model Reasoning [49.58786377307728]
This paper adopts an exploratory approach by introducing a controlled evaluation environment for analogical reasoning.
We analyze the comparative dynamics of inductive, abductive, and deductive inference pipelines.
We investigate advanced paradigms such as hypothesis selection, verification, and refinement, revealing their potential to scale up logical inference.
arXiv Detail & Related papers (2025-02-16T15:54:53Z)
- Elevating Legal LLM Responses: Harnessing Trainable Logical Structures and Semantic Knowledge with Legal Reasoning [19.477062052536887]
We propose the Logical-Semantic Integration Model (LSIM), a supervised framework that bridges semantic and logical coherence.
LSIM comprises three components: reinforcement learning predicts a structured fact-rule chain for each question, a trainable Deep Structured Semantic Model (DSSM) retrieves the most relevant candidate questions, and in-context learning generates the final answer.
Our experiments on a real-world legal QA dataset, validated through both automated metrics and human evaluation, demonstrate that LSIM significantly enhances accuracy and reliability compared to existing methods.
arXiv Detail & Related papers (2025-02-11T19:33:07Z)
- ZebraLogic: On the Scaling Limits of LLMs for Logical Reasoning [92.76959707441954]
We introduce ZebraLogic, a comprehensive evaluation framework for assessing LLM reasoning performance.
ZebraLogic enables the generation of puzzles with controllable and quantifiable complexity.
Our results reveal a significant decline in accuracy as problem complexity grows.
arXiv Detail & Related papers (2025-02-03T06:44:49Z)
- SR-FoT: A Syllogistic-Reasoning Framework of Thought for Large Language Models Tackling Knowledge-based Reasoning Tasks [42.392103712958445]
Large Language Models (LLMs) might not follow the correct reasoning paths.
We propose a multi-stage Syllogistic-Reasoning Framework of Thought (SR-FoT).
Our SR-FoT begins by interpreting the question and then uses the interpretation and the original question to propose a suitable major premise.
arXiv Detail & Related papers (2025-01-20T17:00:41Z)
- Can Large Language Models Reason? A Characterization via 3-SAT [11.422434149376478]
Large Language Models (LLMs) have been touted as AI models possessing advanced reasoning abilities.
Recent works have shown that LLMs often bypass true reasoning using shortcuts, sparking skepticism.
We propose an experimental protocol centered on 3-SAT -- the NP-complete problem lying at the core of logical reasoning and constraint satisfaction tasks (a minimal 3-SAT sketch appears after this list).
arXiv Detail & Related papers (2024-08-13T21:54:10Z)
- CLR-Fact: Evaluating the Complex Logical Reasoning Capability of Large Language Models over Factual Knowledge [44.59258397967782]
Large language models (LLMs) have demonstrated impressive capabilities across various natural language processing tasks.
We present a systematic evaluation of state-of-the-art LLMs' complex logical reasoning abilities.
We find that LLMs excel at reasoning over general world knowledge but face significant challenges with specialized domain-specific knowledge.
arXiv Detail & Related papers (2024-07-30T05:40:32Z)
- Language Models can be Logical Solvers [99.40649402395725]
We introduce LoGiPT, a novel language model that directly emulates the reasoning processes of logical solvers.
LoGiPT is fine-tuned on a newly constructed instruction-tuning dataset derived from revealing and refining the invisible reasoning process of deductive solvers.
arXiv Detail & Related papers (2023-11-10T16:23:50Z)
- ChatABL: Abductive Learning via Natural Language Interaction with ChatGPT [72.83383437501577]
Large language models (LLMs) have recently demonstrated significant potential in mathematical reasoning.
LLMs currently have difficulty in bridging perception, language understanding and reasoning capabilities.
This paper presents a novel method for integrating LLMs into the abductive learning framework.
arXiv Detail & Related papers (2023-04-21T16:23:47Z)
- AR-LSAT: Investigating Analytical Reasoning of Text [57.1542673852013]
We study the challenge of analytical reasoning of text and introduce a new dataset consisting of questions from the Law School Admission Test from 1991 to 2016.
We analyze what knowledge, understanding and reasoning abilities are required to do well on this task.
arXiv Detail & Related papers (2021-04-14T02:53:32Z)
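The 3-SAT entry above describes probing LLM reasoning with instances of the canonical NP-complete satisfiability problem. As a minimal sketch of what such a probe can look like, the snippet below generates a random 3-CNF formula and computes a brute-force ground-truth label; the generator, its parameters and the prompting setup mentioned in the comments are illustrative assumptions, not the authors' exact experimental protocol.

# Minimal 3-SAT probe sketch (illustrative only, not the paper's protocol).
import random
from itertools import product

def random_3sat(num_vars: int, num_clauses: int, seed: int = 0):
    """Return a random 3-CNF formula as a list of clauses.
    Each clause is a tuple of three non-zero ints; the sign gives polarity."""
    rng = random.Random(seed)
    clauses = []
    for _ in range(num_clauses):
        chosen = rng.sample(range(1, num_vars + 1), 3)
        clauses.append(tuple(v if rng.random() < 0.5 else -v for v in chosen))
    return clauses

def is_satisfiable(clauses, num_vars):
    """Brute-force ground truth; fine for small num_vars."""
    for bits in product([False, True], repeat=num_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

formula = random_3sat(num_vars=5, num_clauses=20)
label = is_satisfiable(formula, num_vars=5)
# An LLM would be shown `formula` (e.g. rendered as text) and asked SAT/UNSAT;
# `label` serves as the verifiable ground truth for scoring its answer.
print(formula, "SAT" if label else "UNSAT")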