Question Answering with LLMs and Learning from Answer Sets
- URL: http://arxiv.org/abs/2509.16590v1
- Date: Sat, 20 Sep 2025 09:26:44 GMT
- Title: Question Answering with LLMs and Learning from Answer Sets
- Authors: Manuel Borroto, Katie Gallagher, Antonio Ielo, Irfan Kareem, Francesco Ricca, Alessandra Russo
- Abstract summary: Large Language Models (LLMs) excel at understanding natural language but struggle with explicit commonsense reasoning. We introduce LLM2LAS, a hybrid system that effectively combines the natural language understanding capabilities of LLMs, the rule induction power of the Learning from Answer Sets system ILASP, and the formal reasoning strengths of Answer Set Programming (ASP). Empirical results outline the strengths and weaknesses of our automatic approach for learning and reasoning in a story-based question answering benchmark.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models (LLMs) excel at understanding natural language but struggle with explicit commonsense reasoning. A recent line of research suggests that combining LLMs with robust symbolic reasoning systems can overcome this problem on story-based question answering tasks. In this setting, existing approaches typically depend on human expertise to manually craft the symbolic component. We argue, however, that this component can also be automatically learned from examples. In this work, we introduce LLM2LAS, a hybrid system that effectively combines the natural language understanding capabilities of LLMs, the rule induction power of the Learning from Answer Sets (LAS) system ILASP, and the formal reasoning strengths of Answer Set Programming (ASP). LLMs are used to extract semantic structures from text, which ILASP then transforms into interpretable logic rules. These rules allow an ASP solver to perform precise and consistent reasoning, enabling correct answers to previously unseen questions. Empirical results outline the strengths and weaknesses of our automatic approach for learning and reasoning in a story-based question answering benchmark.
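The pipeline the abstract describes (LLM extracts facts from a story, an induced rule is applied, a reasoner derives the answer) can be sketched in miniature. The snippet below is a hypothetical illustration, not the authors' system: the story sentences, the `extract_facts`/`apply_learned_rule` helpers, and the single "a person is where they last moved" rule are all invented stand-ins for the LLM extraction stage, ILASP's rule induction, and the ASP solver, respectively.

```python
# Minimal sketch of an LLM2LAS-style pipeline (illustrative only).
# Stage 1 stands in for LLM semantic extraction; stage 2 stands in for
# reasoning with a single ILASP-style induced rule. A real system would
# use an LLM and a full ASP solver instead of these toy functions.

def extract_facts(story):
    """Stand-in for the LLM extraction stage: map simple sentences of the
    assumed form "<Name> moved to the <location>." to move events, in order."""
    facts = []
    for sentence in story:
        words = sentence.rstrip(".").split()
        if "moved" in words:
            facts.append(("moved", words[0].lower(), words[-1]))
    return facts

def apply_learned_rule(facts):
    """Stand-in for solver-based reasoning with one induced rule:
    at(P, L) holds if P's most recent recorded move is to L."""
    location = {}
    for pred, person, place in facts:
        if pred == "moved":
            location[person] = place  # later moves override earlier ones
    return {("at", p, l) for p, l in location.items()}

story = [
    "Mary moved to the kitchen.",
    "John moved to the office.",
    "Mary moved to the garden.",
]
conclusions = apply_learned_rule(extract_facts(story))
print(sorted(conclusions))  # [('at', 'john', 'office'), ('at', 'mary', 'garden')]
```

The point of the design, as the abstract notes, is that the rule in `apply_learned_rule` is not hand-written in the real system: ILASP induces it from question-answer examples, and an ASP solver then applies it consistently to unseen stories.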
Related papers
- Bridging Natural Language and ASP: A Hybrid Approach Using LLMs and AMR Parsing [0.14658400971135646]
This paper proposes a novel method of translating unconstrained English into ASP programs for logic puzzles. All ASP rules, facts, and constraints are generated to fully represent and solve the desired problem.
arXiv Detail & Related papers (2025-11-11T19:25:44Z) - Last Layer Logits to Logic: Empowering LLMs with Logic-Consistent Structured Knowledge Reasoning [55.55968342644846]
Large Language Models (LLMs) achieve excellent performance in natural language reasoning tasks through pre-training on vast unstructured text. We propose the Logits-to-Logic framework, which incorporates logits strengthening and logits filtering as core modules to correct logical defects in LLM outputs.
arXiv Detail & Related papers (2025-11-11T07:08:27Z) - LLM+AL: Bridging Large Language Models and Action Languages for Complex Reasoning about Actions [7.575628120822444]
"LLM+AL" is a method that bridges the natural language understanding capabilities of LLMs with the symbolic reasoning strengths of action languages.<n>We compare "LLM+AL" against state-of-the-art LLMs, including ChatGPT-4, Claude 3 Opus, Gemini Ultra 1.0, and o1-preview.<n>Our findings indicate that, although all methods exhibit errors, LLM+AL, with relatively minimal human corrections, consistently leads to correct answers.
arXiv Detail & Related papers (2025-01-01T13:20:01Z) - RuAG: Learned-rule-augmented Generation for Large Language Models [62.64389390179651]
We propose a novel framework, RuAG, to automatically distill large volumes of offline data into interpretable first-order logic rules.
We evaluate our framework on public and private industrial tasks spanning natural language processing, time-series analysis, and decision-making.
arXiv Detail & Related papers (2024-11-04T00:01:34Z) - Language Agents Meet Causality -- Bridging LLMs and Causal World Models [50.79984529172807]
We propose a framework that integrates causal representation learning with large language models.
This framework learns a causal world model, with causal variables linked to natural language expressions.
We evaluate the framework on causal inference and planning tasks across temporal scales and environmental complexities.
arXiv Detail & Related papers (2024-10-25T18:36:37Z) - Multi-Step Reasoning with Large Language Models, a Survey [2.831296564800826]
This paper reviews the field of multi-step reasoning with LLMs. We propose a taxonomy that identifies different ways to generate, evaluate, and control multi-step reasoning. We find that multi-step reasoning approaches have progressed beyond math word problems, and can now successfully solve challenges in logic, games, and robotics.
arXiv Detail & Related papers (2024-07-16T08:49:35Z) - How Proficient Are Large Language Models in Formal Languages? An In-Depth Insight for Knowledge Base Question Answering [52.86931192259096]
Knowledge Base Question Answering (KBQA) aims to answer natural language questions based on facts in knowledge bases.
Recent works leverage the capabilities of large language models (LLMs) for logical form generation to improve performance.
arXiv Detail & Related papers (2024-01-11T09:27:50Z) - An In-Context Schema Understanding Method for Knowledge Base Question Answering [70.87993081445127]
Large Language Models (LLMs) have shown strong capabilities in language understanding and can be used to solve this task.
Existing methods bypass this challenge by initially employing LLMs to generate drafts of logic forms without schema-specific details.
We propose a simple In-Context Understanding (ICSU) method that enables LLMs to directly understand schemas by leveraging in-context learning.
arXiv Detail & Related papers (2023-10-22T04:19:17Z) - Reliable Natural Language Understanding with Large Language Models and Answer Set Programming [0.0]
Large language models (LLMs) are able to leverage patterns in the text to solve a variety of NLP tasks, but fall short in problems that require reasoning.
We propose STAR, a framework that combines LLMs with Answer Set Programming (ASP).
Goal-directed ASP is then employed to reliably reason over this knowledge.
arXiv Detail & Related papers (2023-02-07T22:37:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.