Harnessing Incremental Answer Set Solving for Reasoning in
Assumption-Based Argumentation
- URL: http://arxiv.org/abs/2108.04192v1
- Date: Mon, 9 Aug 2021 17:34:05 GMT
- Title: Harnessing Incremental Answer Set Solving for Reasoning in
Assumption-Based Argumentation
- Authors: Tuomo Lehtonen, Johannes P. Wallner, Matti Järvisalo
- Abstract summary: Assumption-based argumentation (ABA) is a central structured argumentation formalism.
Recent advances in answer set programming (ASP) enable efficiently solving NP-hard reasoning tasks of ABA in practice.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Assumption-based argumentation (ABA) is a central structured argumentation
formalism. As shown recently, answer set programming (ASP) enables efficiently
solving NP-hard reasoning tasks of ABA in practice, in particular in the
commonly studied logic programming fragment of ABA. In this work, we harness
recent advances in incremental ASP solving for developing effective algorithms
for reasoning tasks in the logic programming fragment of ABA that are
presumably hard for the second level of the polynomial hierarchy, including
skeptical reasoning under preferred semantics as well as preferential
reasoning. In particular, we develop non-trivial counterexample-guided
abstraction refinement procedures based on incremental ASP solving for these
tasks. We also show empirically that the procedures are significantly more
effective than previously proposed algorithms for the tasks.
This paper is under consideration for acceptance in TPLP.
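The counterexample-guided abstraction refinement (CEGAR) idea behind these procedures can be illustrated with a small, self-contained sketch. This is in Python rather than ASP, with brute-force subset enumeration standing in for the incremental ASP solver, and it works on a tiny abstract argumentation framework rather than the paper's ABA encoding; the function names, the example framework, and the particular refinement constraint are illustrative assumptions, not the paper's implementation. The shape of the loop is the point: an outer query proposes an admissible candidate omitting the queried argument, an inner check searches for a counterexample (an admissible strict superset, showing the candidate is not subset-maximal), and each counterexample is turned into a constraint on future candidates.

```python
from itertools import combinations

def attacks(att, S, x):
    """Does some member of S attack argument x?"""
    return any((s, x) in att for s in S)

def conflict_free(att, S):
    return not any((a, b) in att for a in S for b in S)

def admissible(args, att, S):
    """S is conflict-free and defends each member against every attacker."""
    return conflict_free(att, S) and all(
        attacks(att, S, b) for a in S for b in args if (b, a) in att
    )

def solve(args, att, constraints):
    """Stand-in for the (incremental) ASP solver: return some admissible
    set satisfying all accumulated constraints, largest first, or None."""
    for r in range(len(args), -1, -1):
        for S in map(set, combinations(args, r)):
            if admissible(args, att, S) and all(c(S) for c in constraints):
                return S
    return None

def skeptically_accepted_preferred(args, att, query):
    """CEGAR loop: query is skeptically accepted under preferred semantics
    iff no subset-maximal admissible set omits it."""
    constraints = [lambda S: query not in S]
    while True:
        cand = solve(args, att, constraints)
        if cand is None:
            return True   # no preferred extension omits the query
        # Counterexample check: admissible strict superset of the candidate?
        bigger = solve(args, att, [lambda S: cand < S])
        if bigger is None:
            return False  # cand is a preferred extension omitting the query
        # Refine: every strict subset of `bigger` has an admissible strict
        # superset (namely `bigger` itself), so none of them is preferred;
        # exclude them all, including cand, from future candidates.
        constraints.append(lambda S, b=frozenset(bigger): not (S < b))
```

For example, in the framework with arguments {a, b, c}, mutual attacks between a and b, and c unattacked, the preferred extensions are {a, c} and {b, c}, so c is skeptically accepted while a is not. In the actual procedures, the brute-force `solve` would instead be a single incremental ASP solver instance to which the refinement constraints are added without restarting the search from scratch.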
Related papers
- Enhancing Multi-Step Reasoning Abilities of Language Models through Direct Q-Function Optimization [50.485788083202124]
Reinforcement Learning (RL) plays a crucial role in aligning large language models with human preferences and improving their ability to perform complex tasks.
We introduce Direct Q-function Optimization (DQO), which formulates the response generation process as a Markov Decision Process (MDP) and utilizes the soft actor-critic (SAC) framework to optimize a Q-function directly parameterized by the language model.
Experimental results on two math problem-solving datasets, GSM8K and MATH, demonstrate that DQO outperforms previous methods, establishing it as a promising offline reinforcement learning approach for aligning language models.
arXiv Detail & Related papers (2024-10-11T23:29:20Z)
- Unlocking the Capabilities of Thought: A Reasoning Boundary Framework to Quantify and Optimize Chain-of-Thought [61.588465852846646]
Chain-of-Thought (CoT) reasoning has emerged as a promising approach for enhancing the performance of large language models (LLMs).
In this work, we introduce a novel reasoning boundary framework (RBF) to quantify and optimize CoT reasoning.
arXiv Detail & Related papers (2024-10-08T05:26:28Z)
- Learning Brave Assumption-Based Argumentation Frameworks via ASP [11.768331785549947]
Assumption-based Argumentation (ABA) is advocated as a unifying formalism for non-monotonic reasoning.
In this paper we focus on the problem of automating their learning from background knowledge and positive/negative examples.
We present a novel algorithm based on transformation rules (such as Rote Learning, Folding, Assumption Introduction and Fact Subsumption) and an implementation thereof that makes use of Answer Set Programming.
arXiv Detail & Related papers (2024-08-19T16:13:35Z)
- Planning with OWL-DL Ontologies (Extended Version) [6.767885381740952]
We present a black-box approach that supports the full expressive power of the DL.
Our main algorithm relies on rewritings of the OWL-mediated planning specifications into PDDL.
We evaluate our implementation on benchmark sets from several domains.
arXiv Detail & Related papers (2024-08-14T13:27:02Z)
- H-STAR: LLM-driven Hybrid SQL-Text Adaptive Reasoning on Tables [56.73919743039263]
This paper introduces a novel algorithm that integrates both symbolic and semantic (textual) approaches in a two-stage process to address limitations.
Our experiments demonstrate that H-STAR significantly outperforms state-of-the-art methods across three question-answering (QA) and fact-verification datasets.
arXiv Detail & Related papers (2024-06-29T21:24:19Z)
- Instantiations and Computational Aspects of Non-Flat Assumption-based Argumentation [18.32141673219938]
We study an instantiation-based approach for reasoning in possibly non-flat ABA.
We propose two algorithmic approaches for reasoning in possibly non-flat ABA.
arXiv Detail & Related papers (2024-04-17T14:36:47Z)
- Learning Planning-based Reasoning by Trajectories Collection and Process Reward Synthesizing [61.98556945939045]
We propose a framework to learn planning-based reasoning through Direct Preference Optimization (DPO) on collected trajectories.
Our results on challenging logical reasoning benchmarks demonstrate the effectiveness of our learning framework.
arXiv Detail & Related papers (2024-02-01T15:18:33Z)
- LaRS: Latent Reasoning Skills for Chain-of-Thought Reasoning [61.7853049843921]
Chain-of-thought (CoT) prompting is a popular in-context learning approach for large language models (LLMs).
This paper introduces a new approach named Latent Reasoning Skills (LaRS) that employs unsupervised learning to create a latent space representation of rationales.
arXiv Detail & Related papers (2023-12-07T20:36:10Z)
- Provably Efficient UCB-type Algorithms For Learning Predictive State Representations [55.00359893021461]
The sequential decision-making problem is statistically learnable if it admits a low-rank structure modeled by predictive state representations (PSRs).
This paper proposes the first known UCB-type approach for PSRs, featuring a novel bonus term that upper bounds the total variation distance between the estimated and true models.
In contrast to existing approaches for PSRs, our UCB-type algorithms enjoy computational tractability, last-iterate guaranteed near-optimal policy, and guaranteed model accuracy.
arXiv Detail & Related papers (2023-07-01T18:35:21Z)
- Conflict-driven Inductive Logic Programming [3.29505746524162]
The goal of Inductive Logic Programming (ILP) is to learn a program that explains a set of examples.
Until recently, most research on ILP targeted learning Prolog programs.
The ILASP system instead learns Answer Set Programs (ASP).
arXiv Detail & Related papers (2020-12-31T20:24:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.