User Guided Abductive Proof Generation for Answer Set Programming
Queries (Extended Version)
- URL: http://arxiv.org/abs/2209.07948v1
- Date: Fri, 16 Sep 2022 14:06:12 GMT
- Title: User Guided Abductive Proof Generation for Answer Set Programming
Queries (Extended Version)
- Authors: Avishkar Mahajan and Martin Strecker and Meng Weng Wong
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We present a method for generating possible proofs of a query with respect to
a given Answer Set Programming (ASP) rule set using an abductive process where
the space of abducibles is automatically constructed from the input rules
alone. Given a (possibly empty) set of user-provided facts, our method infers
any additional facts that may be needed for the entailment of a query and then
outputs these extra facts, without the user needing to explicitly specify the
space of all abducibles. We also present a method to generate a set of directed
edges corresponding to the justification graph for the query. Furthermore,
through different forms of implicit term substitution, our method can take
user-provided facts into account and suitably modify the abductive solutions.
Past work on abduction has been primarily based on goal-directed methods.
However, these methods can result in solvers that are not truly declarative.
Much less work has been done on realizing abduction in a bottom-up solver like
the Clingo ASP solver. We describe novel ASP programs which can be run directly
in Clingo to yield the abductive solutions and directed edge sets without
needing to modify the underlying solving engine.
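As an illustration of the abduction-as-ASP idea (a minimal hedged sketch, not the paper's actual encoding; the predicates and the constant tweety are hypothetical), abducibles can be modeled with choice rules, the query as an integrity constraint, and minimality of the abduced facts via an optimization directive:

```prolog
% Rule set: a bird flies unless it is known to be abnormal.
flies(X) :- bird(X), not abnormal(X).

% Abducible: the solver may guess the extra fact bird(tweety).
{ bird(tweety) }.

% Query as an integrity constraint: flies(tweety) must be entailed.
:- not flies(tweety).

% Prefer answer sets that abduce as few extra facts as possible.
#minimize { 1,X : bird(X) }.
```

Run directly in Clingo, such a program's optimal answer sets contain exactly the abduced facts needed for the query; the paper's contribution is constructing the space of such abducibles automatically from the input rules rather than writing it by hand as above.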
Related papers
- Bisimulation Learning [55.859538562698496]
We compute finite bisimulations of state transition systems with large, possibly infinite state space.
Our technique yields faster verification results than alternative state-of-the-art tools in practice.
arXiv Detail & Related papers (2024-05-24T17:11:27Z)
- IASCAR: Incremental Answer Set Counting by Anytime Refinement [18.761758874408557]
This paper introduces a technique to iteratively count answer sets under assumptions on knowledge compilations of CNFs that encode supported models.
In a preliminary empirical analysis, we demonstrate promising results.
arXiv Detail & Related papers (2023-11-13T10:53:48Z)
- Hypothesis Search: Inductive Reasoning with Language Models [39.03846394586811]
Recent work evaluates large language models on inductive reasoning tasks by directly prompting them, yielding "in-context learning".
This works well for straightforward inductive tasks but performs poorly on complex tasks such as the Abstraction and Reasoning Corpus (ARC).
In this work, we propose to improve the inductive reasoning ability of LLMs by generating explicit hypotheses at multiple levels of abstraction.
arXiv Detail & Related papers (2023-09-11T17:56:57Z)
- Abductive Commonsense Reasoning Exploiting Mutually Exclusive Explanations [118.0818807474809]
Abductive reasoning aims to find plausible explanations for an event.
Existing approaches for abductive reasoning in natural language processing often rely on manually generated annotations for supervision.
This work proposes an approach for abductive commonsense reasoning that exploits the fact that only a subset of explanations is correct for a given context.
arXiv Detail & Related papers (2023-05-24T01:35:10Z)
- SatLM: Satisfiability-Aided Language Models Using Declarative Prompting [68.40726892904286]
We propose a new satisfiability-aided language modeling (SatLM) approach for improving the reasoning capabilities of large language models (LLMs).
We use an LLM to generate a declarative task specification rather than an imperative program and leverage an off-the-shelf automated theorem prover to derive the final answer.
We evaluate SatLM on 8 different datasets and show that it consistently outperforms program-aided LMs in the imperative paradigm.
arXiv Detail & Related papers (2023-05-16T17:55:51Z)
- Natural Language Deduction with Incomplete Information [43.93269297653265]
We propose a new system that can handle the underspecified setting where not all premises are stated at the outset.
By using a natural language generation model to abductively infer a premise given another premise and a conclusion, we can impute missing pieces of evidence needed for the conclusion to be true.
arXiv Detail & Related papers (2022-11-01T17:27:55Z)
- Discovering Non-monotonic Autoregressive Orderings with Variational Inference [67.27561153666211]
We develop an unsupervised parallelizable learner that discovers high-quality generation orders purely from training data.
We implement the encoder as a Transformer with non-causal attention that outputs permutations in one forward pass.
Empirical results in language modeling tasks demonstrate that our method is context-aware and discovers orderings that are competitive with or even better than fixed orders.
arXiv Detail & Related papers (2021-10-27T16:08:09Z)
- Generating Explainable Rule Sets from Tree-Ensemble Learning Methods by Answer Set Programming [9.221315229933532]
We propose a method for generating explainable rule sets from tree-ensemble learners using Answer Set Programming (ASP).
We adopt a decompositional approach where the split structures of the base decision trees are exploited in the construction of rules.
We show how user-defined constraints and preferences can be represented declaratively in ASP to allow for transparent and flexible rule set generation.
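Declarative constraints of this kind are natural to express in ASP. As a hypothetical sketch (not the paper's encoding; the predicates rule/1 and covers/2 and the budget k are assumed inputs), a preference such as "select at most k rules while covering as many examples as possible" can be written as:

```prolog
% Size budget for the final rule set (hypothetical constant).
#const k = 3.

% Choose at most k of the candidate rules extracted from the ensemble.
{ selected(R) : rule(R) } k.

% An example is covered if some selected rule covers it.
covered(E) :- selected(R), covers(R, E).

% Prefer rule sets that cover as many examples as possible.
#maximize { 1,E : covered(E) }.
```

Swapping the budget, adding hard constraints (e.g. forbidding a particular rule), or changing the optimization criterion requires only editing these few declarative lines, which is the flexibility the decompositional approach aims for.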
arXiv Detail & Related papers (2021-09-17T01:47:38Z)
- Efficiently Explaining CSPs with Unsatisfiable Subset Optimization [17.498283247757445]
We build on a recently proposed method for explaining solutions of constraint satisfaction problems.
An explanation here is a sequence of simple inference steps, where the simplicity of an inference step is measured by the number and types of constraints and facts used.
We tackle two emerging questions, namely how to generate explanations that are provably optimal and how to generate them efficiently.
arXiv Detail & Related papers (2021-05-25T08:57:43Z)
- l1-Norm Minimization with Regula Falsi Type Root Finding Methods [81.55197998207593]
We develop an efficient approach for l1-norm minimization using Regula Falsi type root-finding techniques.
Practical performance is illustrated on l1-regularized problems.
arXiv Detail & Related papers (2021-05-01T13:24:38Z)
- Pre-training Is (Almost) All You Need: An Application to Commonsense Reasoning [61.32992639292889]
Fine-tuning of pre-trained transformer models has become the standard approach for solving common NLP tasks.
We introduce a new scoring method that casts a plausibility ranking task in a full-text format.
We show that our method provides a much more stable training phase across random restarts.
arXiv Detail & Related papers (2020-04-29T10:54:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.