A framework for step-wise explaining how to solve constraint
satisfaction problems
- URL: http://arxiv.org/abs/2006.06343v1
- Date: Thu, 11 Jun 2020 11:35:41 GMT
- Title: A framework for step-wise explaining how to solve constraint
satisfaction problems
- Authors: Bart Bogaerts, Emilio Gamba, Tias Guns
- Abstract summary: We study the problem of explaining the inference steps that one can take during propagation, in a way that is easy to interpret for a person.
Thereby, we aim to give the constraint solver explainable agency, which can help in building trust in the solver.
- Score: 21.96171133035504
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We explore the problem of step-wise explaining how to solve constraint
satisfaction problems, with a use case on logic grid puzzles. More
specifically, we study the problem of explaining the inference steps that one
can take during propagation, in a way that is easy to interpret for a person.
Thereby, we aim to give the constraint solver explainable agency, which can
help in building trust in the solver by being able to understand and even learn
from the explanations. The main challenge is that of finding a sequence of
simple explanations, where each explanation should aim to be as cognitively
easy as possible for a human to verify and understand. This contrasts with the
arbitrary combination of facts and constraints that the solver may use when
propagating. We propose the use of a cost function to quantify how simple an
individual explanation of an inference step is, and identify the
explanation-production problem of finding the best sequence of explanations of
a CSP. Our approach is agnostic of the underlying constraint propagation
mechanisms, and can provide explanations even for inference steps resulting
from combinations of constraints. When multiple constraints are involved, we
also develop a mechanism that breaks the most difficult steps up further, thus
giving the user the ability to zoom in on specific parts of the
explanation. Our proposed algorithm iteratively constructs the explanation
sequence by using an optimistic estimate of the cost function to guide the
search for the best explanation at each step. Our experiments on logic grid
puzzles show the feasibility of the approach in terms of the quality of the
individual explanations and of the resulting explanation sequences.
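The iterative, cost-guided search described in the abstract can be sketched as a greedy loop. The sketch below is only an illustration under simplifying assumptions (facts are atoms, constraints are implication rules, and the step cost is an arbitrary callable); it is not the paper's exact algorithm:

```python
def explain_sequence(initial_facts, rules, cost):
    """Greedily build an explanation sequence: at every iteration, pick
    the cheapest applicable inference step and record it.

    rules: iterable of (premises, conclusion) pairs, where premises is a
    set of facts that together derive the conclusion.
    cost: callable scoring a (premises, conclusion) step; lower = simpler.
    """
    known = set(initial_facts)
    steps = []
    while True:
        # All inference steps whose premises are established but whose
        # conclusion is still new.
        candidates = [(p, c) for p, c in rules
                      if set(p) <= known and c not in known]
        if not candidates:
            return steps  # propagation fixpoint reached
        step = min(candidates, key=cost)
        steps.append(step)
        known.add(step[1])


# Toy example: three implication rules and a cost that simply counts the
# premises a step uses (a stand-in for the paper's cost function).
rules = [({"a", "b"}, "c"), ({"a"}, "d"), ({"c", "d"}, "e")]
seq = explain_sequence({"a", "b"}, rules, cost=lambda s: len(s[0]))
print([c for _, c in seq])  # derivation order: ['d', 'c', 'e']
```

The greedy choice mirrors the paper's idea of preferring the cognitively simplest next step, while the paper additionally uses an optimistic cost estimate to prune the search.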
Related papers
- Make LLMs better zero-shot reasoners: Structure-orientated autonomous reasoning [52.83539473110143]
We introduce a novel structure-oriented analysis method to help Large Language Models (LLMs) better understand a question.
To further improve reliability in complex question-answering tasks, we propose a multi-agent reasoning system, Structure-oriented Autonomous Reasoning Agents (SARA).
Extensive experiments verify the effectiveness of the proposed reasoning system. Surprisingly, in some cases, the system even surpasses few-shot methods.
arXiv Detail & Related papers (2024-10-18T05:30:33Z) - Distilling Reasoning Ability from Large Language Models with Adaptive Thinking [54.047761094420174]
Chain-of-thought finetuning (cot-finetuning) aims to endow small language models (SLMs) with reasoning ability to improve their performance on specific tasks.
Most existing cot-finetuning methods adopt a pre-thinking mechanism, allowing the SLM to generate a rationale before providing an answer.
This mechanism enables the SLM to analyze and think about complex questions, but it also makes answer correctness highly sensitive to minor errors in the rationale.
We propose a robust post-thinking mechanism that generates the answer before the rationale.
arXiv Detail & Related papers (2024-04-14T07:19:27Z) - From Robustness to Explainability and Back Again [3.7950144463212134]
This paper addresses the poor scalability of formal explainability and proposes novel efficient algorithms for computing formal explanations.
The proposed algorithm computes explanations by instead answering a number of robustness queries, such that the number of these queries is at most linear in the number of features.
arXiv Detail & Related papers (2023-06-05T17:21:05Z) - Explainable Data-Driven Optimization: From Context to Decision and Back
Again [76.84947521482631]
Data-driven optimization uses contextual information and machine learning algorithms to find solutions to decision problems with uncertain parameters.
We introduce a counterfactual explanation methodology tailored to explain solutions to data-driven problems.
We demonstrate our approach by explaining key problems in operations management such as inventory management and routing.
arXiv Detail & Related papers (2023-01-24T15:25:16Z) - Successive Prompting for Decomposing Complex Questions [50.00659445976735]
Recent works leverage the capabilities of large language models (LMs) to perform complex question answering in a few-shot setting.
We introduce "Successive Prompting", where we iteratively break down a complex task into simpler tasks, solve each one, and repeat the process until we reach the final solution.
Our best model (with successive prompting) achieves an improvement of 5% absolute F1 on a few-shot version of the DROP dataset.
arXiv Detail & Related papers (2022-12-08T06:03:38Z) - Don't Explain Noise: Robust Counterfactuals for Randomized Ensembles [50.81061839052459]
We formalize the generation of robust counterfactual explanations as a probabilistic problem.
We show the link between the robustness of ensemble models and the robustness of base learners.
Our method achieves high robustness with only a small increase in the distance from counterfactual explanations to their initial observations.
arXiv Detail & Related papers (2022-05-27T17:28:54Z) - Finding Counterfactual Explanations through Constraint Relaxations [6.961253535504979]
Interactive constraint systems often suffer from infeasibility (no solution) due to conflicting user constraints.
A common approach to recovering from infeasibility is to eliminate the constraints that cause the conflicts in the system.
We propose an iterative method based on conflict detection and maximal relaxations in over-constrained constraint satisfaction problems.
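A classic way to compute one such maximal relaxation is a single "grow" pass over the constraints, keeping each constraint only while the set stays satisfiable. The sketch below is a generic illustration of that idea (the brute-force `satisfiable` check and the interval constraints are placeholder assumptions, not this paper's method):

```python
def maximal_relaxation(constraints, satisfiable):
    """Grow a maximal satisfiable subset: scan the constraints once and
    keep each one only if adding it preserves satisfiability. The
    constraints left out form one (set-inclusion minimal) correction set."""
    kept = []
    for c in constraints:
        if satisfiable(kept + [c]):
            kept.append(c)
    return kept


# Toy model: constraints are predicates over a single integer variable,
# and satisfiability is checked by brute force over a small domain.
def satisfiable(cs):
    return any(all(c(x) for c in cs) for x in range(-50, 51))

constraints = [lambda x: x > 5, lambda x: x < 3, lambda x: x < 10]
kept = maximal_relaxation(constraints, satisfiable)
print(len(kept))  # keeps x > 5 and x < 10; drops the conflicting x < 3
```

Which maximal relaxation is found depends on the scan order, which is why conflict-detection methods typically iterate over several relaxations.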
arXiv Detail & Related papers (2022-04-07T13:18:54Z) - Counterfactual Explanations in Sequential Decision Making Under
Uncertainty [27.763369810430653]
We develop methods to find counterfactual explanations for sequential decision making processes.
In our problem formulation, the counterfactual explanation specifies an alternative sequence of actions differing in at most k actions.
We show that our algorithm can provide valuable insights to enhance decision making under uncertainty.
arXiv Detail & Related papers (2021-07-06T17:38:19Z) - Efficiently Explaining CSPs with Unsatisfiable Subset Optimization [17.498283247757445]
We build on a recently proposed method for explaining solutions of constraint satisfaction problems.
An explanation here is a sequence of simple inference steps, where the simplicity of an inference step is measured by the number and types of constraints and facts used.
We tackle two emerging questions, namely how to generate explanations that are provably optimal and how to generate them efficiently.
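Explanations of individual propagation steps are typically derived from small unsatisfiable subsets of the constraints and facts involved. A standard deletion-based extraction of a (set-inclusion) minimal unsatisfiable subset can be sketched as follows; the toy constraints and brute-force check are illustrative assumptions, not this paper's optimal method:

```python
def minimal_unsat_subset(constraints, is_unsat):
    """Deletion-based MUS extraction: try removing each constraint in
    turn; if the rest stays unsatisfiable, that constraint is redundant
    for the conflict and can be dropped permanently."""
    core = list(constraints)
    i = 0
    while i < len(core):
        trial = core[:i] + core[i + 1:]
        if is_unsat(trial):
            core = trial       # still conflicting without it: drop it
        else:
            i += 1             # needed for the conflict: keep it
    return core


# Toy model: constraints over one integer variable, checked by brute force.
def is_unsat(cs):
    return not any(all(c(x) for c in cs) for x in range(-50, 51))

cs = [lambda x: x > 5, lambda x: x < 3, lambda x: x < 10]
core = minimal_unsat_subset(cs, is_unsat)
print(len(core))  # the conflict reduces to {x > 5, x < 3}
```

This yields a subset that is minimal with respect to inclusion; finding a cost-optimal unsatisfiable subset, as this paper targets, is a harder optimization problem.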
arXiv Detail & Related papers (2021-05-25T08:57:43Z) - Discrete Reasoning Templates for Natural Language Understanding [79.07883990966077]
We present an approach that reasons about complex questions by decomposing them to simpler subquestions.
We derive the final answer according to instructions in a predefined reasoning template.
We show that our approach is competitive with the state-of-the-art while being interpretable and requires little supervision.
arXiv Detail & Related papers (2021-04-05T18:56:56Z) - ExplanationLP: Abductive Reasoning for Explainable Science Question
Answering [4.726777092009554]
This paper frames question answering as an abductive reasoning problem.
We construct plausible explanations for each choice and then select the candidate with the best explanation as the final answer.
Our system, ExplanationLP, elicits explanations by constructing a weighted graph of relevant facts for each candidate answer.
arXiv Detail & Related papers (2020-10-25T14:49:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.