Efficiently Explaining CSPs with Unsatisfiable Subset Optimization
(extended algorithms and examples)
- URL: http://arxiv.org/abs/2303.11712v3
- Date: Tue, 28 Nov 2023 13:07:47 GMT
- Title: Efficiently Explaining CSPs with Unsatisfiable Subset Optimization
(extended algorithms and examples)
- Authors: Emilio Gamba, Bart Bogaerts, Tias Guns
- Abstract summary: We build on a recently proposed method for stepwise explaining solutions of Constraint Satisfaction Problems.
An explanation here is a sequence of simple inference steps, where simplicity is quantified using a cost function.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We build on a recently proposed method for stepwise explaining solutions of
Constraint Satisfaction Problems (CSPs) in a human-understandable way. An
explanation here is a sequence of simple inference steps, where simplicity is
quantified using a cost function. The algorithms for explanation generation
rely on extracting Minimal Unsatisfiable Subsets (MUSs) of a derived
unsatisfiable formula, exploiting a one-to-one correspondence between so-called
non-redundant explanations and MUSs. However, MUS extraction algorithms provide
no guarantee of subset minimality or optimality with respect to a given cost
function. Therefore, we build on these formal foundations and tackle the main
points of improvement, namely how to efficiently generate explanations that are
provably optimal (with respect to the given cost metric). To that end, we
develop (1) a hitting set-based algorithm for finding optimal constrained
unsatisfiable subsets; (2) a method for reusing relevant information over
multiple algorithm calls; and (3) methods exploiting domain-specific
information to speed up explanation sequence generation. We experimentally
validated our algorithms on a large number of CSPs and found that they
outperform the MUS approach in both explanation quality and computation time
(on average up to 56% faster than a standard MUS approach).
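Contribution (1) rests on the implicit hitting set duality between unsatisfiable subsets and correction subsets: a minimum-cost hitting set of all minimal correction subsets that is itself unsatisfiable is a cost-optimal unsatisfiable subset. The Python sketch below illustrates that loop on a toy propositional formula. It is a minimal illustration of the general scheme, not the authors' implementation; the brute-force satisfiability check and the exhaustive hitting set enumeration are stand-ins for the incremental SAT oracle and the MIP-based hitting set solver a practical implementation would use, and all function names and the toy formula are invented for this example.

```python
from itertools import chain, combinations, product

def is_sat(clauses, n_vars):
    # Brute-force satisfiability check over all assignments (toy sizes only).
    # A real implementation would call an incremental SAT/CP oracle here.
    for assign in product([False, True], repeat=n_vars):
        if all(any(assign[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

def min_cost_hitting_set(sets_to_hit, costs):
    # Exhaustive minimum-cost hitting set over constraint indices;
    # a real implementation delegates this to a MIP solver.
    items = sorted(costs)
    best_cost, best_set = None, None
    for subset in chain.from_iterable(
            combinations(items, r) for r in range(len(items) + 1)):
        chosen = set(subset)
        if all(chosen & s for s in sets_to_hit):
            cost = sum(costs[i] for i in chosen)
            if best_cost is None or cost < best_cost:
                best_cost, best_set = cost, chosen
    return best_set

def optimal_unsat_subset(constraints, costs, n_vars):
    # Implicit hitting set loop: a minimum-cost hitting set of all known
    # correction subsets that is itself unsatisfiable is a cost-optimal
    # unsatisfiable subset.
    correction_subsets = []
    while True:
        hs = min_cost_hitting_set(correction_subsets, costs)
        if not is_sat([constraints[i] for i in hs], n_vars):
            return hs  # optimal w.r.t. the given cost function
        # Grow the satisfiable candidate into a maximal satisfiable subset;
        # its complement is a correction subset that every unsatisfiable
        # subset must hit, so recording it prunes future hitting sets.
        mss = set(hs)
        for i in costs:
            if i not in mss and is_sat(
                    [constraints[j] for j in mss | {i}], n_vars):
                mss.add(i)
        correction_subsets.append(set(costs) - mss)

# Toy formula over x1, x2 with clause indices 0..3 and per-clause costs:
# 0: (x1), 1: (-x1 or x2), 2: (-x2), 3: (-x1)
constraints = {0: [1], 1: [-1, 2], 2: [-2], 3: [-1]}
costs = {0: 1, 1: 1, 2: 1, 3: 5}
print(optimal_unsat_subset(constraints, costs, n_vars=2))  # -> {0, 1, 2}
```

On this formula the loop returns {0, 1, 2} (total cost 3) rather than the smaller but costlier unsatisfiable subset {0, 3} (total cost 6), which is exactly the optimality guarantee with respect to the cost function that plain MUS extraction does not provide.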
Related papers
- Coherent Local Explanations for Mathematical Optimization
We introduce Coherent Local Explanations for Mathematical Optimization (CLEMO).
CLEMO provides explanations for multiple components of optimization models, namely the objective value and the decision variables, that are coherent with the underlying model structure.
Our sampling-based procedure can provide explanations for the behavior of exact and heuristic solution algorithms.
arXiv Detail & Related papers (2025-02-07T11:18:04Z)
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
- OptABC: an Optimal Hyperparameter Tuning Approach for Machine Learning Algorithms
OptABC is proposed to help the artificial bee colony (ABC) algorithm converge faster toward a near-optimal solution.
OptABC integrates the ABC algorithm, K-means clustering, a greedy algorithm, and an opposition-based learning strategy.
Experimental results demonstrate the effectiveness of OptABC compared to existing approaches in the literature.
arXiv Detail & Related papers (2021-12-15T22:33:39Z)
- Efficiently Explaining CSPs with Unsatisfiable Subset Optimization
We build on a recently proposed method for explaining solutions of constraint satisfaction problems.
An explanation here is a sequence of simple inference steps, where the simplicity of an inference step is measured by the number and types of constraints and facts used.
We tackle two emerging questions, namely how to generate explanations that are provably optimal and how to generate them efficiently.
arXiv Detail & Related papers (2021-05-25T08:57:43Z)
- Bayesian Algorithm Execution: Estimating Computable Properties of Black-box Functions Using Mutual Information
In many real-world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations.
We present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm's output.
On these problems, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm.
arXiv Detail & Related papers (2021-04-19T17:22:11Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- Accelerated Message Passing for Entropy-Regularized MAP Inference
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient methods.
arXiv Detail & Related papers (2020-07-01T18:43:32Z)
- Refined bounds for algorithm configuration: The knife-edge of dual class approximability
We investigate how large a training set should be to ensure that a parameter's average performance over the training set is close to its expected future performance.
We show that if this approximation holds under the L-infinity norm, we can provide strong sample complexity bounds.
We empirically evaluate our bounds in the context of integer programming, one of the most powerful tools in computer science.
arXiv Detail & Related papers (2020-06-21T15:32:21Z)
- Exploiting Higher Order Smoothness in Derivative-free Optimization and Continuous Bandits
We study the problem of zero-order optimization of a strongly convex function.
We consider a randomized approximation of the projected gradient descent algorithm.
Our results imply that the zero-order algorithm is nearly optimal in terms of sample complexity and the problem parameters.
arXiv Detail & Related papers (2020-06-14T10:42:23Z)