Efficiently Explaining CSPs with Unsatisfiable Subset Optimization
- URL: http://arxiv.org/abs/2105.11763v1
- Date: Tue, 25 May 2021 08:57:43 GMT
- Title: Efficiently Explaining CSPs with Unsatisfiable Subset Optimization
- Authors: Emilio Gamba, Bart Bogaerts and Tias Guns
- Abstract summary: We build on a recently proposed method for explaining solutions of constraint satisfaction problems.
An explanation here is a sequence of simple inference steps, where the simplicity of an inference step is measured by the number and types of constraints and facts used.
We tackle two emerging questions, namely how to generate explanations that are provably optimal and how to generate them efficiently.
- Score: 17.498283247757445
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We build on a recently proposed method for explaining solutions of constraint
satisfaction problems. An explanation here is a sequence of simple inference
steps, where the simplicity of an inference step is measured by the number and
types of constraints and facts used, and where the sequence explains all
logical consequences of the problem. We build on these formal foundations and
tackle two emerging questions, namely how to generate explanations that are
provably optimal (with respect to the given cost metric) and how to generate
them efficiently. To answer these questions, we develop 1) an implicit hitting
set algorithm for finding optimal unsatisfiable subsets; 2) a method to reduce
multiple calls for (optimal) unsatisfiable subsets to a single call that takes
constraints on the subset into account; and 3) a method for re-using relevant
information over multiple calls to these algorithms. The method is also
applicable to other problems that require finding cost-optimal unsatisfiable
subsets. We specifically show that this approach can be used to effectively
find sequences of optimal explanation steps for constraint satisfaction
problems like logic grid puzzles.
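To make contribution 1) concrete, below is a minimal, self-contained Python sketch of the implicit hitting set loop for cost-optimal unsatisfiable subsets. It exploits the duality between unsatisfiable subsets and hitting sets over correction subsets; brute-force oracles stand in for the MIP and SAT/CP solvers a real implementation would use, and all names are illustrative rather than the authors' code.

```python
from itertools import combinations, product

def satisfiable(clauses, n_vars):
    # Toy SAT oracle: brute-force over all assignments.
    # (A real implementation calls a SAT/CP solver here.)
    return any(
        all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
            for clause in clauses)
        for assignment in product([False, True], repeat=n_vars)
    )

def min_cost_hitting_set(sets_to_hit, cost):
    # Exact minimum-cost hitting set by enumeration.
    # (The paper solves this step with a MIP solver.)
    universe = sorted(set().union(set(), *sets_to_hit))
    best, best_cost = frozenset(), float("inf")
    for r in range(len(universe) + 1):
        for cand in map(frozenset, combinations(universe, r)):
            if all(cand & s for s in sets_to_hit):
                c = sum(cost[i] for i in cand)
                if c < best_cost:
                    best, best_cost = cand, c
    return best

def optimal_unsat_subset(clauses, cost, n_vars):
    # Implicit hitting set loop: alternate a minimum-cost hitting set
    # over the correction subsets found so far with a SAT check, until
    # the hitting set itself is unsatisfiable (and hence cost-optimal).
    all_idx = frozenset(range(len(clauses)))
    to_hit = []  # correction subsets: complements of maximal satisfiable subsets
    while True:
        hs = min_cost_hitting_set(to_hit, cost)
        if not satisfiable([clauses[i] for i in hs], n_vars):
            return hs
        # Grow hs into a maximal satisfiable subset; every unsatisfiable
        # subset must intersect its complement, so record that complement.
        grown = set(hs)
        for i in sorted(all_idx - hs):
            if satisfiable([clauses[j] for j in grown | {i}], n_vars):
                grown.add(i)
        to_hit.append(all_idx - frozenset(grown))

# Example: indices 2 and 3 ({x2}, {~x2}) form the cheapest unsatisfiable subset.
clauses = [frozenset(c) for c in ({1}, {-1}, {2}, {-2}, {1, 2})]
cost = [3, 3, 1, 1, 2]
print(sorted(optimal_unsat_subset(clauses, cost, n_vars=2)))  # [2, 3]
```

Each iteration either certifies optimality or adds a fresh correction subset, so the loop terminates. Roughly, contributions 2) and 3) amount to constraining and warm-starting exactly these repeated solver calls when many explanation steps are computed in sequence.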
Related papers
- Training Greedy Policy for Proposal Batch Selection in Expensive Multi-Objective Combinatorial Optimization [52.80408805368928]
We introduce a novel greedy-style subset selection algorithm for batch acquisition.
Our experiments on red fluorescent proteins show that the proposed method matches the baseline performance with 1.69x fewer queries.
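As a rough illustration of greedy-style subset selection for batch acquisition, here is a generic Python sketch under assumed names; the toy objective is not the paper's acquisition function.

```python
import math

def greedy_batch(candidates, score, k):
    # Generic greedy batch selection: repeatedly add the candidate with
    # the largest marginal gain of a batch-level objective.
    # (Illustrative only; the paper's acquisition objective differs.)
    batch, remaining = [], list(candidates)
    while remaining and len(batch) < k:
        best = max(remaining, key=lambda c: score(batch + [c]) - score(batch))
        batch.append(best)
        remaining.remove(best)
    return batch

# Toy objective: total value plus a diminishing-returns bonus for batch size.
values = {"a": 3.0, "b": 2.5, "c": 2.4, "d": 1.0}
score = lambda batch: sum(values[c] for c in batch) + math.log1p(len(batch))
print(greedy_batch(values, score, k=2))  # ['a', 'b']
```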
arXiv Detail & Related papers (2024-06-21T05:57:08Z)
- Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems, including Max- and Min-Cut, Max-$k$-Bipartite-Bi, Maximum-Weight-Bipartite-Bi, and the Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z)
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
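For context, a bare-bones cutting-plane loop looks as follows: Kelley's method in one dimension, shown as a hedged Python sketch. The paper's contribution is to accelerate the expensive cut-generation step with a learned reinforcement-learning surrogate; this sketch only shows the generic loop such a surrogate would plug into.

```python
from itertools import combinations

def kelley_min(f, grad, lo, hi, iters=30):
    # Kelley's cutting-plane method in one dimension: minimize a convex f
    # on [lo, hi] by repeatedly minimizing the max of linear
    # under-estimators (cuts) and adding a new cut at the minimizer.
    cuts = []                      # each cut: z -> fx + g * (z - xc)
    x = lo
    for _ in range(iters):
        cuts.append((f(x), grad(x), x))
        def model(z):
            return max(fx + g * (z - xc) for fx, g, xc in cuts)
        # The piecewise-linear model attains its minimum at an interval
        # endpoint or at an intersection of two cuts.
        candidates = [lo, hi]
        for (f1, g1, c1), (f2, g2, c2) in combinations(cuts, 2):
            if g1 != g2:
                z = (f2 - g2 * c2 - f1 + g1 * c1) / (g1 - g2)
                if lo <= z <= hi:
                    candidates.append(z)
        x = min(candidates, key=model)
    return x

# Toy: minimize (x - 1)^2 on [-3, 3]; the iterates converge to x = 1.
print(round(kelley_min(lambda x: (x - 1) ** 2, lambda x: 2 * (x - 1), -3, 3), 3))
```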
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
- Efficiently Explaining CSPs with Unsatisfiable Subset Optimization (extended algorithms and examples) [14.163888405810635]
We build on a recently proposed method for stepwise explaining solutions of Constraint Satisfaction Problems.
An explanation here is a sequence of simple inference steps where simplicity is quantified using a cost function.
arXiv Detail & Related papers (2023-03-21T10:03:36Z)
- A Sequential Deep Learning Algorithm for Sampled Mixed-integer Optimisation Problems [0.3867363075280544]
We introduce and analyse two efficient algorithms for mixed-integer optimisation problems.
We show that both algorithms exhibit finite-time convergence towards the optimal solution.
We establish quantitatively the efficacy of these algorithms by means of three numerical tests.
arXiv Detail & Related papers (2023-01-25T17:10:52Z)
- Efficient Neural Network Analysis with Sum-of-Infeasibilities [64.31536828511021]
Inspired by sum-of-infeasibilities methods in convex optimization, we propose a novel procedure for analyzing verification queries on networks with extensive branching functions.
A canonical case-analysis-based complete search procedure can be extended by replacing the convex procedure executed at each search state with DeepSoI.
arXiv Detail & Related papers (2022-03-19T15:05:09Z)
- Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that for weakly-convex objectives and under mild conditions, the method converges globally.
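For background, the proximal operator of a function f maps a point v to argmin_x f(x) + 0.5*(x - v)^2. Below is a hedged Python sketch using the classic closed form for the L1 penalty (soft-thresholding); the paper learns such operators for problems where no closed form exists.

```python
def prox_l1(v, lam):
    # Closed-form proximal operator of f(x) = lam * |x| (soft-thresholding):
    # prox_f(v) = argmin_x  f(x) + 0.5 * (x - v) ** 2.
    return max(abs(v) - lam, 0.0) * (1.0 if v >= 0 else -1.0)

# Proximal-point iteration x <- prox(x) drives x toward a minimizer of f.
x = 5.0
for _ in range(8):
    x = prox_l1(x, lam=1.0)
print(x)  # 0.0
```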
arXiv Detail & Related papers (2022-01-28T05:53:28Z)
- Dynamic programming by polymorphic semiring algebraic shortcut fusion [1.9405875431318445]
Dynamic programming (DP) is an algorithmic design paradigm for the efficient, exact solution of otherwise intractable combinatorial problems.
This paper presents a rigorous algebraic formalism for systematically deriving DP algorithms, based on semirings.
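The semiring idea can be illustrated with a short Python sketch (a loose illustration, not the paper's formalism or its fusion laws): one DP recursion over a DAG, with the semiring passed in as a parameter, computes either shortest-path costs or path counts.

```python
def dp_paths(edges, n, plus, times, zero, one):
    # Value of all source-to-sink paths in a DAG on nodes 0..n-1, folded
    # through an arbitrary semiring (plus, times, zero, one).
    val = [zero] * n
    val[0] = one
    for u in range(n):              # nodes assumed in topological order
        for v, w in edges.get(u, []):
            val[v] = plus(val[v], times(val[u], w))
    return val[n - 1]

edges = {0: [(1, 2.0), (2, 5.0)], 1: [(2, 1.0), (3, 7.0)], 2: [(3, 1.0)]}
# Min-plus semiring: shortest-path cost from node 0 to node 3.
print(dp_paths(edges, 4, min, lambda a, b: a + b, float("inf"), 0.0))  # 4.0
# Sum-product semiring over unit weights: number of 0 -> 3 paths.
ones = {u: [(v, 1) for v, _ in es] for u, es in edges.items()}
print(dp_paths(ones, 4, lambda a, b: a + b, lambda a, b: a * b, 0, 1))  # 3
```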
arXiv Detail & Related papers (2021-07-05T00:51:02Z)
- On Satisficing in Quantitative Games [30.53498001438171]
This work defines and investigates the satisficing problem on a two-player graph game with the discounted-sum cost model.
We show that while the satisficing problem can be solved using numerical methods just like the optimization problem, this approach does not offer compelling benefits over optimization.
arXiv Detail & Related papers (2021-01-06T07:47:13Z)
- Conditional gradient methods for stochastically constrained convex minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
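For orientation, a plain conditional gradient (Frank-Wolfe) iteration on the probability simplex looks like the hedged Python sketch below; the paper's methods add stochastic constraint sampling, variance reduction, and smoothing on top of such steps. The names and toy objective are illustrative.

```python
def frank_wolfe_simplex(grad, n, iters=2000):
    # Conditional gradient (Frank-Wolfe) on the probability simplex: each
    # step solves a linear minimization oracle (here: the vertex with the
    # smallest gradient coordinate) and moves by a convex combination.
    x = [1.0 / n] * n
    for t in range(iters):
        g = grad(x)
        s = min(range(n), key=lambda i: g[i])   # LMO over simplex vertices
        gamma = 2.0 / (t + 2)                   # standard step-size schedule
        x = [(1 - gamma) * xi for xi in x]
        x[s] += gamma
    return x

# Toy: minimize ||x - c||^2 over the simplex, with c itself in the simplex.
c = [0.5, 0.3, 0.2]
x = frank_wolfe_simplex(lambda x: [2 * (xi - ci) for xi, ci in zip(x, c)], n=3)
print([round(v, 2) for v in x])  # close to [0.5, 0.3, 0.2]
```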
arXiv Detail & Related papers (2020-07-07T21:26:35Z)