Generalizing Constraint Models in Constraint Acquisition
- URL: http://arxiv.org/abs/2412.14950v1
- Date: Thu, 19 Dec 2024 15:31:29 GMT
- Title: Generalizing Constraint Models in Constraint Acquisition
- Authors: Dimos Tsouros, Senne Berden, Steven Prestwich, Tias Guns
- Abstract summary: Constraint Acquisition (CA) aims to widen the use of constraint programming by assisting users in the modeling process.
Most CA methods suffer from a significant drawback: they learn a single set of individual constraints for a specific problem instance, but cannot generalize these constraints to the parameterized constraint specifications of the problem.
We propose GenCon, a novel approach to learn parameterized constraint models capable of modeling varying instances of the same problem.
- Score: 6.305123652677644
- Abstract: Constraint Acquisition (CA) aims to widen the use of constraint programming by assisting users in the modeling process. However, most CA methods suffer from a significant drawback: they learn a single set of individual constraints for a specific problem instance, but cannot generalize these constraints to the parameterized constraint specifications of the problem. In this paper, we address this limitation by proposing GenCon, a novel approach to learn parameterized constraint models capable of modeling varying instances of the same problem. To achieve this generalization, we make use of statistical learning techniques at the level of individual constraints. Specifically, we propose to train a classifier to predict, for any possible constraint and parameterization, whether the constraint belongs to the problem. We then show how, for some classes of classifiers, we can extract decision rules to construct interpretable constraint specifications. This enables the generation of ground constraints for any parameter instantiation. Additionally, we present a generate-and-test approach that can be used with any classifier, to generate the ground constraints on the fly. Our empirical results demonstrate that our approach achieves high accuracy and is robust to noise in the input instances.
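To make the generate-and-test idea concrete, here is a minimal sketch (not the authors' implementation): a decision-tree classifier is trained on candidate constraints from one solved instance, then used to generate the ground constraints of a larger instance on the fly. The `!=`-only constraint language, the `featurize` encoding, and the use of scikit-learn are illustrative assumptions.

```python
from itertools import combinations
from sklearn.tree import DecisionTreeClassifier, export_text

def featurize(i, j, n):
    """Encode candidate constraint x[i] != x[j] in an instance with n variables."""
    return [i, j, j - i, n]

# Train on one solved instance (n = 5) where the target model is known:
# here, neighbouring variables must differ (j - i == 1).
n_train = 5
X, y = [], []
for i, j in combinations(range(n_train), 2):
    X.append(featurize(i, j, n_train))
    y.append(int(j - i == 1))

clf = DecisionTreeClassifier(random_state=0).fit(X, y)

def generate_and_test(n):
    """Generate ground constraints for a new instance size n on the fly."""
    return [(i, j) for i, j in combinations(range(n), 2)
            if clf.predict([featurize(i, j, n)])[0] == 1]

print(generate_and_test(8))   # [(0, 1), (1, 2), ..., (6, 7)]

# For tree classifiers the decision rules can be read off directly, mirroring
# the paper's point about extracting interpretable constraint specifications.
print(export_text(clf, feature_names=["i", "j", "j_minus_i", "n"]))
```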
Related papers
- You-Only-Randomize-Once: Shaping Statistical Properties in Constraint-based PCG [3.581471126368696]
We introduce You-Only-Randomize-Once (YORO) pre-rolling, a method for crafting a decision variable ordering for a constraint solver.
We show that this technique effectively controls the statistics of tile-grid outputs generated by several off-the-shelf SAT solvers.
arXiv Detail & Related papers (2024-09-01T20:43:55Z)
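A toy sketch of the pre-rolling idea above, with a naive backtracking solver as a stand-in for the off-the-shelf SAT solvers used in the paper: one up-front shuffle fixes the decision ordering, and with it which of the symmetric solutions the otherwise deterministic search returns first.

```python
import random

def solve(clauses, order, assignment=None):
    """Tiny backtracking SAT solver that branches on variables in `order`."""
    assignment = assignment or {}
    for clause in clauses:                 # fail early on any falsified clause
        if all(v in assignment and assignment[v] != p for v, p in clause):
            return None
    free = [v for v in order if v not in assignment]
    if not free:
        return assignment                  # full, non-falsifying assignment
    for value in (True, False):
        result = solve(clauses, order, {**assignment, free[0]: value})
        if result is not None:
            return result
    return None

clauses = [[(0, True), (1, True)],         # x0 or x1
           [(0, False), (1, False)]]       # not both
order = [0, 1]
random.shuffle(order)                      # the one and only randomization
print(solve(clauses, order))               # {0: True, 1: False} or {1: True, 0: False}
```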
- Online Constraint Tightening in Stochastic Model Predictive Control: A Regression Approach [49.056933332667114]
No analytical solutions exist for chance-constrained optimal control problems.
We propose a data-driven approach for learning the constraint-tightening parameters online during control.
Our approach yields constraint-tightening parameters that tightly satisfy the chance constraints.
arXiv Detail & Related papers (2023-10-04T16:22:02Z)
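As a hedged illustration of the entry above, the stochastic-approximation update below tunes a tightening margin until the empirical violation rate matches the allowed level; it is a simple stand-in for the paper's regression approach, not a reproduction of it.

```python
import random

random.seed(0)
delta, theta = 0.05, 0.0    # allowed violation probability, tightening margin
for t in range(1, 20001):
    w = random.gauss(0.0, 1.0)       # disturbance acting on the nominal state
    violated = w > theta             # state driven to the tightened bound
    # Tighten after violations, relax otherwise, with a decaying step size.
    theta += (1.0 / t ** 0.5) * ((1.0 if violated else 0.0) - delta)

print(round(theta, 2))   # drifts toward the 0.95 normal quantile, about 1.64
```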
- Using Integer Constraint Solving in Reuse Based Requirements Engineering [0.0]
Product Lines (PL) have proved an effective approach to reuse-based configurations.
It is now widely acknowledged that a product can be considered a constraint satisfaction problem.
It is natural to consider constraint programming as a first choice candidate to specify constraints on PL.
This paper proposes to further explore the use of integer constraint programming to specify PL constraints.
arXiv Detail & Related papers (2023-09-28T09:20:07Z)
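A minimal sketch of the "product as constraint satisfaction problem" view from the entry above; the feature variables, domains, and rules are invented for illustration.

```python
from itertools import product

# Feature variables with integer domains.
domains = {"engine": [0, 1],          # 0 = petrol, 1 = electric
           "battery": [0, 40, 60],    # kWh
           "trim": [1, 2, 3]}

constraints = [
    lambda c: (c["engine"] == 1) == (c["battery"] > 0),  # battery iff electric
    lambda c: c["trim"] < 3 or c["engine"] == 1,         # top trim: electric only
]

names = list(domains)
valid = [dict(zip(names, values))
         for values in product(*domains.values())
         if all(check(dict(zip(names, values))) for check in constraints)]
print(len(valid), valid[0])   # 8 valid products in this toy line
```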
- Toward Unified Controllable Text Generation via Regular Expression Instruction [56.68753672187368]
Our paper introduces Regular Expression Instruction (REI), which utilizes an instruction-based mechanism to fully exploit regular expressions' advantages to uniformly model diverse constraints.
Our method only requires fine-tuning on medium-scale language models or few-shot, in-context learning on large language models, and requires no further adjustment when applied to various constraint combinations.
arXiv Detail & Related papers (2023-09-19T09:05:14Z)
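A small sketch of the regex-as-constraint idea from the entry above: here a simple filter over candidate outputs stands in for the instruction-tuned models, which learn to satisfy the pattern directly.

```python
import re

# The constraint: the output must name a color from a fixed set, then stop.
pattern = re.compile(r"The color is (red|green|blue)\.")

def satisfies(text: str) -> bool:
    return pattern.fullmatch(text) is not None

candidates = ["The color is purple.", "The color is green."]
print([c for c in candidates if satisfies(c)])   # keeps only the second
```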
- Compositional Diffusion-Based Continuous Constraint Solvers [98.1702285470628]
This paper introduces an approach for learning to solve continuous constraint satisfaction problems (CCSP) in robotic reasoning and planning.
By contrast, our model, the compositional diffusion continuous constraint solver (Diffusion-CCSP), derives global solutions to CCSPs.
arXiv Detail & Related papers (2023-09-02T15:20:36Z)
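The entry above composes learned diffusion-based solvers; as a deliberately simple stand-in (no diffusion model involved), this sketch composes continuous constraints by gradient descent on their total squared violation.

```python
violations = [
    lambda x, y: max(0.0, x * x + y * y - 1.0),   # stay inside the unit disc
    lambda x, y: abs(x - y),                      # lie on the diagonal x == y
]

def total(x, y):
    return sum(v(x, y) ** 2 for v in violations)

# Minimize the composed violation with finite-difference gradient descent.
x, y, eps, lr = 0.9, -0.7, 1e-6, 0.1
for _ in range(2000):
    gx = (total(x + eps, y) - total(x - eps, y)) / (2 * eps)
    gy = (total(x, y + eps) - total(x, y - eps)) / (2 * eps)
    x, y = x - lr * gx, y - lr * gy
print(round(x, 2), round(y, 2))   # a point on the diagonal, inside the disc
```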
- On Regularization and Inference with Label Constraints [62.60903248392479]
We compare two strategies for encoding label constraints in a machine learning pipeline: regularization with constraints and constrained inference.
For regularization, we show that it narrows the generalization gap by precluding models that are inconsistent with the constraints.
For constrained inference, we show that it reduces the population risk by correcting a model's violation, and hence turns the violation into an advantage.
arXiv Detail & Related papers (2023-07-08T03:39:22Z)
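A toy contrast of the two strategies from the entry above, assuming two binary labels and the constraint that exactly one is active; all numbers are illustrative.

```python
pA, pB = 0.8, 0.6   # model's independent probabilities that labels A, B fire
joint = {(a, b): (pA if a else 1 - pA) * (pB if b else 1 - pB)
         for a in (0, 1) for b in (0, 1)}

# Regularization: during training, penalize mass on outputs that violate
# "exactly one of A, B is active", precluding inconsistent models.
penalty = 10.0 * (joint[0, 0] + joint[1, 1])     # added to the training loss

# Constrained inference: at test time, renormalize over consistent outputs,
# turning the model's violation into a corrected prediction.
z = joint[0, 1] + joint[1, 0]
posterior = {k: joint[k] / z for k in [(0, 1), (1, 0)]}
print(round(penalty, 2), {k: round(v, 2) for k, v in posterior.items()})
```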
- Symmetric Tensor Networks for Generative Modeling and Constrained Combinatorial Optimization [72.41480594026815]
Constrained optimization problems abound in industry, from portfolio optimization to logistics.
One of the major roadblocks in solving these problems is the presence of non-trivial hard constraints which limit the valid search space.
In this work, we encode arbitrary integer-valued equality constraints of the form Ax=b directly into U(1)-symmetric tensor networks (TNs) and leverage their applicability as quantum-inspired generative models.
arXiv Detail & Related papers (2022-11-16T18:59:54Z)
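For contrast with the entry above, which builds Ax=b directly into the generative model, this hedged baseline rejection-samples bitstrings and discards the infeasible ones, illustrating the waste that the symmetric encoding removes.

```python
import random

random.seed(1)
A = [[1, 1, 0, 0],        # x0 + x1 == 1: pick one item from the first pair
     [0, 0, 1, 1]]        # x2 + x3 == 1: and one from the second
b = [1, 1]

def satisfies(x):
    return all(sum(a * xi for a, xi in zip(row, x)) == bi
               for row, bi in zip(A, b))

samples, valid = 0, []
while len(valid) < 5:
    samples += 1
    x = [random.randint(0, 1) for _ in range(4)]
    if satisfies(x):       # most samples are rejected -- exactly the waste a
        valid.append(x)    # constraint-aware generative model avoids
print(samples, valid)
```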
- Constrained Machine Learning: The Bagel Framework [5.945320097465419]
Constrained machine learning problems are problems where learned models have to both be accurate and respect constraints.
The goal of this paper is to broaden the modeling capacity of constrained machine learning problems by incorporating existing work from optimization.
Because machine learning has specific requirements, we also propose an extended table constraint to split the space of hypotheses.
arXiv Detail & Related papers (2021-12-02T10:10:20Z)
- SaDe: Learning Models that Provably Satisfy Domain Constraints [16.46852109556965]
We present a machine learning approach that can handle a wide variety of constraints, and guarantee that these constraints will be satisfied by the model even on unseen data.
We cast machine learning as a maximum satisfiability problem, and solve it using a novel algorithm SaDe which combines constraint satisfaction with gradient descent.
arXiv Detail & Related papers (2021-12-01T15:18:03Z)
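A crude sketch in the spirit of the entry above (not the SaDe algorithm): gradient steps are accepted only when the candidate model satisfies the domain constraint, so every intermediate model is feasible.

```python
# Toy 1-D regression with the domain constraint "f is non-decreasing in x",
# which for f(x) = w*x + b means w >= 0.
data = [(0.0, 0.2), (0.5, 0.55), (1.0, 0.8)]
w, b, lr = 0.0, 0.0, 0.2

def satisfies(w):
    return w >= 0.0

for _ in range(500):
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    cand = w - lr * gw
    # Accept the step only if the candidate satisfies the constraint; else
    # clamp to the feasible boundary (a crude stand-in for SaDe's
    # maximum-satisfiability machinery).
    w = cand if satisfies(cand) else 0.0
    b = b - lr * gb
print(round(w, 2), round(b, 2))   # w stays feasible on every iteration
```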
- An Integer Linear Programming Framework for Mining Constraints from Data [81.60135973848125]
We present a general framework for mining constraints from data.
In particular, we consider the inference in structured output prediction as an integer linear programming (ILP) problem.
We show that our approach can learn to solve 9x9 Sudoku puzzles and minimal spanning tree problems from examples without providing the underlying rules.
arXiv Detail & Related papers (2020-06-18T20:09:53Z)
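A minimal sketch of the constraint-mining idea from the entry above: candidate linear constraints are kept only if every training example satisfies them. The candidates and data are invented; the paper embeds this inside ILP inference.

```python
from itertools import combinations

# Each example is a 0/1 assignment to four output variables.
examples = [(1, 0, 1, 0), (0, 1, 0, 1), (1, 0, 0, 1)]

# Candidate constraints: x_i + x_j <= 1 for every variable pair.
candidates = [(i, j) for i, j in combinations(range(4), 2)]
mined = [(i, j) for i, j in candidates
         if all(ex[i] + ex[j] <= 1 for ex in examples)]
print(mined)   # pairwise mutual exclusions consistent with every example
```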
- Teaching the Old Dog New Tricks: Supervised Learning with Constraints [18.88930622054883]
Adding constraint support in Machine Learning has the potential to address outstanding issues in data-driven AI systems.
Existing approaches typically apply constrained optimization techniques to ML training, enforce constraint satisfaction by adjusting the model design, or use constraints to correct the output.
Here, we investigate a different, complementary, strategy based on "teaching" constraint satisfaction to a supervised ML method via the direct use of a state-of-the-art constraint solver.
arXiv Detail & Related papers (2020-02-25T09:47:39Z)
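A hedged sketch of the "teaching" strategy from the entry above: a brute-force search stands in for the state-of-the-art constraint solver, projecting each label vector to the nearest feasible one before supervised training.

```python
from itertools import product

def nearest_feasible(y, feasible):
    """Solver stand-in: the feasible labeling closest to y in Hamming distance."""
    return min(feasible, key=lambda f: sum(a != b for a, b in zip(f, y)))

feasible = [y for y in product((0, 1), repeat=3) if sum(y) == 1]   # one-hot
raw_labels = [(1, 1, 0), (0, 0, 0), (0, 1, 0)]
teacher_labels = [nearest_feasible(y, feasible) for y in raw_labels]
print(teacher_labels)   # constraint-consistent targets for the supervised learner
```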
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.