Constrained Machine Learning: The Bagel Framework
- URL: http://arxiv.org/abs/2112.01088v1
- Date: Thu, 2 Dec 2021 10:10:20 GMT
- Title: Constrained Machine Learning: The Bagel Framework
- Authors: Guillaume Perez, Sebastian Ament, Carla Gomes, Arnaud Lallouet
- Abstract summary: Constrained machine learning problems are problems where learned models have to both be accurate and respect constraints.
The goal of this paper is to broaden the modeling capacity of constrained machine learning problems by incorporating existing work from combinatorial optimization.
Because machine learning has specific requirements, we also propose an extended table constraint to split the space of hypotheses.
- Score: 5.945320097465419
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning models are widely used for real-world applications, such as
document analysis and vision. Constrained machine learning problems are
problems where learned models have to both be accurate and respect constraints.
For continuous convex constraints, many works have been proposed, but learning
under combinatorial constraints is still a hard problem. The goal of this paper
is to broaden the modeling capacity of constrained machine learning problems by
incorporating existing work from combinatorial optimization. We propose first a
general framework called BaGeL (Branch, Generate and Learn) which applies
Branch and Bound to constrained learning problems where a learning problem is
generated and trained at each node until only valid models are obtained.
Because machine learning has specific requirements, we also propose an extended
table constraint to split the space of hypotheses. We validate the approach on
two examples: a linear regression under configuration constraints and a
non-negative matrix factorization with prior knowledge for latent semantic
analysis.
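To illustrate the branch-and-bound idea behind BaGeL on the linear regression example, the following sketch searches for an integer-valued slope in a one-dimensional least-squares fit: each node trains the relaxed model on an interval of slopes, an invalid model triggers a split around the relaxed optimum, and nodes whose loss cannot beat the incumbent are pruned. The integer-slope constraint and the splitting rule are illustrative assumptions, not the paper's actual configuration-constraint encoding.

```python
import math

# Toy branch-and-bound in the spirit of BaGeL (Branch, Generate and Learn):
# fit y ~ w*x by least squares under the combinatorial constraint that the
# slope w must be an integer. Constraint and splitting rule are illustrative.

def fit(xs, ys, lo, hi):
    """Generate-and-learn step: relaxed least-squares slope clipped to [lo, hi]."""
    w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return min(max(w, lo), hi)

def loss(xs, ys, w):
    return sum((y - w * x) ** 2 for x, y in zip(xs, ys))

def bagel_sketch(xs, ys, lo=-10.0, hi=10.0):
    best_w, best_loss = None, float("inf")
    stack = [(lo, hi)]                         # root node: the whole slope range
    while stack:
        a, b = stack.pop()
        if a > b:
            continue                           # empty node
        w = fit(xs, ys, a, b)                  # train this node's learning problem
        l = loss(xs, ys, w)
        if l >= best_loss:
            continue                           # bound: prune dominated nodes
        if abs(w - round(w)) < 1e-9:
            best_w, best_loss = round(w), l    # valid model: constraint satisfied
        else:
            stack.append((a, math.floor(w)))   # branch below the relaxed optimum
            stack.append((math.ceil(w), b))    # branch above it
    return best_w, best_loss
```

Because the node loss is the minimum of a quadratic over the node's interval, it is a valid lower bound, so pruning never discards the optimal valid model.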
Related papers
- Near-Optimal Solutions of Constrained Learning Problems [85.48853063302764]
In machine learning systems, the need to curtail their behavior has become increasingly apparent.
This is evidenced by recent advances toward developing models that satisfy requirements such as robustness.
Our results show that rich parametrizations effectively mitigate the difficulty of non-convex, infinite-dimensional constrained learning problems.
arXiv Detail & Related papers (2024-03-18T14:55:45Z) - A General Framework for Learning from Weak Supervision [93.89870459388185]
This paper introduces a general framework for learning from weak supervision (GLWS) with a novel algorithm.
Central to GLWS is an Expectation-Maximization (EM) formulation, adeptly accommodating various weak supervision sources.
We also present an advanced algorithm that significantly simplifies the EM computational demands.
arXiv Detail & Related papers (2024-02-02T21:48:50Z) - Learning to Learn in Interactive Constraint Acquisition [7.741303298648302]
In Constraint Acquisition (CA), the goal is to assist the user by automatically learning the model.
In (inter)active CA, this is done by interactively posting queries to the user.
We propose to use probabilistic classification models to guide interactive CA to generate more promising queries.
arXiv Detail & Related papers (2023-12-17T19:12:33Z) - Symmetric Tensor Networks for Generative Modeling and Constrained Combinatorial Optimization [72.41480594026815]
Constrained optimization problems abound in industry, from portfolio optimization to logistics.
One of the major roadblocks in solving these problems is the presence of non-trivial hard constraints which limit the valid search space.
In this work, we encode arbitrary integer-valued equality constraints of the form Ax=b directly into U(1) symmetric tensor networks (TNs) and leverage their applicability as quantum-inspired generative models.
arXiv Detail & Related papers (2022-11-16T18:59:54Z) - Efficient lifting of symmetry breaking constraints for complex combinatorial problems [9.156939957189502]
This work extends the learning framework and implementation of a model-based approach for Answer Set Programming.
In particular, we incorporate a new conflict analysis algorithm in the Inductive Logic Programming system ILASP.
arXiv Detail & Related papers (2022-05-14T20:42:13Z) - SaDe: Learning Models that Provably Satisfy Domain Constraints [16.46852109556965]
We present a machine learning approach that can handle a wide variety of constraints, and guarantee that these constraints will be satisfied by the model even on unseen data.
We cast machine learning as a maximum satisfiability problem, and solve it using a novel algorithm SaDe which combines constraint satisfaction with gradient descent.
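As a toy illustration of the guarantee SaDe aims for (constraints holding for every input, not just the training points) rather than of its actual maximum-satisfiability algorithm: for a linear model f(x) = w*x + b, the domain constraint f(x) >= 0 for all x in [0, 1] reduces to the parameter conditions b >= 0 and w + b >= 0, so repairing the parameters after every gradient step keeps the model valid on unseen data as well.

```python
# Toy sketch: fit f(x) = w*x + b while guaranteeing f(x) >= 0 for ALL x in
# [0, 1]. For a linear model that universally quantified constraint reduces
# to b >= 0 and w + b >= 0 on the parameters. This is only an illustration;
# SaDe itself solves a maximum-satisfiability formulation, not projected
# gradient descent.

def project(w, b):
    """Repair the parameters into the feasible set (simple feasibility fix,
    not an exact Euclidean projection)."""
    b = max(b, 0.0)
    if w + b < 0.0:
        w = -b
    return w, b

def train(xs, ys, lr=0.05, steps=500):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        gw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        gb = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w, b = project(w - lr * gw, b - lr * gb)  # every iterate stays valid
    return w, b
```

Since feasibility is enforced on the parameters rather than checked on samples, the returned model satisfies the constraint on every input in the domain, which is the property the summary above highlights.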
arXiv Detail & Related papers (2021-12-01T15:18:03Z) - Constrained Learning with Non-Convex Losses [119.8736858597118]
Though learning has become a core technology of modern information processing, there is now ample evidence that it can lead to biased, unsafe, and prejudiced solutions.
arXiv Detail & Related papers (2021-03-08T23:10:33Z) - Sufficiently Accurate Model Learning for Planning [119.80502738709937]
This paper introduces the constrained Sufficiently Accurate model learning approach.
It provides examples of such problems, and presents a theorem on how close some approximate solutions can be.
The approximate solution quality will depend on the function parameterization, loss and constraint function smoothness, and the number of samples in model learning.
arXiv Detail & Related papers (2021-02-11T16:27:31Z) - An Integer Linear Programming Framework for Mining Constraints from Data [81.60135973848125]
We present a general framework for mining constraints from data.
In particular, we consider the inference in structured output prediction as an integer linear programming (ILP) problem.
We show that our approach can learn to solve 9x9 Sudoku puzzles and minimal spanning tree problems from examples without providing the underlying rules.
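A minimal sketch of the constraint-mining idea, under the simplifying assumption that candidate constraints are linear inequalities over binary output vectors and that the mined bound is the tightest value observed in the data (the paper's ILP formulation is more general than this rule):

```python
# Hypothetical sketch of mining inequality constraints a.y <= b from example
# outputs: for each candidate coefficient vector, keep the tightest upper
# bound that every training example satisfies. The candidate forms and the
# max-over-examples rule are simplifying assumptions, not the paper's exact
# ILP formulation.

def mine_bounds(examples, forms):
    """examples: list of binary output vectors; forms: {name: coefficients}.
    Returns the tightest upper bound per form satisfied by every example."""
    return {
        name: max(sum(c * y for c, y in zip(coeffs, ex)) for ex in examples)
        for name, coeffs in forms.items()
    }

# e.g. three observed outputs over three binary variables, one candidate form:
examples = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]
bounds = mine_bounds(examples, {"total_ones": [1, 1, 1]})
# every example satisfies total_ones <= bounds["total_ones"]
```

By construction each mined bound is satisfied by all training examples, and no smaller bound would be, which is the sense in which the constraint is "mined from data".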
arXiv Detail & Related papers (2020-06-18T20:09:53Z) - Teaching the Old Dog New Tricks: Supervised Learning with Constraints [18.88930622054883]
Adding constraint support in Machine Learning has the potential to address outstanding issues in data-driven AI systems.
Existing approaches typically apply constrained optimization techniques to ML training, enforce constraint satisfaction by adjusting the model design, or use constraints to correct the output.
Here, we investigate a different, complementary, strategy based on "teaching" constraint satisfaction to a supervised ML method via the direct use of a state-of-the-art constraint solver.
arXiv Detail & Related papers (2020-02-25T09:47:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.