SaDe: Learning Models that Provably Satisfy Domain Constraints
- URL: http://arxiv.org/abs/2112.00552v1
- Date: Wed, 1 Dec 2021 15:18:03 GMT
- Title: SaDe: Learning Models that Provably Satisfy Domain Constraints
- Authors: Kshitij Goyal, Sebastijan Dumancic, Hendrik Blockeel
- Abstract summary: We present a machine learning approach that can handle a wide variety of constraints, and guarantee that these constraints will be satisfied by the model even on unseen data.
We cast machine learning as a maximum satisfiability problem, and solve it using a novel algorithm SaDe which combines constraint satisfaction with gradient descent.
- Score: 16.46852109556965
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With increasing real-world applications of machine learning, models are often
required to comply with certain domain-based requirements, e.g., safety
guarantees in aircraft systems or legal constraints in a loan-approval model. A
natural way to represent these properties is in the form of constraints.
Including such constraints in machine learning is typically done by means
of regularization, which does not guarantee satisfaction of the constraints. In
this paper, we present a machine learning approach that can handle a wide
variety of constraints, and guarantee that these constraints will be satisfied
by the model even on unseen data. We cast machine learning as a maximum
satisfiability problem, and solve it using a novel algorithm SaDe which
combines constraint satisfaction with gradient descent. We demonstrate on three
use cases that this approach learns models that provably satisfy the given
constraints.
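The abstract contrasts regularization (soft penalties, no guarantee) with learning under hard constraints. As a minimal illustrative sketch only, and not the paper's actual SaDe/MaxSAT algorithm, the following hypothetical example shows the difference on a 1-D linear model: plain gradient descent may end at an infeasible solution, while projecting onto the constraint set after every update guarantees feasibility by construction.

```python
# Illustrative sketch only: SaDe itself casts learning as maximum
# satisfiability; here we merely contrast soft (unconstrained) vs.
# hard (projection-based) enforcement of a domain constraint on a
# 1-D linear model. All names and data below are hypothetical.

def train(xs, ys, project=False, lr=0.01, steps=500):
    """Fit y ~ w*x by gradient descent on squared error.

    If project=True, clamp w to the constraint set {w >= 0}
    after every update, so the constraint holds by construction.
    """
    w = -1.0  # deliberately infeasible start
    n = len(xs)
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        w -= lr * grad
        if project:
            w = max(w, 0.0)  # hard constraint: projection onto w >= 0
    return w

# Data whose best unconstrained fit is slightly negative.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [-0.1, -0.2, -0.3, -0.4]

w_soft = train(xs, ys, project=False)  # converges to w < 0 (infeasible)
w_hard = train(xs, ys, project=True)   # guaranteed w >= 0
```

The key point mirrors the paper's motivation: the soft variant minimizes loss but can violate the domain constraint, whereas the hard variant satisfies it on every possible input, not just the training data.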
Related papers
- Near-Optimal Solutions of Constrained Learning Problems [85.48853063302764]
In machine learning systems, the need to curtail their behavior has become increasingly apparent.
This is evidenced by recent advances toward developing models that satisfy robustness requirements via dual methods.
Our results show that rich parametrizations effectively mitigate the difficulties of these constrained learning problems.
arXiv Detail & Related papers (2024-03-18T14:55:45Z) - Deep Neural Network for Constraint Acquisition through Tailored Loss Function [0.0]
The significance of learning constraints from data is underscored by its potential applications in real-world problem-solving.
This work introduces a novel Deep Neural Network (DNN) approach based on Symbolic Regression.
arXiv Detail & Related papers (2024-03-04T13:47:33Z) - Neural Fields with Hard Constraints of Arbitrary Differential Order [61.49418682745144]
We develop a series of approaches for enforcing hard constraints on neural fields.
The constraints can be specified as a linear operator applied to the neural field and its derivatives.
Our approaches are demonstrated in a wide range of real-world applications.
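The Neural Fields entry above enforces hard constraints specified as linear operators on the model. As a hedged, generic sketch (an orthogonal projection onto a hyperplane, not that paper's actual method), here is how a single linear equality constraint a^T w = b can be made to hold exactly on a parameter vector:

```python
# Hedged sketch: one generic way to enforce a *hard* linear
# constraint a^T w = b on parameters, via orthogonal projection.
# This illustrates the idea of linear-operator constraints; it is
# not the specific construction used in the paper above.

def project_affine(w, a, b):
    """Orthogonally project w onto the hyperplane {w : a^T w = b}."""
    dot_aw = sum(ai * wi for ai, wi in zip(a, w))
    dot_aa = sum(ai * ai for ai in a)
    scale = (dot_aw - b) / dot_aa
    return [wi - scale * ai for wi, ai in zip(w, a)]

a = [1.0, 1.0, 1.0]   # hypothetical constraint: weights must sum...
b = 1.0               # ...to exactly 1 (a simplex-style condition)
w = [0.5, 0.7, 0.4]   # violates the constraint (sums to 1.6)

w_proj = project_affine(w, a, b)
# After projection the constraint holds exactly (up to float error).
```

Because the projection is itself a linear map, it can be applied after every optimizer step without affecting differentiability of the rest of the pipeline.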
arXiv Detail & Related papers (2023-06-15T08:33:52Z) - Resilient Constrained Learning [94.27081585149836]
This paper presents a constrained learning approach that adapts the requirements while simultaneously solving the learning task.
We call this approach resilient constrained learning after the term used to describe ecological systems that adapt to disruptions by modifying their operation.
arXiv Detail & Related papers (2023-06-04T18:14:18Z) - DeepSaDe: Learning Neural Networks that Guarantee Domain Constraint Satisfaction [8.29487992932196]
We present an approach to train neural networks that can enforce a wide variety of domain constraints and guarantee that every possible prediction satisfies them.
arXiv Detail & Related papers (2023-03-02T10:40:50Z) - Calibrated Data-Dependent Constraints with Exact Satisfaction Guarantees [46.94549066382216]
We consider the task of training machine learning models with data-dependent constraints.
We reformulate data-dependent constraints so that they are calibrated: enforcing the reformulated constraints guarantees that their expected value counterparts are satisfied with a user-prescribed probability.
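The "calibrated constraints" idea above tightens an empirical constraint so that its expected-value counterpart holds with user-prescribed probability. As a hedged illustration using a standard Hoeffding bound for losses in [0, 1] (that paper's calibration machinery is more general), the tightening margin can be computed like this:

```python
# Hedged illustration of calibrated data-dependent constraints:
# shrink the empirical threshold by a concentration margin so that
# the *expected-value* constraint holds with probability >= 1 - delta.
# Uses a standard Hoeffding bound for i.i.d. losses in [0, 1]; the
# paper's actual calibration is more general than this sketch.

import math

def calibrated_threshold(threshold, n, delta):
    """Empirical threshold such that, if the empirical mean of n
    i.i.d. [0,1]-valued losses stays below it, the true expected
    loss is below `threshold` with probability at least 1 - delta."""
    margin = math.sqrt(math.log(1.0 / delta) / (2.0 * n))
    return threshold - margin

# Require expected loss <= 0.2 with 95% confidence from 2000 samples:
t_emp = calibrated_threshold(0.2, n=2000, delta=0.05)
# Enforce `mean(losses) <= t_emp` during training instead of 0.2.
```

The design point: enforcing the tightened empirical constraint during training is what converts a statement about the sample into a high-probability guarantee about the expectation.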
arXiv Detail & Related papers (2023-01-15T21:41:40Z) - ROAD-R: The Autonomous Driving Dataset with Logical Requirements [54.608762221119406]
We introduce the ROad event Awareness dataset with logical Requirements (ROAD-R).
ROAD-R is the first publicly available dataset for autonomous driving with requirements expressed as logical constraints.
We show that it is possible to exploit them to create models that (i) have better performance and (ii) are guaranteed to be compliant with the requirements themselves.
arXiv Detail & Related papers (2022-10-04T13:22:19Z) - Constrained Machine Learning: The Bagel Framework [5.945320097465419]
Constrained machine learning problems are problems where learned models have to both be accurate and respect constraints.
The goal of this paper is to broaden the modeling capacity of constrained machine learning problems by incorporating existing work from optimization.
Because machine learning has specific requirements, we also propose an extended table constraint to split the space of hypotheses.
arXiv Detail & Related papers (2021-12-02T10:10:20Z) - Sufficiently Accurate Model Learning for Planning [119.80502738709937]
This paper introduces the constrained Sufficiently Accurate model learning approach.
It provides examples of such problems and presents a theorem bounding how close approximate solutions can be to the optimum.
The approximate solution quality will depend on the function parameterization, loss and constraint function smoothness, and the number of samples in model learning.
arXiv Detail & Related papers (2021-02-11T16:27:31Z) - An Integer Linear Programming Framework for Mining Constraints from Data [81.60135973848125]
We present a general framework for mining constraints from data.
In particular, we consider the inference in structured output prediction as an integer linear programming (ILP) problem.
We show that our approach can learn to solve 9x9 Sudoku puzzles and minimal spanning tree problems from examples without providing the underlying rules.
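The ILP-framework entry above treats inference in structured prediction as constrained optimization. As a hedged, toy-scale sketch (solving a tiny instance by enumeration rather than by calling an actual ILP solver, which is what that paper uses), the shape of such an inference problem looks like this:

```python
# Hedged sketch: inference in structured prediction as constrained
# optimization. The paper formulates this as an integer linear
# program (ILP); for illustration we solve a tiny instance by
# exhaustive enumeration instead of invoking an ILP solver.

from itertools import product

def constrained_argmax(scores, k):
    """Pick the 0/1 assignment maximizing sum(score_i * y_i)
    subject to the linear constraint sum(y_i) == k."""
    best, best_val = None, float("-inf")
    for y in product([0, 1], repeat=len(scores)):
        if sum(y) != k:          # linear equality constraint
            continue
        val = sum(s * yi for s, yi in zip(scores, y))
        if val > best_val:
            best, best_val = y, val
    return best

# Choose exactly 2 of 4 labels given these hypothetical model scores:
y_star = constrained_argmax([3.0, -1.0, 2.0, 0.5], k=2)
```

Enumeration is exponential in the number of variables; the point of the ILP formulation is that a solver handles the same objective-plus-linear-constraints structure at realistic scale (e.g., 9x9 Sudoku, as in the paper).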
arXiv Detail & Related papers (2020-06-18T20:09:53Z) - Teaching the Old Dog New Tricks: Supervised Learning with Constraints [18.88930622054883]
Adding constraint support in Machine Learning has the potential to address outstanding issues in data-driven AI systems.
Existing approaches typically apply constrained optimization techniques to ML training, enforce constraint satisfaction by adjusting the model design, or use constraints to correct the output.
Here, we investigate a different, complementary, strategy based on "teaching" constraint satisfaction to a supervised ML method via the direct use of a state-of-the-art constraint solver.
arXiv Detail & Related papers (2020-02-25T09:47:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.