DeepSaDe: Learning Neural Networks that Guarantee Domain Constraint Satisfaction
- URL: http://arxiv.org/abs/2303.01141v3
- Date: Thu, 14 Dec 2023 12:57:59 GMT
- Title: DeepSaDe: Learning Neural Networks that Guarantee Domain Constraint Satisfaction
- Authors: Kshitij Goyal, Sebastijan Dumancic, Hendrik Blockeel
- Abstract summary: We present an approach to train neural networks that can enforce a wide variety of domain constraints and guarantee that the constraint is satisfied by all possible predictions.
- Score: 8.29487992932196
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As machine learning models, specifically neural networks, become increasingly popular, there are concerns regarding their trustworthiness, especially in safety-critical applications; for example, the actions of an autonomous vehicle must be safe. There are approaches that can train neural networks with such domain requirements enforced as constraints, but they either cannot guarantee that the constraint is satisfied by all possible predictions (even on unseen data) or are limited in the types of constraints they can enforce. In this paper, we present an approach to train neural networks that can enforce a wide variety of constraints and guarantee that the constraint is satisfied by all possible predictions. The approach builds on earlier work in which learning linear models is formulated as a constraint satisfaction problem (CSP). To make this idea applicable to neural networks, two crucial new elements are added: constraint propagation over the network layers, and weight updates based on a mix of gradient descent and CSP solving. Evaluation on various machine learning tasks demonstrates that our approach is flexible enough to enforce a wide variety of domain constraints and is able to guarantee them in neural networks.
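The two new elements can be illustrated with a small, heavily simplified sketch (not the authors' implementation; the function names and the use of interval arithmetic are illustrative assumptions): bounds on the hidden activations are propagated layer by layer for a box of inputs, and a gradient step on the output layer is only accepted, after backtracking, if the worst-case prediction over that box still satisfies an upper-bound constraint.

```python
# Illustrative sketch only (not the authors' implementation; all names are
# assumptions): propagate an input box through ReLU layers with interval
# arithmetic, then accept a gradient step on the linear output layer only if
# the worst-case prediction over that box still satisfies y <= y_max.
import numpy as np

def interval_forward(layers, lo, hi):
    """Propagate elementwise bounds [lo, hi] through (W, b) ReLU layers."""
    for W, b in layers:
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)   # ReLU
    return lo, hi

def safe_output_step(w, b, grad_w, grad_b, h_lo, h_hi, y_max, lr=0.1):
    """Gradient step on the output layer, backtracking the step size until
    the upper bound of the prediction over the hidden box stays <= y_max."""
    for _ in range(20):
        w_new, b_new = w - lr * grad_w, b - lr * grad_b
        wp, wn = np.maximum(w_new, 0), np.minimum(w_new, 0)
        worst = wp @ h_hi + wn @ h_lo + b_new       # worst-case prediction
        if np.all(worst <= y_max):
            return w_new, b_new
        lr *= 0.5                                   # shrink the step and retry
    return w, b                                     # give up: keep old weights

# Example wiring: bound the last hidden layer over the box [-1, 1]^4, then
# update an initially feasible (all-zero) output layer.
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((8, 4)), np.zeros(8)),
          (rng.standard_normal((8, 8)), np.zeros(8))]
h_lo, h_hi = interval_forward(layers, -np.ones(4), np.ones(4))
w, b = safe_output_step(np.zeros(8), np.zeros(1), rng.standard_normal(8),
                        np.zeros(1), h_lo, h_hi, y_max=1.0)
```

The abstract describes weight updates that mix gradient descent with CSP solving; the backtracking loop above only crudely stands in for that solver-based component and is meant to convey the propagate-then-restrict idea.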
Related papers
- Hard-Constrained Neural Networks with Universal Approximation Guarantees [5.3663546125491735]
HardNet is a framework for constructing neural networks that inherently satisfy hard constraints without sacrificing model capacity.
We show that HardNet retains the universal approximation capabilities of neural networks.
arXiv Detail & Related papers (2024-10-14T17:59:24Z) - Computability of Classification and Deep Learning: From Theoretical Limits to Practical Feasibility through Quantization [53.15874572081944]
We study computability in the deep learning framework from two perspectives.
We show algorithmic limitations in training deep neural networks even in cases where the underlying problem is well-behaved.
Finally, we show that in quantized versions of classification and deep network training, computability restrictions do not arise or can be overcome to a certain degree.
arXiv Detail & Related papers (2024-08-12T15:02:26Z) - LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce positive linear satisfiability constraints into neural networks.
We propose the first differentiable satisfiability layer based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions.
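For context, the classic Sinkhorn algorithm referenced above alternates row and column normalisations so that a positive matrix matches prescribed marginal distributions. A minimal NumPy sketch of that generic building block (not LinSATNet's differentiable multi-set extension; names and values are illustrative):

```python
# Minimal classic Sinkhorn normalisation (generic sketch, not LinSATNet's
# multi-set extension): scale a positive matrix so its rows and columns
# match prescribed marginal distributions.
import numpy as np

def sinkhorn(M, row_marginals, col_marginals, iters=100, eps=1e-9):
    P = np.asarray(M, dtype=float)
    for _ in range(iters):
        P *= (row_marginals / (P.sum(axis=1) + eps))[:, None]  # match rows
        P *= (col_marginals / (P.sum(axis=0) + eps))[None, :]  # match cols
    return P

# Example: project exp(scores) toward uniform row and column marginals.
scores = np.random.rand(4, 4)
P = sinkhorn(np.exp(scores), np.full(4, 0.25), np.full(4, 0.25))
print(P.sum(axis=0), P.sum(axis=1))  # both approach the target marginals
```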
arXiv Detail & Related papers (2024-07-18T22:05:21Z) - The Boundaries of Verifiable Accuracy, Robustness, and Generalisation in Deep Learning [71.14237199051276]
We consider the classical distribution-agnostic framework and algorithms that minimise empirical risk.
We show that there is a large family of tasks for which computing and verifying ideal stable and accurate neural networks is extremely challenging.
arXiv Detail & Related papers (2023-09-13T16:33:27Z) - Neural Fields with Hard Constraints of Arbitrary Differential Order [61.49418682745144]
We develop a series of approaches for enforcing hard constraints on neural fields.
The constraints can be specified as a linear operator applied to the neural field and its derivatives.
Our approaches are demonstrated in a wide range of real-world applications.
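As a generic illustration of a hard linear constraint on a prediction (not the paper's construction for neural fields and their derivatives), one can project the raw output orthogonally onto the affine set defined by the constraint; the function and example below are hypothetical.

```python
# Generic sketch (not the paper's method): enforce a linear equality
# constraint A @ y = b exactly by projecting the raw network output onto
# the affine set {y : A @ y = b}.
import numpy as np

def project_onto_constraint(y, A, b):
    """Orthogonal projection of y onto {y : A y = b} (A with full row rank)."""
    residual = A @ y - b
    correction = A.T @ np.linalg.solve(A @ A.T, residual)
    return y - correction

A = np.array([[1.0, 1.0, 1.0]])    # e.g. force the outputs to sum to 1
b = np.array([1.0])
y_raw = np.array([0.2, 0.5, 0.6])  # unconstrained prediction
y = project_onto_constraint(y_raw, A, b)
print(y, A @ y)                    # A @ y equals [1.0] up to numerical error
```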
arXiv Detail & Related papers (2023-06-15T08:33:52Z) - Smoothness and monotonicity constraints for neural networks using ICEnet [0.0]
We present a novel method for enforcing constraints within deep neural network models.
We show how these models can be trained and provide example applications using real-world datasets.
arXiv Detail & Related papers (2023-05-15T17:14:52Z) - Constrained Empirical Risk Minimization: Theory and Practice [2.4934936799100034]
We present a framework that allows the exact enforcement of constraints on parameterized sets of functions such as Deep Neural Networks (DNNs).
We focus on constraints that are outside the scope of equivariant networks used in Geometric Deep Learning.
As a major example of the framework, we restrict filters of a Convolutional Neural Network (CNN) to be wavelets, and apply these wavelet networks to the task of contour prediction in the medical domain.
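One simple way to realise such a restriction (a sketch only, not the paper's wavelet networks) is to fix a small wavelet basis, e.g. the four 2x2 Haar filters, and learn only the coefficients that mix them into convolution kernels; the PyTorch module below is hypothetical.

```python
# Illustrative sketch (not the paper's wavelet networks): constrain 2x2
# convolution kernels to lie in the span of the fixed Haar wavelet basis,
# so only the mixing coefficients are learned.
import torch
import torch.nn as nn
import torch.nn.functional as F

# The four orthonormal 2x2 Haar filters: LL, LH, HL, HH.
HAAR = 0.5 * torch.tensor([
    [[1.,  1.], [ 1.,  1.]],
    [[1.,  1.], [-1., -1.]],
    [[1., -1.], [ 1., -1.]],
    [[1., -1.], [-1.,  1.]],
])  # shape (4, 2, 2)

class HaarConv2d(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # One learnable coefficient per (output channel, input channel, basis filter).
        self.coeffs = nn.Parameter(torch.randn(out_ch, in_ch, 4) * 0.1)
        self.register_buffer("basis", HAAR)

    def forward(self, x):
        # Kernels are linear combinations of Haar filters: (out, in, 2, 2).
        weight = torch.einsum("oik,khw->oihw", self.coeffs, self.basis)
        return F.conv2d(x, weight, stride=2)

layer = HaarConv2d(1, 8)
print(layer(torch.randn(2, 1, 28, 28)).shape)  # torch.Size([2, 8, 14, 14])
```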
arXiv Detail & Related papers (2023-02-09T16:11:58Z) - Neural network training under semidefinite constraints [0.0]
This paper is concerned with the training of neural networks (NNs) under semidefinite constraints.
Semidefinite constraints can be used to verify interesting properties for NNs.
In experiments, we demonstrate the superior efficiency of our training method over previous approaches.
arXiv Detail & Related papers (2022-01-03T13:10:49Z) - Provable Regret Bounds for Deep Online Learning and Control [77.77295247296041]
We show that any loss function can be used to optimize the parameters of a neural network such that it competes with the best net in hindsight.
As an application of these results in the online setting, we obtain provable regret bounds for online controllers.
arXiv Detail & Related papers (2021-10-15T02:13:48Z) - Constrained Feedforward Neural Network Training via Reachability Analysis [0.0]
It remains an open challenge to train a neural network to obey safety constraints.
This work proposes a constrained method to simultaneously train and verify a feedforward neural network with rectified linear unit (ReLU) nonlinearities.
arXiv Detail & Related papers (2021-07-16T04:03:01Z) - Provably Training Neural Network Classifiers under Fairness Constraints [70.64045590577318]
We show that overparametrized neural networks can meet the fairness constraints.
A key ingredient in building a fair neural network classifier is establishing a no-regret analysis for neural networks.
arXiv Detail & Related papers (2020-12-30T18:46:50Z)