Sample-Specific Output Constraints for Neural Networks
- URL: http://arxiv.org/abs/2003.10258v1
- Date: Mon, 23 Mar 2020 13:13:11 GMT
- Title: Sample-Specific Output Constraints for Neural Networks
- Authors: Mathis Brosowsky (1 and 2), Olaf Dünkel (1), Daniel Slieter (1),
Marius Zöllner (2) ((1) Dr. Ing. h.c. F. Porsche AG, (2) FZI Research
Center for Information Technology)
- Abstract summary: ConstraintNet is a neural network with the capability to constrain the output space in each forward pass via an additional input.
We focus on constraints in the form of convex polytopes and show the generalization to further classes of constraints.
We demonstrate the application to a follow object controller for vehicles as a safety-critical application.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks reach state-of-the-art performance in a variety of learning
tasks. However, a limited understanding of their decision-making process makes
them appear as black boxes. We address this and propose ConstraintNet, a neural
network with the capability to constrain the output space in each forward pass
via an additional input. The prediction of ConstraintNet is provably within the
specified domain. This enables ConstraintNet to explicitly exclude unintended or
even hazardous outputs, while the final prediction is still learned from
data. We focus on constraints in the form of convex polytopes and show the
generalization to further classes of constraints. ConstraintNet can be
constructed easily by modifying existing neural network architectures. We
highlight that ConstraintNet is end-to-end trainable with no overhead in the
forward and backward pass. For illustration purposes, we model ConstraintNet by
modifying a CNN and construct constraints for facial landmark prediction tasks.
Furthermore, we demonstrate the application to a follow object controller for
vehicles as a safety-critical application. We submitted an approach and system
for the generation of safety-critical outputs of an entity based on
ConstraintNet at the German Patent and Trademark Office with the official
registration mark DE10 2019 119 739.
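One common way to realize such a convex-polytope output constraint is to let the network predict weights for a convex combination of the polytope's vertices, so the output lies in the polytope by construction. The following is a minimal sketch of that idea, not the authors' implementation; the vertex representation and function names are illustrative:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax: non-negative weights that sum to 1.
    e = np.exp(z - z.max())
    return e / e.sum()

def constrain_to_polytope(logits, vertices):
    """Map unconstrained logits to a point inside the convex polytope
    spanned by `vertices` (shape: n_vertices x dim).

    Because the weights are non-negative and sum to 1, the output is a
    convex combination of the vertices and therefore guaranteed to lie
    in the polytope for any logits.
    """
    weights = softmax(logits)
    return weights @ vertices

# Example: constrain a 2-D prediction to a triangle.
triangle = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
logits = np.array([2.0, -1.0, 0.5])   # e.g. the raw head of a CNN
y = constrain_to_polytope(logits, triangle)
```

Since the mapping is differentiable, such a layer can sit on top of an existing architecture and be trained end-to-end; the polytope vertices can be supplied as the additional per-sample input.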
Related papers
- Hard-Constrained Neural Networks with Universal Approximation Guarantees [5.3663546125491735]
HardNet is a framework for constructing neural networks that inherently satisfy hard constraints without sacrificing model capacity.
We show that HardNet retains the universal approximation capabilities of neural networks.
arXiv Detail & Related papers (2024-10-14T17:59:24Z)
- A New Computationally Simple Approach for Implementing Neural Networks with Output Hard Constraints [5.482532589225552]
A new method of imposing hard convex constraints on the neural network output values is proposed.
The mapping is implemented by the additional neural network layer with constraints for output.
The proposed method is simply extended to the case when constraints are imposed not only on the output vectors, but also on joint constraints depending on inputs.
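As a generic illustration of such a constraint layer, interval (box) constraints, a simple special case of convex output constraints, can be enforced by squashing the raw outputs into the bounds. This is a sketch of the general idea, not the specific construction of any listed paper; all names are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def box_constrain(raw, lo, hi):
    """Squash unconstrained outputs `raw` elementwise into [lo, hi],
    so every prediction satisfies the bounds by construction."""
    return lo + (hi - lo) * sigmoid(raw)

raw = np.array([5.0, -3.0, 0.0])      # unconstrained network outputs
lo = np.array([0.0, -1.0, 2.0])       # per-dimension lower bounds
hi = np.array([1.0, 1.0, 4.0])        # per-dimension upper bounds
y = box_constrain(raw, lo, hi)
```

Because `lo` and `hi` enter the layer as ordinary inputs, they can depend on the sample, which mirrors the input-dependent joint constraints mentioned above.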
arXiv Detail & Related papers (2023-07-19T21:06:43Z)
- Neural Fields with Hard Constraints of Arbitrary Differential Order [61.49418682745144]
We develop a series of approaches for enforcing hard constraints on neural fields.
The constraints can be specified as a linear operator applied to the neural field and its derivatives.
Our approaches are demonstrated in a wide range of real-world applications.
arXiv Detail & Related papers (2023-06-15T08:33:52Z)
- DeepSaDe: Learning Neural Networks that Guarantee Domain Constraint Satisfaction [8.29487992932196]
We present an approach to train neural networks that enforces a wide variety of domain constraints and guarantees that every possible prediction satisfies them.
arXiv Detail & Related papers (2023-03-02T10:40:50Z)
- Constrained Empirical Risk Minimization: Theory and Practice [2.4934936799100034]
We present a framework that allows the exact enforcement of constraints on parameterized sets of functions such as Deep Neural Networks (DNNs).
We focus on constraints that are outside the scope of equivariant networks used in Geometric Deep Learning.
As a major example of the framework, we restrict filters of a Convolutional Neural Network (CNN) to be wavelets, and apply these wavelet networks to the task of contour prediction in the medical domain.
arXiv Detail & Related papers (2023-02-09T16:11:58Z)
- Toward Certified Robustness Against Real-World Distribution Shifts [65.66374339500025]
We train a generative model to learn perturbations from data and define specifications with respect to the output of the learned model.
A unique challenge arising from this setting is that existing verifiers cannot tightly approximate sigmoid activations.
We propose a general meta-algorithm for handling sigmoid activations which leverages classical notions of counter-example-guided abstraction refinement.
arXiv Detail & Related papers (2022-06-08T04:09:13Z)
- NeuroLogic A*esque Decoding: Constrained Text Generation with Lookahead Heuristics [73.96837492216204]
We propose NeuroLogic A*esque, a decoding algorithm that incorporates estimates of future cost.
We develop lookahead heuristics that are efficient for large-scale language models.
Our approach outperforms competitive baselines on five generation tasks, and achieves new state-of-the-art performance on table-to-text generation, constrained machine translation, and keyword-constrained generation.
arXiv Detail & Related papers (2021-12-16T09:22:54Z)
- Constrained Feedforward Neural Network Training via Reachability Analysis [0.0]
It remains an open challenge to train a neural network to obey safety constraints.
This work proposes a constrained method to simultaneously train and verify a feedforward neural network with rectified linear unit (ReLU) nonlinearities.
arXiv Detail & Related papers (2021-07-16T04:03:01Z)
- Certification of Iterative Predictions in Bayesian Neural Networks [79.15007746660211]
We compute lower bounds for the probability that trajectories of the BNN model reach a given set of states while avoiding a set of unsafe states.
We use the lower bounds in the context of control and reinforcement learning to provide safety certification for given control policies.
arXiv Detail & Related papers (2021-05-21T05:23:57Z)
- Chance-Constrained Control with Lexicographic Deep Reinforcement Learning [77.34726150561087]
This paper proposes a lexicographic Deep Reinforcement Learning (DeepRL)-based approach to chance-constrained Markov Decision Processes.
A lexicographic version of the well-known DeepRL algorithm DQN is also proposed and validated via simulations.
arXiv Detail & Related papers (2020-10-19T13:09:14Z)
- An Integer Linear Programming Framework for Mining Constraints from Data [81.60135973848125]
We present a general framework for mining constraints from data.
In particular, we consider the inference in structured output prediction as an integer linear programming (ILP) problem.
We show that our approach can learn to solve 9x9 Sudoku puzzles and minimal spanning tree problems from examples without providing the underlying rules.
arXiv Detail & Related papers (2020-06-18T20:09:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.