A New Computationally Simple Approach for Implementing Neural Networks with Output Hard Constraints
- URL: http://arxiv.org/abs/2307.10459v1
- Date: Wed, 19 Jul 2023 21:06:43 GMT
- Title: A New Computationally Simple Approach for Implementing Neural Networks with Output Hard Constraints
- Authors: Andrei V. Konstantinov and Lev V. Utkin
- Abstract summary: A new method of imposing hard convex constraints on the neural network output values is proposed.
The mapping is implemented by an additional neural network layer with constrained outputs.
The proposed method extends naturally to the case where constraints are imposed not only on the output vectors but also as joint constraints depending on the inputs.
- Score: 5.482532589225552
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A new computationally simple method of imposing hard convex
constraints on the neural network output values is proposed. The key idea
behind the method is to map a vector of hidden parameters of the network to a
point that is guaranteed to lie inside the feasible set defined by a set of
constraints. The mapping is implemented by an additional neural network layer
with constrained outputs. The proposed method extends naturally to the case
where constraints are imposed not only on the output vectors but also as
joint constraints depending on the inputs. The projection approach to
imposing constraints on outputs can also be implemented within the framework
of the proposed method. It is shown how to incorporate different types of
constraints into the method, including linear and quadratic constraints,
equality constraints, dynamic constraints, and constraints in the form of
boundaries. An important feature of the method is its computational
simplicity: the complexity of the forward pass of the proposed layer is
O(n*m) for linear constraints and O(n^2*m) for quadratic constraints, where n
is the number of variables and m is the number of constraints. Numerical
experiments illustrate the method by solving optimization and classification
problems. The code implementing the method is publicly available.
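The abstract leaves the mapping itself implicit, so the sketch below shows one plausible reading for the linear case Ax <= b: interpret the hidden parameters as a ray direction d and a step logit r, shoot the ray from a known strictly feasible point x0, and scale the step so the output cannot leave the polytope. This is a hedged illustration under those assumptions, not the authors' published code; the class name, the (d, r) split, and the interior point x0 are all assumptions.

```python
# Illustrative sketch only: a hard-constraint output layer for linear
# constraints A x <= b. The hidden vector is split into a direction d and a
# step logit r; the output lies on the ray from a known strictly feasible
# point x0, scaled so it cannot leave the polytope. The names and the (d, r)
# parameterization are assumptions, not the authors' published interface.
import torch
import torch.nn as nn


class LinearConstraintLayer(nn.Module):
    """Maps hidden parameters (d, r) to a point x with A x <= b.

    Requires a strictly feasible point x0 (A x0 < b). The forward pass costs
    O(n * m): one matrix-vector product with the (m, n) matrix A, matching
    the complexity the abstract reports for linear constraints.
    """

    def __init__(self, A: torch.Tensor, b: torch.Tensor, x0: torch.Tensor):
        super().__init__()
        self.register_buffer("A", A)    # (m, n) constraint matrix
        self.register_buffer("b", b)    # (m,) right-hand side
        self.register_buffer("x0", x0)  # (n,) strictly feasible point

    def forward(self, d: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
        # d: (batch, n) ray direction, r: (batch, 1) unconstrained step logit
        Ad = d @ self.A.T                        # (batch, m)
        slack = self.b - self.x0 @ self.A.T      # (m,), positive by assumption
        # Largest step alpha with A (x0 + alpha * d) <= b: only rows where
        # (A d)_i > 0 bound the step from above.
        ratios = torch.where(Ad > 0, slack / Ad,
                             torch.full_like(Ad, float("inf")))
        # Clamp guards against rays along which the polytope is unbounded.
        alpha_max = ratios.min(dim=1, keepdim=True).values.clamp(max=1e6)
        # sigmoid(r) in (0, 1) keeps the output strictly inside the polytope.
        return self.x0 + torch.sigmoid(r) * alpha_max * d
```

A quadratic constraint could be handled along the same lines by solving a scalar quadratic in the step length along the ray, which would be consistent with the O(n^2*m) forward-pass cost the abstract reports for quadratic constraints.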
Related papers
- Neural Fields with Hard Constraints of Arbitrary Differential Order [61.49418682745144]
We develop a series of approaches for enforcing hard constraints on neural fields.
The constraints can be specified as a linear operator applied to the neural field and its derivatives.
Our approaches are demonstrated in a wide range of real-world applications.
arXiv Detail & Related papers (2023-06-15T08:33:52Z)
- Infeasible Deterministic, Stochastic, and Variance-Reduction Algorithms for Optimization under Orthogonality Constraints [9.301728976515255]
This article provides new practical and theoretical developments for the landing algorithm.
First, the method is extended to the Stiefel manifold.
We also consider variance reduction algorithms when the cost function is an average of many functions.
arXiv Detail & Related papers (2023-03-29T07:36:54Z)
- Symmetric Tensor Networks for Generative Modeling and Constrained Combinatorial Optimization [72.41480594026815]
Constrained optimization problems abound in industry, from portfolio optimization to logistics.
One of the major roadblocks in solving these problems is the presence of non-trivial hard constraints which limit the valid search space.
In this work, we encode arbitrary integer-valued equality constraints of the form Ax=b directly into U(1) symmetric tensor networks (TNs) and leverage their applicability as quantum-inspired generative models.
arXiv Detail & Related papers (2022-11-16T18:59:54Z)
- Deep Learning Approximation of Diffeomorphisms via Linear-Control Systems [91.3755431537592]
We consider a control system of the form $\dot{x} = \sum_{i=1}^{l} F_i(x)\, u_i$, with linear dependence in the controls.
We use the corresponding flow to approximate the action of a diffeomorphism on a compact ensemble of points.
arXiv Detail & Related papers (2021-10-24T08:57:46Z)
- Constrained Feedforward Neural Network Training via Reachability Analysis [0.0]
It remains an open challenge to train a neural network to obey safety constraints.
This work proposes a constrained method to simultaneously train and verify a feedforward neural network with rectified linear unit (ReLU) nonlinearities.
arXiv Detail & Related papers (2021-07-16T04:03:01Z)
- Efficient methods for Gaussian Markov random fields under sparse linear constraints [2.741266294612776]
Methods for inference and simulation of linearly constrained Gaussian Markov Random Fields (GMRF) are computationally prohibitive when the number of constraints is large.
We propose a new class of methods to overcome these challenges in the common case of sparse constraints.
arXiv Detail & Related papers (2021-06-03T09:31:12Z)
- Deep Magnification-Flexible Upsampling over 3D Point Clouds [103.09504572409449]
We propose a novel end-to-end learning-based framework to generate dense point clouds.
We first formulate the problem explicitly, which boils down to determining the weights and high-order approximation errors.
Then, we design a lightweight neural network to adaptively learn unified and sorted weights as well as the high-order refinements.
arXiv Detail & Related papers (2020-11-25T14:00:18Z)
- Conditional gradient methods for stochastically constrained convex minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
arXiv Detail & Related papers (2020-07-07T21:26:35Z)
- An Integer Linear Programming Framework for Mining Constraints from Data [81.60135973848125]
We present a general framework for mining constraints from data.
In particular, we consider the inference in structured output prediction as an integer linear programming (ILP) problem.
We show that our approach can learn to solve 9x9 Sudoku puzzles and minimal spanning tree problems from examples without providing the underlying rules.
arXiv Detail & Related papers (2020-06-18T20:09:53Z)
- Sample-Specific Output Constraints for Neural Networks [0.0]
ConstraintNet is a neural network with the capability to constrain the output space in each forward pass via an additional input.
We focus on constraints in form of convex polytopes and show the generalization to further classes of constraints.
We demonstrate the application to a follow object controller for vehicles as a safety-critical application.
arXiv Detail & Related papers (2020-03-23T13:13:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.