Local Propagation in Constraint-based Neural Networks
- URL: http://arxiv.org/abs/2002.07720v2
- Date: Fri, 17 Apr 2020 10:20:48 GMT
- Title: Local Propagation in Constraint-based Neural Networks
- Authors: Giuseppe Marra, Matteo Tiezzi, Stefano Melacci, Alessandro Betti,
Marco Maggini, Marco Gori
- Abstract summary: We study a constraint-based representation of neural network architectures.
We investigate a simple optimization procedure that is well suited to fulfil the so-called architectural constraints.
- Score: 77.37829055999238
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we study a constraint-based representation of neural network
architectures. We cast the learning problem in the Lagrangian framework and we
investigate a simple optimization procedure that is well suited to fulfil the
so-called architectural constraints while learning from the available supervision.
The computational structure of the proposed Local Propagation (LP) algorithm is
based on the search for saddle points in the adjoint space composed of weights,
neural outputs, and Lagrange multipliers. All the updates of the model
variables are locally performed, so that LP is fully parallelizable over the
neural units, circumventing the classic problem of gradient vanishing in deep
networks. The implementation of popular neural models is described in the
context of LP, together with those conditions that trace a natural connection
with Backpropagation. We also investigate the setting in which we tolerate
bounded violations of the architectural constraints, and we provide
experimental evidence that LP is a feasible approach to train shallow and deep
networks, opening the way to further investigations of more complex
architectures that can be easily described by constraints.
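
To make the saddle-point structure of the abstract concrete, the following is a minimal, illustrative sketch reconstructed from the abstract alone; it is not the authors' implementation. It assumes a two-layer network with tanh activations, a squared output loss, and arbitrary step sizes, with the architectural constraints x_l = sigma(W_l x_{l-1}) attached to the loss through Lagrange multipliers.

```python
# Minimal Local Propagation (LP) style update, reconstructed from the abstract:
# a saddle point of the Lagrangian is sought by gradient descent on the weights
# and neural outputs and gradient ascent on the Lagrange multipliers.
# Network size, activation, and learning rates are illustrative assumptions.
import numpy as np

def sigma(a):
    return np.tanh(a)

def sigma_prime(a):
    return 1.0 - np.tanh(a) ** 2

rng = np.random.default_rng(0)
x0, y = rng.normal(size=4), np.array([1.0])        # input and target
W1, W2 = rng.normal(size=(3, 4)), rng.normal(size=(1, 3))
x1, x2 = np.zeros(3), np.zeros(1)                  # neural outputs (free variables)
lam1, lam2 = np.zeros(3), np.zeros(1)              # Lagrange multipliers
eta, rho = 0.05, 0.05                              # descent / ascent step sizes

# Lagrangian: L = 0.5*||x2 - y||^2 + lam1.(x1 - sigma(W1 x0)) + lam2.(x2 - sigma(W2 x1))
for _ in range(500):
    a1, a2 = W1 @ x0, W2 @ x1
    g1, g2 = x1 - sigma(a1), x2 - sigma(a2)        # constraint violations

    # Descent on weights and neural outputs: each expression only involves
    # the variables of one layer and its neighbours (local updates).
    dW1 = -np.outer(lam1 * sigma_prime(a1), x0)
    dW2 = -np.outer(lam2 * sigma_prime(a2), x1)
    dx1 = lam1 - W2.T @ (lam2 * sigma_prime(a2))
    dx2 = (x2 - y) + lam2                          # supervision acts only at the output

    W1 -= eta * dW1; W2 -= eta * dW2
    x1 -= eta * dx1; x2 -= eta * dx2

    # Ascent on the multipliers pushes the constraint violations toward zero.
    lam1 += rho * g1; lam2 += rho * g2

print("loss:", float(0.5 * np.sum((x2 - y) ** 2)),
      "constraint norm:", float(np.linalg.norm(np.concatenate([g1, g2]))))
```

Because no gradient is propagated through the whole stack, the layer-wise updates can in principle be carried out in parallel over the neural units, which is the property the abstract invokes when it mentions circumventing vanishing gradients.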
Related papers
- Deep Neural Network for Constraint Acquisition through Tailored Loss Function [0.0]
The significance of learning constraints from data is underscored by its potential applications in real-world problem-solving.
This work introduces a novel approach grounded in a Deep Neural Network (DNN) based on Symbolic Regression.
arXiv Detail & Related papers (2024-03-04T13:47:33Z)
- Robust Stochastically-Descending Unrolled Networks [85.6993263983062]
Deep unrolling is an emerging learning-to-optimize method that unrolls a truncated iterative algorithm in the layers of a trainable neural network.
We show that convergence guarantees and generalizability of the unrolled networks are still open theoretical problems.
We numerically assess unrolled architectures trained under the proposed constraints in two different applications.
arXiv Detail & Related papers (2023-12-25T18:51:23Z)
- From NeurODEs to AutoencODEs: a mean-field control framework for width-varying Neural Networks [68.8204255655161]
We propose a new type of continuous-time control system, called AutoencODE, based on a controlled field that drives dynamics.
We show that many architectures can be recovered in regions where the loss function is locally convex.
arXiv Detail & Related papers (2023-07-05T13:26:17Z)
- Neural Fields with Hard Constraints of Arbitrary Differential Order [61.49418682745144]
We develop a series of approaches for enforcing hard constraints on neural fields.
The constraints can be specified as a linear operator applied to the neural field and its derivatives.
Our approaches are demonstrated in a wide range of real-world applications.
arXiv Detail & Related papers (2023-06-15T08:33:52Z)
- LSA-PINN: Linear Boundary Connectivity Loss for Solving PDEs on Complex Geometry [15.583172926806148]
We present a novel loss formulation for efficient learning of complex dynamics from governing physics using physics-informed neural networks (PINNs).
In our experiments, existing versions of PINNs are seen to learn poorly in many problems, especially for complex geometries.
We propose a new Boundary Connectivity (BCXN) loss function which provides linear local structure approximation (LSA) to the gradient behaviors at the boundary for PINN.
arXiv Detail & Related papers (2023-02-03T03:26:08Z)
- Developing Constrained Neural Units Over Time [81.19349325749037]
This paper focuses on an alternative way of defining Neural Networks that is different from the majority of existing approaches.
The structure of the neural architecture is defined by means of a special class of constraints that are also extended to the interaction with data.
The proposed theory is cast into the time domain, in which data are presented to the network in an ordered manner.
arXiv Detail & Related papers (2020-09-01T09:07:25Z)
- An Integer Linear Programming Framework for Mining Constraints from Data [81.60135973848125]
We present a general framework for mining constraints from data.
In particular, we consider the inference in structured output prediction as an integer linear programming (ILP) problem.
We show that our approach can learn to solve 9x9 Sudoku puzzles and minimal spanning tree problems from examples without providing the underlying rules.
arXiv Detail & Related papers (2020-06-18T20:09:53Z)
- Deep Constraint-based Propagation in Graph Neural Networks [15.27048776159285]
We propose a novel approach to learning in Graph Neural Networks (GNNs) based on constrained optimization in the Lagrangian framework.
Our computational structure searches for saddle points of the Lagrangian in the adjoint space composed of weights, node state variables, and Lagrange multipliers.
An experimental analysis shows that the proposed approach compares favourably with popular models on several benchmarks.
arXiv Detail & Related papers (2020-05-05T16:50:59Z)