Stable Neural Flows
- URL: http://arxiv.org/abs/2003.08063v1
- Date: Wed, 18 Mar 2020 06:27:21 GMT
- Title: Stable Neural Flows
- Authors: Stefano Massaroli, Michael Poli, Michelangelo Bin, Jinkyoo Park,
Atsushi Yamashita, Hajime Asama
- Abstract summary: We introduce a provably stable variant of neural ordinary differential equations (neural ODEs) whose trajectories evolve on an energy functional parametrised by a neural network.
The learning procedure is cast as an optimal control problem, and an approximate solution is proposed based on adjoint sensitivity analysis.
- Score: 15.318500611972441
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a provably stable variant of neural ordinary differential
equations (neural ODEs) whose trajectories evolve on an energy functional
parametrised by a neural network. Stable neural flows provide an implicit
guarantee on asymptotic stability of the depth-flows, leading to robustness
against input perturbations and low computational burden for the numerical
solver. The learning procedure is cast as an optimal control problem, and an
approximate solution is proposed based on adjoint sensitivity analysis. We
further introduce novel regularizers designed to ease the optimization process
and speed up convergence. The proposed model class is evaluated on non-linear
classification and function approximation tasks.
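The core construction can be sketched compactly: if the depth-flow follows the negative gradient of a scalar energy network, the energy decreases along trajectories and acts as a Lyapunov function, which is where the stability guarantee comes from. The snippet below is a minimal PyTorch sketch under that assumption; the module names, the fixed-step Euler integrator, and the hyperparameters are illustrative and are not the authors' implementation (which, per the abstract, trains via adjoint sensitivity analysis rather than by backpropagating through the solver).

```python
import torch
import torch.nn as nn

class NeuralEnergy(nn.Module):
    """Scalar energy functional E_theta: R^d -> R, parametrised by an MLP."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).sum()  # sum over the batch -> scalar for autograd


def flow(energy: NeuralEnergy, x0: torch.Tensor, steps: int = 20, dt: float = 0.05):
    """Integrate dx/dt = -grad_x E_theta(x) with explicit Euler steps.

    Along trajectories dE/dt = -||grad E||^2 <= 0, so E decreases
    monotonically and serves as a Lyapunov function for the depth-flow.
    """
    x = x0
    for _ in range(steps):
        x = x.detach().requires_grad_(True)
        g = torch.autograd.grad(energy(x), x)[0]
        x = x - dt * g
    return x


# Usage: propagate a batch of 2-D inputs through the stable depth-flow.
energy = NeuralEnergy(dim=2)
z = flow(energy, torch.randn(16, 2))
```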
Related papers
- Score-based Neural Ordinary Differential Equations for Computing Mean Field Control Problems [13.285775352653546]
This paper proposes a system of neural differential equations representing first- and second-order score functions along trajectories based on deep neural networks.
We reformulate the mean field control (MFC) problem with individual noises into an unconstrained optimization problem framed by the proposed neural ODE system.
arXiv Detail & Related papers (2024-09-24T21:45:55Z) - On the Convergence Analysis of Over-Parameterized Variational Autoencoders: A Neural Tangent Kernel Perspective [7.580900499231056]
Variational Auto-Encoders (VAEs) have emerged as powerful probabilistic models for generative tasks.
This paper provides a mathematical proof of VAE convergence under mild assumptions.
We also establish a novel connection between the optimization problem faced by over-parameterized SNNs and the Kernel Ridge Regression (KRR) problem.
arXiv Detail & Related papers (2024-09-09T06:10:31Z) - A Mean-Field Analysis of Neural Stochastic Gradient Descent-Ascent for Functional Minimax Optimization [90.87444114491116]
This paper studies minimax optimization problems defined over infinite-dimensional function classes of overparameterized two-layer neural networks.
We address (i) the convergence of the gradient descent-ascent algorithm and (ii) the representation learning of the neural networks.
Results show that the feature representation induced by the neural networks is allowed to deviate from the initial one by a magnitude of $O(\alpha^{-1})$, measured in terms of the Wasserstein distance.
arXiv Detail & Related papers (2024-04-18T16:46:08Z) - The limitation of neural nets for approximation and optimization [0.0]
We are interested in assessing the use of neural networks as surrogate models to approximate and minimize objective functions in optimization problems.
Our study begins by determining the best activation function for approximating the objective functions of popular nonlinear optimization test problems.
arXiv Detail & Related papers (2023-11-21T00:21:15Z) - An Optimization-based Deep Equilibrium Model for Hyperspectral Image
Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
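For readers unfamiliar with the Deep Equilibrium framing mentioned above, the layer output is defined as a fixed point z* = f(z*, x) of a learned map rather than the result of unrolling a fixed number of iterations. The sketch below shows the simplest possible fixed-point solver under that reading; the paper's actual solver, learnable regularizer, and convergence guarantees are not reproduced here.

```python
import torch

def deq_solve(f, x, z0, max_iter=100, tol=1e-4):
    """Naive fixed-point iteration z_{k+1} = f(z_k, x).

    In a Deep Equilibrium model the equilibrium z* = f(z*, x) plays the
    role of the output of an 'infinitely deep' weight-tied network.
    """
    z = z0
    for _ in range(max_iter):
        z_next = f(z, x)
        if torch.norm(z_next - z) < tol * torch.norm(z):
            return z_next
        z = z_next
    return z

# Example: f(z, x) = 0.5 * z + x has the closed-form fixed point z* = 2x.
x = torch.randn(4, 3)
z_star = deq_solve(lambda z, xx: 0.5 * z + xx, x, torch.zeros_like(x))
```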
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
arXiv Detail & Related papers (2023-04-17T14:23:43Z) - Implicit Stochastic Gradient Descent for Training Physics-informed
Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been shown to be effective in solving forward and inverse differential equation problems.
However, PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
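For reference, the standard implicit (proximal) SGD update, which the abstract presumably refers to, evaluates the gradient at the new iterate rather than the current one, which is the source of its added stability on stiff losses; the paper's exact variant may differ:

```latex
% Implicit SGD update (standard form; equivalent proximal-step view):
\theta_{k+1} = \theta_k - \eta \,\nabla L(\theta_{k+1})
\quad\Longleftrightarrow\quad
\theta_{k+1} = \arg\min_{\theta} \Big( L(\theta) + \tfrac{1}{2\eta}\,\|\theta - \theta_k\|^2 \Big).
```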
arXiv Detail & Related papers (2023-03-03T08:17:47Z) - q-RBFNN: A Quantum Calculus-based RBF Neural Network [31.14412266444568]
A gradient descent based learning approach for the radial basis function neural networks (RBFNN) is proposed.
The proposed method is based on the q-gradient, also known as the Jackson derivative.
The proposed $q$-RBFNN is analyzed for its convergence performance in the context of the least squares algorithm.
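For context, the Jackson (q-)derivative underlying the q-gradient is the standard quantum-calculus difference quotient, which recovers the ordinary derivative in the limit q → 1:

```latex
% Jackson derivative (standard definition from quantum calculus):
D_q f(x) = \frac{f(qx) - f(x)}{(q - 1)\,x}, \qquad x \neq 0,\; q \neq 1,
\qquad \lim_{q \to 1} D_q f(x) = f'(x).
```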
arXiv Detail & Related papers (2021-06-02T08:27:12Z) - Meta-Solver for Neural Ordinary Differential Equations [77.8918415523446]
We investigate how variability in the solver space can improve the performance of neural ODEs.
We show that the right choice of solver parameterization can significantly affect the robustness of neural ODE models to adversarial attacks.
arXiv Detail & Related papers (2021-03-15T17:26:34Z) - A Deterministic Approximation to Neural SDEs [38.23826389188657]
We show that obtaining well-calibrated uncertainty estimates from neural SDEs (NSDEs) is computationally prohibitive.
We develop a computationally affordable deterministic scheme which accurately approximates the transition kernel.
Our method also improves prediction accuracy thanks to the numerical stability of deterministic training.
arXiv Detail & Related papers (2020-06-16T08:00:26Z) - Neural Control Variates [71.42768823631918]
We show that a set of neural networks can address the challenge of finding a good approximation of the integrand.
We derive a theoretically optimal, variance-minimizing loss function, and propose an alternative, composite loss for stable online training in practice.
Specifically, we show that the learned light-field approximation is of sufficient quality for high-order bounces, allowing us to omit the error correction and thereby dramatically reduce the noise at the cost of negligible visible bias.
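For context, a control variate g with a known (or analytically integrable) integral G reduces the variance of a Monte Carlo estimate of F = ∫ f(x) dx by integrating only the residual f − g numerically, so the quality of the learned approximation g directly determines the variance reduction. This is the textbook importance-sampled estimator, not the paper's full light-transport formulation:

```latex
% Standard control-variate estimator with samples X_i ~ p:
F = \int f(x)\,\mathrm{d}x
  \approx G + \frac{1}{N} \sum_{i=1}^{N} \frac{f(X_i) - g(X_i)}{p(X_i)},
\qquad G = \int g(x)\,\mathrm{d}x .
```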
arXiv Detail & Related papers (2020-06-02T11:17:55Z)