Go with the Flow: Adaptive Control for Neural ODEs
- URL: http://arxiv.org/abs/2006.09545v3
- Date: Thu, 15 Apr 2021 10:03:12 GMT
- Title: Go with the Flow: Adaptive Control for Neural ODEs
- Authors: Mathieu Chalvidal, Matthew Ricci, Rufin VanRullen, Thomas Serre
- Abstract summary: We describe a new module called neurally controlled ODE (N-CODE) designed to improve the expressivity of NODEs.
The parameters of N-CODE modules are dynamic variables governed by a trainable map from the initial or current activation state.
A single module is sufficient for learning a distribution on non-autonomous flows that adaptively drive neural representations.
- Score: 10.265713480189484
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite their elegant formulation and lightweight memory cost, neural
ordinary differential equations (NODEs) suffer from known representational
limitations. In particular, the single flow learned by NODEs cannot express all
homeomorphisms from a given data space to itself, and their static weight
parameterization restricts the type of functions they can learn compared to
discrete architectures with layer-dependent weights. Here, we describe a new
module called neurally controlled ODE (N-CODE) designed to improve the
expressivity of NODEs. The parameters of N-CODE modules are dynamic variables
governed by a trainable map from initial or current activation state, resulting
in forms of open-loop and closed-loop control, respectively. A single module is
sufficient for learning a distribution on non-autonomous flows that adaptively
drive neural representations. We provide theoretical and empirical evidence
that N-CODE circumvents limitations of previous NODE models and show how
increased model expressivity manifests in several supervised and unsupervised
learning problems. These favorable empirical results indicate the potential of
using data- and activity-dependent plasticity in neural networks across
numerous domains.
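To make the control structure concrete, here is a minimal runnable sketch of the closed-loop idea in PyTorch. The linear controller map, the tanh vector field whose weight matrix is read out of the control variable theta, and the explicit Euler integration are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class NCODESketch(nn.Module):
    """Couples an activation state z with dynamic weights theta.

    Closed-loop control: dtheta/dt = g(z, theta), so the flow f(z; theta(t))
    is non-autonomous and adapts to the current activation state. An open-loop
    variant would instead map only the initial state z0 to a weight trajectory.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.dim = dim
        n_params = dim * dim
        # Trainable controller map g (an arbitrary small choice).
        self.controller = nn.Linear(dim + n_params, n_params)

    def f(self, z, theta):
        # Vector field whose weight matrix is read out of theta (a
        # hypothetical parameterization chosen for brevity).
        W = theta.view(self.dim, self.dim)
        return torch.tanh(z @ W.T)

    def forward(self, z0, theta0, n_steps=20, dt=0.05):
        z, theta = z0, theta0
        for _ in range(n_steps):  # explicit Euler on the coupled system
            dz = self.f(z, theta)
            dtheta = self.controller(torch.cat([z, theta]))
            z = z + dt * dz
            theta = theta + dt * dtheta
        return z

module = NCODESketch(dim=4)
z_T = module(torch.randn(4), torch.zeros(16))  # theta0 could itself be learned from z0
```

A full N-CODE system would train the controller end-to-end through the solver; the sketch only shows how state and weights evolve jointly under one flow.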
Related papers
- Projected Neural Differential Equations for Learning Constrained Dynamics [3.570367665112327]
We introduce a new method for constraining neural differential equations based on projecting the learned vector field onto the tangent space of the constraint manifold (a toy projection sketch appears after this list).
The resulting projected neural differential equations (PNDEs) outperform existing methods while requiring fewer hyperparameters.
The proposed approach demonstrates significant potential for enhancing the modeling of constrained dynamical systems.
arXiv Detail & Related papers (2024-10-31T06:32:43Z)
- On Tuning Neural ODE for Stability, Consistency and Faster Convergence [0.0]
We propose a first-order Nesterov's accelerated gradient (NAG) based ODE solver that is provably tuned with respect to CCS conditions.
We empirically demonstrate the efficacy of our approach: training is faster while achieving performance better than or comparable to standard neural ODEs.
arXiv Detail & Related papers (2023-12-04T06:18:10Z)
- Learning Neural Constitutive Laws From Motion Observations for Generalizable PDE Dynamics [97.38308257547186]
Many NN approaches learn an end-to-end model that implicitly models both the governing PDE and material models.
We argue that the governing PDEs are often well-known and should be explicitly enforced rather than learned.
We introduce a new framework termed "Neural Constitutive Laws" (NCLaw) which utilizes a network architecture that strictly guarantees standard priors.
arXiv Detail & Related papers (2023-04-27T17:42:24Z)
- Generalized Neural Closure Models with Interpretability [28.269731698116257]
We develop a novel and versatile methodology of unified neural partial delay differential equations.
We augment existing/low-fidelity dynamical models directly in their partial differential equation (PDE) forms with both Markovian and non-Markovian neural network (NN) closure parameterizations.
We demonstrate the new generalized neural closure models (gnCMs) framework using four sets of experiments based on advecting nonlinear waves, shocks, and ocean acidification models.
arXiv Detail & Related papers (2023-01-15T21:57:43Z)
- Neural Laplace: Learning diverse classes of differential equations in the Laplace domain [86.52703093858631]
We propose a unified framework for learning diverse classes of differential equations (DEs) including all the aforementioned ones.
Instead of modelling the dynamics in the time domain, we model them in the Laplace domain, where history-dependencies and discontinuities in time can be represented as summations of complex exponentials.
In the experiments, Neural Laplace shows superior performance in modelling and extrapolating the trajectories of diverse classes of DEs.
arXiv Detail & Related papers (2022-06-10T02:14:59Z)
- Neural Operator with Regularity Structure for Modeling Dynamics Driven by SPDEs [70.51212431290611]
Stochastic partial differential equations (SPDEs) are significant tools for modeling dynamics in many areas including atmospheric sciences and physics.
We propose the Neural Operator with Regularity Structure (NORS) which incorporates the feature vectors for modeling dynamics driven by SPDEs.
We conduct experiments on various SPDEs including the dynamic Phi^4_1 model and the 2d Navier-Stokes equation.
arXiv Detail & Related papers (2022-04-13T08:53:41Z)
- EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks, called EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressivity afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z)
- Neural Flows: Efficient Alternative to Neural ODEs [8.01886971335823]
We propose an alternative by directly modeling the solution curves, i.e. the flow of an ODE, with a neural network (see the sketch after this list).
This immediately eliminates the need for expensive numerical solvers while maintaining the modeling capability of neural ODEs.
arXiv Detail & Related papers (2021-10-25T15:24:45Z)
- Accelerating Neural ODEs Using Model Order Reduction [0.0]
We show that mathematical model order reduction methods can be used for compressing and accelerating Neural ODEs (a minimal projection sketch follows this list).
We implement our novel compression method by developing Neural ODEs that integrate the necessary subspace-projection operations as layers of the neural network.
arXiv Detail & Related papers (2021-05-28T19:27:09Z)
- Neural ODE Processes [64.10282200111983]
We introduce Neural ODE Processes (NDPs), a new class of processes determined by a distribution over Neural ODEs.
We show that our model can successfully capture the dynamics of low-dimensional systems from just a few data-points.
arXiv Detail & Related papers (2021-03-23T09:32:06Z)
- Stochasticity in Neural ODEs: An Empirical Study [68.8204255655161]
Regularization of neural networks (e.g. dropout) is a widespread technique in deep learning that allows for better generalization.
We show that data augmentation during training improves the performance of both deterministic and stochastic versions of the same model.
However, the improvements obtained by data augmentation completely eliminate the empirical regularization gains, making the performance gap between neural ODEs and neural SDEs negligible.
arXiv Detail & Related papers (2020-02-22T22:12:56Z)
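The tangent-space projection behind PNDEs (first related paper above) can be illustrated with a toy scalar constraint. The unit-sphere constraint, the placeholder "learned" field f, and the Euler loop are assumptions for illustration, not the paper's setup.

```python
import numpy as np

def project_to_tangent(v, grad_g):
    """Remove the component of v normal to the constraint surface g(x) = 0."""
    n = grad_g / np.linalg.norm(grad_g)
    return v - np.dot(n, v) * n

def constrained_field(x, f):
    # Keep trajectories on the unit sphere: g(x) = |x|^2 - 1, grad g = 2x.
    return project_to_tangent(f(x), 2.0 * x)

f = lambda x: np.array([-x[1], x[0], 0.1])   # placeholder "learned" field
x = np.array([1.0, 0.0, 0.0])
dt = 0.01
for _ in range(1000):                         # Euler steps stay near the sphere
    x = x + dt * constrained_field(x, f)
print(np.linalg.norm(x))                      # ~1.0 up to integration error
```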
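For the Neural Flows entry, here is a minimal sketch of modeling the solution curve directly. The multiplicative-in-t form is one simple way to enforce the initial condition F(x0, 0) = x0 by construction; it is not the paper's actual architecture, which additionally has to address invertibility in x0.

```python
import torch
import torch.nn as nn

class SolutionCurve(nn.Module):
    """Learns F(x0, t), the state of the ODE at time t, without a solver."""

    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.Tanh(), nn.Linear(hidden, dim)
        )

    def forward(self, x0, t):
        # F(x0, t) = x0 + t * net([x0, t]); at t = 0 this returns x0 exactly,
        # and evaluating any time t requires no numerical integration.
        tt = t.expand(x0.shape[0], 1)
        return x0 + tt * self.net(torch.cat([x0, tt], dim=-1))

flow = SolutionCurve(dim=2)
x0 = torch.randn(8, 2)
xt = flow(x0, torch.tensor([[0.5]]))  # batch of states at t = 0.5, solver-free
```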
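And for the model-order-reduction entry, a sketch of projecting a neural ODE's hidden dynamics onto a low-dimensional subspace in the POD style. The SVD-based basis from trajectory snapshots and the placeholder vector field are illustrative assumptions, not the paper's exact layer construction.

```python
import torch

def pod_basis(snapshots: torch.Tensor, k: int) -> torch.Tensor:
    """snapshots: (n_samples, dim) hidden states; returns (dim, k) orthonormal basis."""
    _, _, Vh = torch.linalg.svd(snapshots, full_matrices=False)
    return Vh[:k].T

def reduced_field(f, V):
    # Full dynamics dh/dt = f(h) become dz/dt = V^T f(V z) with z = V^T h.
    return lambda z: V.T @ f(V @ z)

V = pod_basis(torch.randn(100, 32), k=4)  # placeholder trajectory snapshots
f = lambda h: torch.tanh(h)               # placeholder learned vector field
fz = reduced_field(f, V)
z_dot = fz(torch.randn(4))                # dynamics integrated in 4-D, not 32-D
```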
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.