Accelerating Neural ODEs Using Model Order Reduction
- URL: http://arxiv.org/abs/2105.14070v1
- Date: Fri, 28 May 2021 19:27:09 GMT
- Title: Accelerating Neural ODEs Using Model Order Reduction
- Authors: Mikko Lehtimäki, Lassi Paunonen, Marja-Leena Linne
- Abstract summary: We show that mathematical model order reduction methods can be used for compressing and accelerating Neural ODEs.
We implement our novel compression method by developing Neural ODEs that integrate the necessary subspace-projection and interpolation operations as layers of the neural network.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Embedding nonlinear dynamical systems into artificial neural networks is a
powerful new formalism for machine learning. By parameterizing ordinary
differential equations (ODEs) as neural network layers, these Neural ODEs are
memory-efficient to train, process time-series naturally and incorporate
knowledge of physical systems into deep learning models. However, the practical
applications of Neural ODEs are limited due to long inference times, because
the outputs of the embedded ODE layers are computed numerically with
differential equation solvers that can be computationally demanding. Here we
show that mathematical model order reduction methods can be used for
compressing and accelerating Neural ODEs by accurately simulating the
continuous nonlinear dynamics in low-dimensional subspaces. We implement our
novel compression method by developing Neural ODEs that integrate the necessary
subspace-projection and interpolation operations as layers of the neural
network. We validate our model reduction approach by comparing it to two
established acceleration methods from the literature in two classification
tasks. In compressing convolutional and recurrent Neural ODE architectures, we
achieve the best balance between speed and accuracy when compared to the other
two acceleration methods. Based on our results, our integration of model order
reduction with Neural ODEs can facilitate efficient, dynamical system-driven
deep learning in resource-constrained applications.
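As a rough illustration of the idea (a toy sketch under stated assumptions, not the paper's actual method or code), the snippet below builds a proper orthogonal decomposition (POD) basis V from trajectory snapshots of a full-order nonlinear system, then integrates the projected dynamics dz/dt = V^T f(V z) in the reduced subspace. The right-hand side f, the dimensions, and the basis size are all made-up stand-ins for a trained Neural ODE layer.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
n, r = 64, 8                          # full and reduced state dimensions
W = rng.standard_normal((n, n)) / np.sqrt(n)

def f(t, x):
    return np.tanh(W @ x)             # toy nonlinear ODE right-hand side

# Collect snapshots of a full-order trajectory and build a POD basis V
x0 = rng.standard_normal(n)
t_eval = np.linspace(0.0, 1.0, 50)
full = solve_ivp(f, (0.0, 1.0), x0, t_eval=t_eval)
U, s, _ = np.linalg.svd(full.y, full_matrices=False)
V = U[:, :r]                          # orthonormal n x r projection basis

# Reduced dynamics: project, evaluate the full model, project back
def f_reduced(t, z):
    return V.T @ f(t, V @ z)

red = solve_ivp(f_reduced, (0.0, 1.0), V.T @ x0, t_eval=t_eval)
x_approx = V @ red.y                  # lift reduced states to full space
err = np.linalg.norm(full.y - x_approx) / np.linalg.norm(full.y)
```

The solver now steps an r-dimensional system instead of an n-dimensional one; the paper's contribution is integrating such projection and interpolation operations directly as network layers rather than as a post-hoc wrapper.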
Related papers
- Neural Fractional Differential Equations [2.812395851874055]
Fractional Differential Equations (FDEs) are essential tools for modelling complex systems in science and engineering.
We propose the Neural FDE, a novel deep neural network architecture that adjusts a FDE to the dynamics of data.
arXiv Detail & Related papers (2024-03-05T07:45:29Z) - Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z) - Correcting auto-differentiation in neural-ODE training [19.472357078065194]
We find that when a neural network employs high-order forms to approximate the underlying ODE flows, brute-force computation using auto-differentiation often produces non-converging artificial oscillations.
We propose a straightforward post-processing technique that effectively eliminates these oscillations, rectifies the computation and thus respects the updates of the underlying flow.
arXiv Detail & Related papers (2023-06-03T20:34:14Z) - Neural Delay Differential Equations: System Reconstruction and Image
Classification [14.59919398960571]
We propose a new class of continuous-depth neural networks with delay, named Neural Delay Differential Equations (NDDEs)
Compared to NODEs, NDDEs have a stronger capacity for nonlinear representation.
We achieve lower loss and higher accuracy not only on synthetically produced data but also on CIFAR-10, a well-known image dataset.
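For intuition about what the delay adds (a minimal hypothetical sketch, not the NDDE implementation): a delay ODE dx/dt = f(x(t), x(t - tau)) depends on the trajectory's history, not just its current state, so even a fixed-step integrator must carry a buffer of past states.

```python
import numpy as np

def integrate_dde(f, x0, tau, t_end, dt):
    """Forward-Euler integration of dx/dt = f(x(t), x(t - tau))
    with constant initial history x(t) = x0 for t <= 0."""
    steps = int(round(t_end / dt))
    lag = int(round(tau / dt))
    xs = [x0] * (lag + 1)             # history buffer
    for _ in range(steps):
        x_now, x_delayed = xs[-1], xs[-1 - lag]
        xs.append(x_now + dt * f(x_now, x_delayed))
    return np.array(xs[lag:])         # trajectory on [0, t_end]

# Toy delayed negative feedback: dx/dt = -x(t - tau), a damped oscillation
traj = integrate_dde(lambda x, xd: -xd, np.array([1.0]),
                     tau=0.5, t_end=5.0, dt=0.01)
```

In an NDDE, f would be a neural network and the integration would be differentiable; the history buffer is the extra representational state that plain NODEs lack.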
arXiv Detail & Related papers (2023-04-11T16:09:28Z) - NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with
Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning tasks into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
arXiv Detail & Related papers (2023-02-20T19:36:52Z) - Neural Laplace: Learning diverse classes of differential equations in
the Laplace domain [86.52703093858631]
We propose a unified framework for learning diverse classes of differential equations (DEs) including all the aforementioned ones.
Instead of modelling the dynamics in the time domain, we model it in the Laplace domain, where the history-dependencies and discontinuities in time can be represented as summations of complex exponentials.
In the experiments, Neural Laplace shows superior performance in modelling and extrapolating the trajectories of diverse classes of DEs.
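The Laplace-domain view can be illustrated on plain linear dynamics (a toy assumption, not Neural Laplace itself): for dx/dt = A x, the solution is exactly a summation of complex exponentials exp(lambda_k t) over the eigenvalues of A.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])          # damped oscillator (complex eigenvalues)
x0 = np.array([1.0, 0.0])

lam, P = np.linalg.eig(A)             # eigenvalues and eigenvectors
c = np.linalg.solve(P, x0)            # modal coordinates of the initial state

def x(t):
    # Solution as a sum of complex exponentials in the eigenbasis
    return (P @ (c * np.exp(lam * t))).real

x_at_1 = x(1.0)                       # evaluate anywhere without time-stepping
```

Because each mode is evaluated in closed form, there is no sequential time-stepping; Neural Laplace exploits this structure to represent history-dependent and discontinuous dynamics that are awkward in the time domain.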
arXiv Detail & Related papers (2022-06-10T02:14:59Z) - A memory-efficient neural ODE framework based on high-level adjoint
differentiation [4.063868707697316]
We present a new neural ODE framework, PNODE, based on high-level discrete algorithmic differentiation.
We show that PNODE achieves the highest memory efficiency when compared with other reverse-accurate methods.
arXiv Detail & Related papers (2022-06-02T20:46:26Z) - Neural Operator with Regularity Structure for Modeling Dynamics Driven
by SPDEs [70.51212431290611]
Stochastic partial differential equations (SPDEs) are significant tools for modeling dynamics in many areas including atmospheric sciences and physics.
We propose the Neural Operator with Regularity Structure (NORS) which incorporates the feature vectors for modeling dynamics driven by SPDEs.
We conduct experiments on various SPDEs, including the dynamic Phi^4_1 model and the 2d Navier-Stokes equation.
arXiv Detail & Related papers (2022-04-13T08:53:41Z) - Stiff Neural Ordinary Differential Equations [0.0]
We first show the challenges of learning neural ODEs in the classical stiff ODE system of Robertson's problem.
We then present successful demonstrations in stiff systems of Robertson's problem and an air pollution problem.
arXiv Detail & Related papers (2021-03-29T05:24:56Z) - Time Dependence in Non-Autonomous Neural ODEs [74.78386661760662]
We propose a novel family of Neural ODEs with time-varying weights.
We outperform previous Neural ODE variants in both speed and representational capacity.
arXiv Detail & Related papers (2020-05-05T01:41:46Z) - Interpolation Technique to Speed Up Gradients Propagation in Neural ODEs [71.26657499537366]
We propose a simple literature-based method for the efficient approximation of gradients in neural ODE models.
We compare it with the reverse dynamic method to train neural ODEs on classification, density estimation, and inference approximation tasks.
arXiv Detail & Related papers (2020-03-11T13:15:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.