TorchDyn: A Neural Differential Equations Library
- URL: http://arxiv.org/abs/2009.09346v1
- Date: Sun, 20 Sep 2020 03:45:49 GMT
- Title: TorchDyn: A Neural Differential Equations Library
- Authors: Michael Poli, Stefano Massaroli, Atsushi Yamashita, Hajime Asama,
Jinkyoo Park
- Abstract summary: We introduce TorchDyn, a PyTorch library dedicated to continuous-depth learning.
It is designed to elevate neural differential equations to be as accessible as regular plug-and-play deep learning primitives.
- Score: 16.43439140464003
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Continuous-depth learning has recently emerged as a novel perspective on deep
learning, improving performance in tasks related to dynamical systems and
density estimation. Core to these approaches is the neural differential
equation, whose forward passes are the solutions of an initial value problem
parametrized by a neural network. Unlocking the full potential of
continuous-depth models requires a different set of software tools, due to
peculiar differences compared to standard discrete neural networks, e.g.,
inference must be carried out via numerical solvers. We introduce TorchDyn, a
PyTorch library dedicated to continuous-depth learning, designed to elevate
neural differential equations to be as accessible as regular plug-and-play deep
learning primitives. This objective is achieved by identifying and subdividing
different variants into common essential components, which can be combined and
freely repurposed to obtain complex compositional architectures. TorchDyn
further offers step-by-step tutorials and benchmarks designed to guide
researchers and contributors.
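The abstract's central idea, that a neural differential equation's forward pass is the solution of an initial value problem dz/dt = f_theta(z) computed by a numerical solver, can be illustrated with a minimal sketch. This is not TorchDyn's actual API; the vector field, weights, and fixed-step Euler solver below are illustrative stand-ins.

```python
# Minimal sketch of a neural-ODE forward pass: the "network" defines the
# right-hand side f_theta(z), and inference integrates it numerically.
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer vector field standing in for a trained network.
W1 = rng.normal(scale=0.1, size=(2, 8))
W2 = rng.normal(scale=0.1, size=(8, 2))

def vector_field(z):
    """f_theta(z): the learned right-hand side of the ODE."""
    return np.tanh(z @ W1) @ W2

def neural_ode_forward(z0, t0=0.0, t1=1.0, steps=100):
    """Solve dz/dt = f_theta(z) from t0 to t1 with fixed-step Euler."""
    z = z0
    h = (t1 - t0) / steps
    for _ in range(steps):
        z = z + h * vector_field(z)
    return z  # the "output" of the continuous-depth model

z0 = np.array([1.0, -1.0])
z1 = neural_ode_forward(z0)
```

In practice libraries such as TorchDyn replace the hand-rolled Euler loop with adaptive solvers and differentiate through the solve, which is exactly the tooling gap the abstract points to.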
Related papers
- Spectral-Bias and Kernel-Task Alignment in Physically Informed Neural
Networks [4.604003661048267]
Physically informed neural networks (PINNs) are a promising emerging method for solving differential equations.
We propose a comprehensive theoretical framework that sheds light on this important problem.
We derive an integro-differential equation that governs PINN prediction in the large data-set limit.
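The physics-informed loss that PINNs minimize can be shown with a toy example (not taken from the paper): a surrogate u_hat(t) is penalized by the residual of the governing equation, here du/dt + u = 0 with u(0) = 1, where the one-parameter surrogate and the finite-difference derivative are illustrative simplifications of a neural network and autodiff.

```python
# Toy physics-informed loss: equation residual + boundary penalty.
import numpy as np

def u_hat(t, a):
    # One-parameter surrogate standing in for a neural network.
    return np.exp(a * t)

def pinn_loss(a, ts, eps=1e-5):
    # Residual of du/dt + u = 0, derivative via central differences.
    du = (u_hat(ts + eps, a) - u_hat(ts - eps, a)) / (2 * eps)
    residual = du + u_hat(ts, a)
    boundary = (u_hat(np.array([0.0]), a) - 1.0) ** 2
    return np.mean(residual**2) + np.mean(boundary)

ts = np.linspace(0.0, 1.0, 50)
# The exact solution u(t) = exp(-t) corresponds to a = -1,
# so the loss is minimized (near zero) there.
```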
arXiv Detail & Related papers (2023-07-12T18:00:02Z)
- Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
arXiv Detail & Related papers (2022-04-21T05:27:09Z)
- Neural Galerkin Schemes with Active Learning for High-Dimensional Evolution Equations [44.89798007370551]
This work proposes Neural Galerkin schemes based on deep learning that generate training data with active learning for numerically solving high-dimensional partial differential equations.
Neural Galerkin schemes build on the Dirac-Frenkel variational principle to train networks by minimizing the residual sequentially over time.
Our finding is that the active form of gathering training data of the proposed Neural Galerkin schemes is key for numerically realizing the expressive power of networks in high dimensions.
arXiv Detail & Related papers (2022-03-02T19:09:52Z)
- Dynamic Neural Diversification: Path to Computationally Sustainable Neural Networks [68.8204255655161]
Small neural networks with a constrained number of trainable parameters can be suitable resource-efficient candidates for many simple tasks.
We explore the diversity of the neurons within the hidden layer during the learning process.
We analyze how the diversity of the neurons affects predictions of the model.
arXiv Detail & Related papers (2021-09-20T15:12:16Z)
- Learning ODEs via Diffeomorphisms for Fast and Robust Integration [40.52862415144424]
Differentiable solvers are central for learning Neural ODEs.
We propose an alternative approach to learning ODEs from data.
We observe improvements of up to two orders of magnitude when integrating learned ODEs with gradient.
arXiv Detail & Related papers (2021-07-04T14:32:16Z)
- Fully differentiable model discovery [0.0]
We propose an approach by combining neural network based surrogates with Sparse Bayesian Learning.
Our work expands PINNs to various types of neural network architectures, and connects neural network-based surrogates to the rich field of Bayesian parameter inference.
arXiv Detail & Related papers (2021-06-09T08:11:23Z)
- Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, namely physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions -- as well as state-of-the-art numerical solvers, such as spectral solvers.
arXiv Detail & Related papers (2020-09-08T13:26:51Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Learning to Stop While Learning to Predict [85.7136203122784]
Many algorithm-inspired deep models are restricted to a fixed depth for all inputs.
Similar to algorithms, the optimal depth of a deep architecture may be different for different input instances.
In this paper, we tackle this varying depth problem using a steerable architecture.
We show that the learned deep model along with the stopping policy improves the performances on a diverse set of tasks.
arXiv Detail & Related papers (2020-06-09T07:22:01Z)
- Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, a truncated max-product Belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
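A schematic sketch of an ingredient this entry builds on, max-product belief propagation on a chain, which for a chain-structured labeling problem reduces to a Viterbi-style dynamic program over unary and pairwise log-potentials. This is an illustration of the classical algorithm, not the paper's BP-Layer, and the function name and cost conventions are assumptions.

```python
# Max-product BP on a chain MRF: forward max-marginal messages with
# backpointers, then backtracking to recover the best labeling.
import numpy as np

def max_product_chain(unary, pairwise):
    """Most likely labeling of a chain MRF.

    unary:    (T, K) log-potentials per node and label
    pairwise: (K, K) log-potentials between neighboring labels
    """
    T, K = unary.shape
    msgs = np.zeros((T, K))
    back = np.zeros((T, K), dtype=int)
    msgs[0] = unary[0]
    for t in range(1, T):
        # Score of arriving at each label from each previous label.
        scores = msgs[t - 1][:, None] + pairwise  # (K, K)
        back[t] = np.argmax(scores, axis=0)
        msgs[t] = unary[t] + np.max(scores, axis=0)
    labels = np.zeros(T, dtype=int)
    labels[-1] = int(np.argmax(msgs[-1]))
    for t in range(T - 2, -1, -1):
        labels[t] = back[t + 1, labels[t + 1]]
    return labels
```

With zero pairwise terms this picks the per-node argmax; a strong diagonal (smoothness) term instead favors a constant labeling, which is the trade-off such layers expose in dense prediction.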
arXiv Detail & Related papers (2020-03-13T13:11:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.