Virtual Analog Modeling of Distortion Circuits Using Neural Ordinary
Differential Equations
- URL: http://arxiv.org/abs/2205.01897v1
- Date: Wed, 4 May 2022 05:19:46 GMT
- Title: Virtual Analog Modeling of Distortion Circuits Using Neural Ordinary
Differential Equations
- Authors: Jan Wilczek, Alec Wright, Vesa Välimäki, Emanuël Habets
- Abstract summary: Recent research in deep learning has shown that neural networks can learn differential equations governing dynamical systems.
In this paper, we adapt this concept to Virtual Analog (VA) modeling to learn the ordinary differential equations (ODEs) governing the first-order and the second-order diode clipper.
The proposed models achieve performance comparable to state-of-the-art recurrent neural networks (RNNs) albeit using fewer parameters.
- Score: 1.8352113484137629
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent research in deep learning has shown that neural networks can learn
differential equations governing dynamical systems. In this paper, we adapt
this concept to Virtual Analog (VA) modeling to learn the ordinary differential
equations (ODEs) governing the first-order and the second-order diode clipper.
The proposed models achieve performance comparable to state-of-the-art
recurrent neural networks (RNNs) albeit using fewer parameters. We show that
this approach does not require oversampling and allows the sampling rate to be
increased after training has completed, which results in increased accuracy.
Using a sophisticated numerical solver increases the accuracy at the cost of
slower processing. ODEs learned this way do not require closed forms
but are still physically interpretable.
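The first-order diode clipper named in the abstract is governed by a well-known ODE, and the neural-ODE approach replaces its right-hand side with a learned network while keeping the numerical solver outside the model. The sketch below is illustrative only, not the paper's implementation: the component values, the tiny randomly initialized MLP (standing in for a trained derivative network), and the fixed-step RK4 solver are all assumptions.

```python
import numpy as np

# First-order diode clipper ODE (Shockley diode model); component values
# below are illustrative, not taken from the paper:
#   dv/dt = (x - v)/(R*C) - (2*Is/C) * sinh(v/Vt)
R, C, Is, Vt = 2.2e3, 10e-9, 2.52e-9, 45.3e-3

def clipper_rhs(v, x):
    """Analytical derivative of the output voltage v, given the input x."""
    return (x - v) / (R * C) - (2.0 * Is / C) * np.sinh(np.clip(v / Vt, -50.0, 50.0))

def make_mlp_rhs(hidden=8, seed=0):
    """Tiny MLP standing in for a trained derivative network f_theta(v, x)."""
    rng = np.random.default_rng(seed)
    W1, b1 = rng.normal(size=(hidden, 2)), np.zeros(hidden)
    W2, b2 = rng.normal(size=(1, hidden)), np.zeros(1)
    def f(v, x):
        h = np.tanh(W1 @ np.array([v, x]) + b1)
        return float(W2 @ h + b2)
    return f

def rk4_step(f, v, x, dt):
    """One classic Runge-Kutta 4 step; the solver is separate from the model."""
    k1 = f(v, x)
    k2 = f(v + 0.5 * dt * k1, x)
    k3 = f(v + 0.5 * dt * k2, x)
    k4 = f(v + dt * k3, x)
    return v + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def simulate(f, x_samples, fs):
    """Integrate sample by sample; changing fs changes dt and nothing else."""
    dt, v, out = 1.0 / fs, 0.0, []
    for x in x_samples:
        v = rk4_step(f, v, x, dt)
        out.append(v)
    return np.array(out)

# A 220 Hz sine driven through the analytical clipper. The explicit RK4 step
# needs a high rate here because the diode term is stiff; an implicit or
# adaptive solver would tolerate lower rates.
fs = 176400
t = np.arange(0.0, 0.01, 1.0 / fs)
x = 0.5 * np.sin(2.0 * np.pi * 220.0 * t)
y = simulate(clipper_rhs, x, fs)
y_mlp = simulate(make_mlp_rhs(), x, fs)  # same solver, learned-style model
print(f"input peak {np.max(np.abs(x)):.3f} V, clipped peak {np.max(np.abs(y)):.3f} V")
```

Because the solver and the model are decoupled, the same right-hand side can be integrated with a different `fs` (or a different solver) after training, which is the sampling-rate flexibility and solver trade-off the abstract highlights.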
Related papers
- Balanced Neural ODEs: nonlinear model order reduction and Koopman operator approximations [0.0]
Variational Autoencoders (VAEs) are a powerful framework for learning compact latent representations.
NeuralODEs excel in learning transient system dynamics.
This work combines the strengths of both to create fast surrogate models with adjustable complexity.
arXiv Detail & Related papers (2024-10-14T05:45:52Z)
- Correcting auto-differentiation in neural-ODE training [19.472357078065194]
We find that when a neural network employs high-order forms to approximate the underlying ODE flows, brute-force computation using auto-differentiation often produces non-converging artificial oscillations.
We propose a straightforward post-processing technique that effectively eliminates these oscillations, rectifies the computation and thus respects the updates of the underlying flow.
arXiv Detail & Related papers (2023-06-03T20:34:14Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
- Neural ODEs with Irregular and Noisy Data [8.349349605334316]
We discuss a methodology to learn differential equation(s) using noisy and irregularly sampled measurements.
In our methodology, the main innovation can be seen in the integration of deep neural networks with the neural ordinary differential equations (ODEs) approach.
The proposed framework to learn a model describing the vector field is highly effective under noisy measurements.
arXiv Detail & Related papers (2022-05-19T11:24:41Z)
- Dynamically-Scaled Deep Canonical Correlation Analysis [77.34726150561087]
Canonical Correlation Analysis (CCA) is a method for extracting features from two views by finding maximally correlated linear projections of them.
We introduce a novel dynamic scaling method for training an input-dependent canonical correlation model.
arXiv Detail & Related papers (2022-03-23T12:52:49Z)
- Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
- Accelerating Neural ODEs Using Model Order Reduction [0.0]
We show that mathematical model order reduction methods can be used for compressing and accelerating Neural ODEs.
We implement our novel compression method by developing Neural ODEs that integrate the necessary subspace-projection operations as layers of the neural network.
arXiv Detail & Related papers (2021-05-28T19:27:09Z)
- Meta-Solver for Neural Ordinary Differential Equations [77.8918415523446]
We investigate how the variability in the solvers' space can improve neural ODE performance.
We show that the right choice of solver parameterization can significantly affect neural ODE models in terms of robustness to adversarial attacks.
arXiv Detail & Related papers (2021-03-15T17:26:34Z)
- Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDEs) is an indispensable part of many branches of science, as many processes can be modelled in terms of PDEs.
Conventional numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, namely physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions -- as well as state-of-the-art numerical solvers, such as spectral solvers.
arXiv Detail & Related papers (2020-09-08T13:26:51Z)
- Stochasticity in Neural ODEs: An Empirical Study [68.8204255655161]
Regularization of neural networks (e.g. dropout) is a widespread technique in deep learning that allows for better generalization.
We show that data augmentation during training improves the performance of both the deterministic and stochastic versions of the same model.
However, the improvements obtained by data augmentation eliminate the empirical regularization gains, making the performance difference between neural ODEs and neural SDEs negligible.
arXiv Detail & Related papers (2020-02-22T22:12:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.