Upper Approximation Bounds for Neural Oscillators
- URL: http://arxiv.org/abs/2512.01015v1
- Date: Sun, 30 Nov 2025 18:20:40 GMT
- Title: Upper Approximation Bounds for Neural Oscillators
- Authors: Zifeng Huang, Konstantin M. Zuev, Yong Xia, Michael Beer
- Abstract summary: Theoretically quantifying the capacities of neural network architectures remains a significant challenge. This study considers the neural oscillator consisting of a second-order ODE followed by a multilayer perceptron. The results provide a robust theoretical foundation for the effective application of the neural oscillator in science and engineering.
- Score: 8.075776288865907
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural oscillators, which originate from second-order ordinary differential equations (ODEs), have demonstrated competitive performance in stably learning causal mappings between long-term sequences or continuous temporal functions. However, theoretically quantifying the capacities of their neural network architectures remains a significant challenge. This study considers the neural oscillator consisting of a second-order ODE followed by a multilayer perceptron (MLP). Two upper approximation bounds are derived: one for approximating causal, uniformly continuous operators between continuous temporal function spaces, and one for approximating uniformly asymptotically incrementally stable second-order dynamical systems. The proof method established for the bound on causal continuous operators also applies directly to state-space models consisting of a linear time-continuous complex recurrent neural network followed by an MLP. The theoretical results reveal that the approximation error of the neural oscillator for approximating second-order dynamical systems scales polynomially with the reciprocals of the widths of the two utilized MLPs, thus mitigating the curse of parametric complexity. The decay rates of the two established approximation error bounds are validated through two numerical cases. These results provide a robust theoretical foundation for the effective application of the neural oscillator in science and engineering.
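For orientation, here is a minimal runnable sketch of the architecture the paper analyzes: a second-order ODE hidden state integrated with an explicit Euler scheme, followed by an MLP readout. The weight shapes, the tanh nonlinearity, and the discretization are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, Ws, bs):
    """Plain tanh MLP with a linear final layer."""
    for W, b in zip(Ws[:-1], bs[:-1]):
        x = np.tanh(W @ x + b)
    return Ws[-1] @ x + bs[-1]

def neural_oscillator(u_seq, dt=0.01, d_hidden=32, d_out=1):
    """Integrate the second-order ODE  y'' = tanh(W y + W' y' + V u + b)
    with an explicit Euler scheme, then map the state through an MLP readout."""
    d_in = u_seq.shape[1]
    W = rng.normal(scale=0.1, size=(d_hidden, d_hidden))
    Wp = rng.normal(scale=0.1, size=(d_hidden, d_hidden))
    V = rng.normal(scale=0.1, size=(d_hidden, d_in))
    b = np.zeros(d_hidden)
    # Readout MLP; its width is one of the quantities the error bounds depend on.
    Ws = [rng.normal(scale=0.1, size=(64, d_hidden)),
          rng.normal(scale=0.1, size=(d_out, 64))]
    bs = [np.zeros(64), np.zeros(d_out)]

    y, yd = np.zeros(d_hidden), np.zeros(d_hidden)
    outputs = []
    for u in u_seq:                     # causal: each output uses only past inputs
        ydd = np.tanh(W @ y + Wp @ yd + V @ u + b)
        yd = yd + dt * ydd
        y = y + dt * yd
        outputs.append(mlp(y, Ws, bs))
    return np.array(outputs)

out = neural_oscillator(np.sin(np.linspace(0.0, 6.28, 200))[:, None])
print(out.shape)                        # (200, 1)
```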
Related papers
- Capturing reduced-order quantum many-body dynamics out of equilibrium via neural ordinary differential equations [0.0]
We show that a neural ODE model trained on exact 2RDM data can reproduce its dynamics without any explicit three-particle information.
The magnitude of the time-averaged three-particle-correlation buildup appears to be the primary predictor of success.
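As a schematic of the neural-ODE mechanics involved (not the paper's 2RDM model), the sketch below shows the pattern: a small network defines the vector field, and a fixed-step RK4 integrator rolls the state forward. The vector-field weights here are untrained random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(scale=0.3, size=(16, 4)), np.zeros(16)
W2, b2 = rng.normal(scale=0.3, size=(4, 16)), np.zeros(4)

def f(x):
    """Learned vector field dx/dt = f_theta(x); here an untrained 1-hidden-layer net."""
    return W2 @ np.tanh(W1 @ x + b1) + b2

def rk4_step(x, dt):
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

x = np.ones(4)          # stand-in for a vectorized reduced density matrix
for _ in range(100):
    x = rk4_step(x, 0.01)
print(x.round(3))
```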
arXiv Detail & Related papers (2025-12-15T21:48:10Z) - Delay-adaptive Control of Nonlinear Systems with Approximate Neural Operator Predictors [6.093618731228799]
We propose a rigorous method for implementing predictor feedback controllers in nonlinear systems with unknown and arbitrarily long actuator delays.
To address the analytically intractable nature of the predictor, we approximate it using a learned neural operator mapping.
We provide a theoretical stability analysis based on the universal approximation theorem of neural operators and the transport partial differential equation (PDE) representation of the delay.
We then prove, via a Lyapunov-Krasovskii functional, semi-global practical convergence of the dynamical system, dependent on the approximation error of the predictor and the delay bounds.
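A hedged skeleton of the predictor feedback pattern: the controller acts on a D-ahead state prediction from an approximate predictor standing in for the learned neural operator. The scalar plant, the gain, and the numerical predictor below are illustrative assumptions, not the paper's construction.

```python
from collections import deque

dt, D = 0.01, 0.5                    # step size; actuator delay (known here)
n_delay = int(D / dt)

def plant(x, u):                     # toy scalar nonlinear plant: x' = -x^3 + u
    return -x**3 + u

def neural_predictor(x, u_buffer):
    """Stand-in for the learned operator mapping (state, input history) to the
    D-ahead state; here plain numerical forward integration."""
    xp = x
    for u_past in u_buffer:
        xp = xp + dt * plant(xp, u_past)
    return xp

x = 1.0
u_buf = deque([0.0] * n_delay, maxlen=n_delay)   # inputs not yet applied
for _ in range(2000):
    u_new = -2.0 * neural_predictor(x, u_buf)    # feedback on the predicted state
    x = x + dt * plant(x, u_buf[0])              # plant receives u delayed by D
    u_buf.append(u_new)
print(round(x, 4))                               # settles near 0
```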
arXiv Detail & Related papers (2025-08-28T02:30:53Z) - Sequential-Parallel Duality in Prefix Scannable Models [68.39855814099997]
Recent developments have given rise to various models, such as Gated Linear Attention (GLA) and Mamba.
This raises a natural question: can we characterize the full class of neural sequence models that support near-constant-time parallel evaluation and linear-time, constant-space sequential inference?
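This sequential/parallel duality can be made concrete with a gated linear-attention-style recurrence (shapes and scalar gating are illustrative): per step the model keeps only a fixed-size matrix state, while the same computation unrolls into an associative scan that parallelizes over the sequence.

```python
import numpy as np

rng = np.random.default_rng(2)
T, d = 128, 8
q, k, v = (rng.normal(size=(T, d)) for _ in range(3))
a = 1 / (1 + np.exp(-rng.normal(size=T)))   # scalar forget gates in (0, 1)

# Sequential inference: fixed-size state, O(T) total time, O(1) memory per step.
S = np.zeros((d, d))
out_seq = np.empty((T, d))
for t in range(T):
    S = a[t] * S + np.outer(k[t], v[t])     # gated rank-1 state update
    out_seq[t] = q[t] @ S

# The same outputs in closed form: an associative prefix scan over t,
# hence evaluable in parallel with O(log T) depth.
out_chk = np.empty((T, d))
for t in range(T):
    acc = np.zeros(d)
    for s in range(t + 1):
        decay = np.prod(a[s + 1 : t + 1])   # product of gates between s and t
        acc += decay * (q[t] @ k[s]) * v[s]
    out_chk[t] = acc
print(np.allclose(out_seq, out_chk))        # True
```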
arXiv Detail & Related papers (2025-06-12T17:32:02Z) - Two-time second-order correlation function [0.0]
A derivation of the two-time second-order correlation function is presented, following approaches based on the differential equation, the coherent-state propagator, and the quasi-statistical distribution function.
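For reference, the normalized two-time second-order correlation function in the usual quantum-optics convention (the paper's normalization may differ):

```latex
g^{(2)}(t,\tau) =
\frac{\langle \hat a^\dagger(t)\,\hat a^\dagger(t+\tau)\,\hat a(t+\tau)\,\hat a(t)\rangle}
     {\langle \hat a^\dagger(t)\hat a(t)\rangle\,\langle \hat a^\dagger(t+\tau)\hat a(t+\tau)\rangle}
```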
arXiv Detail & Related papers (2024-06-15T07:59:39Z) - Convex Analysis of the Mean Field Langevin Dynamics [49.66486092259375]
A convergence rate analysis of the mean field Langevin dynamics is presented.
The proximal Gibbs distribution $p_q$ associated with the dynamics allows us to develop a convergence theory parallel to classical results in convex optimization.
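As a concrete picture (with an illustrative quadratic objective, not the paper's setting), mean field Langevin dynamics can be simulated with interacting particles that follow the gradient of the objective's first variation plus Gaussian noise:

```python
import numpy as np

rng = np.random.default_rng(3)
N, d, eta, lam = 1000, 2, 1e-2, 0.1     # particles, dimension, step size, temperature
X = rng.normal(size=(N, d))             # particles approximating the current law

def grad_first_variation(X):
    """Gradient of the first variation dF/dmu at each particle, for the
    illustrative convex functional F(mu) = 0.5 * ||E_mu[x] - target||^2."""
    target = np.array([1.0, -1.0])
    return np.tile(X.mean(axis=0) - target, (N, 1))

for _ in range(2000):                   # noisy gradient flow over the particles
    X += -eta * grad_first_variation(X) \
         + np.sqrt(2 * eta * lam) * rng.normal(size=(N, d))

print(X.mean(axis=0).round(2))          # mean moves toward the target [1, -1]
```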
arXiv Detail & Related papers (2022-01-25T17:13:56Z) - Non-perturbative analytical diagonalization of Hamiltonians with application to coupling suppression and enhancement in cQED [0.0]
Deriving effective Hamiltonian models plays an essential role in quantum theory.
We present two symbolic methods for computing effective Hamiltonian models.
We study the ZZ and cross-resonance interactions of superconducting qubit systems.
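One standard symbolic route to an effective Hamiltonian, shown here only for orientation (the paper develops its own non-perturbative methods), is a Schrieffer-Wolff transformation for $H = H_0 + \epsilon V$:

```latex
H_{\mathrm{eff}} = e^{S} H e^{-S}
                 = H_0 + \epsilon V + [S, H_0] + \epsilon [S, V]
                   + \tfrac{1}{2}\,[S, [S, H_0]] + \cdots
```

Choosing the anti-Hermitian generator $S$ such that $[S, H_0] = -\epsilon V$ cancels the coupling at first order and leaves $H_{\mathrm{eff}} \approx H_0 + \tfrac{\epsilon}{2}[S, V]$.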
arXiv Detail & Related papers (2021-11-30T19:01:44Z) - Consistency of mechanistic causal discovery in continuous-time using Neural ODEs [85.7910042199734]
We consider causal discovery in continuous time for the study of dynamical systems.
We propose a causal discovery algorithm based on penalized Neural ODEs.
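In outline (the penalty form, threshold, and shapes are assumptions for illustration), one can fit a neural ODE to trajectories while group-penalizing each variable's input weights, then read a candidate causal graph off the surviving weight groups:

```python
import numpy as np

rng = np.random.default_rng(4)
d = 3                                          # number of observed variables
W1 = rng.normal(scale=0.1, size=(d, 16, d))    # input-layer weights per output variable
# ... training of the neural ODE with loss = trajectory fit + lam * penalty omitted

def group_penalty(W1):
    """Group-lasso penalty: one group per candidate edge j -> i, so penalized
    training drives entire unused input groups to zero."""
    return sum(np.linalg.norm(W1[i][:, j]) for i in range(d) for j in range(d))

def causal_graph(W1, thresh=0.05):
    """Adjacency estimate: edge j -> i iff its group norm survives thresholding."""
    A = np.zeros((d, d), dtype=int)
    for i in range(d):
        for j in range(d):
            A[j, i] = int(np.linalg.norm(W1[i][:, j]) > thresh)
    return A

print(group_penalty(W1).round(2))
print(causal_graph(W1))   # dense here (untrained weights); sparse after training
```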
arXiv Detail & Related papers (2021-05-06T08:48:02Z) - Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
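The training pattern can be sketched with linear stand-ins for the two networks: alternating ascent for the adversarial test function and descent for the structural function recovers the instrument moment condition. The toy objective below illustrates the gradient descent-ascent loop, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000
Z = rng.normal(size=n)                  # instrument
X = Z + 0.5 * rng.normal(size=n)        # endogenous regressor
Y = 2.0 * X + 0.3 * rng.normal(size=n)  # structural relation, coefficient 2.0

theta, w, eta = 0.0, 0.0, 0.05          # primal 'f' and adversary 'g' as linear stand-ins
for _ in range(500):
    r = Y - theta * X                   # residual of the structural equation
    # objective: min_theta max_w  E[r * w * Z] - 0.5 * E[(w * Z)^2]
    w += eta * (np.mean(r * Z) - w * np.mean(Z**2))   # ascent for the adversary
    theta -= eta * (-w * np.mean(X * Z))              # descent for the primal player
print(round(theta, 2))                  # approaches the structural coefficient 2.0
```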
arXiv Detail & Related papers (2020-07-02T17:55:47Z) - Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in the desired structure.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
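A minimal sketch of the multi-level idea (the clustering, window size, and weights are illustrative assumptions): short-range interactions are aggregated locally, long-range interactions are routed through a small set of coarse nodes, and both passes cost O(N) rather than O(N^2).

```python
import numpy as np

rng = np.random.default_rng(6)
N, C, d = 256, 16, 8                    # fine nodes, coarse nodes, feature dim
x = rng.normal(size=(N, d))             # node features on the fine level
assign = np.arange(N) % C               # fine-to-coarse clustering (illustrative)
W_local = rng.normal(scale=0.1, size=(d, d))
W_long = rng.normal(scale=0.1, size=(d, d))

def multilevel_layer(x):
    """Short range: average over a small local window. Long range: pool to C
    coarse nodes and broadcast back. Both passes cost O(N), not O(N^2)."""
    local = np.stack([x[max(0, i - 2): i + 3].mean(axis=0) for i in range(N)])
    coarse = np.stack([x[assign == c].mean(axis=0) for c in range(C)])
    longrange = coarse[assign]
    return np.tanh(local @ W_local + longrange @ W_long)

print(multilevel_layer(x).shape)        # (256, 8)
```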
arXiv Detail & Related papers (2020-06-16T21:56:22Z) - Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
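A minimal sketch of a liquid time-constant cell, following the published update in spirit (the weights, gate, and Euler step are illustrative): a bounded gate f both drives the state toward a bias vector A and shortens the effective time constant, which keeps the state bounded.

```python
import numpy as np

rng = np.random.default_rng(7)
d_h, d_in, dt = 8, 2, 0.05
W = rng.normal(scale=0.5, size=(d_h, d_h))
U = rng.normal(scale=0.5, size=(d_h, d_in))
b = np.zeros(d_h)
tau = np.ones(d_h)                      # base time constants
A = rng.normal(size=d_h)                # bias the gate pulls the state toward

def ltc_step(x, I):
    """Euler step of dx/dt = -(1/tau + f(x, I)) * x + f(x, I) * A, with a
    bounded sigmoid gate f; the effective time constant varies with the input."""
    f = 1 / (1 + np.exp(-(W @ x + U @ I + b)))
    return x + dt * (-(1 / tau + f) * x + f * A)

x = np.zeros(d_h)
for t in range(400):
    x = ltc_step(x, np.array([np.sin(0.1 * t), 1.0]))
print(np.round(x, 2))                   # state remains bounded
```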
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.