Extracting Nonlinear Dynamical Response Functions from Time Evolution
- URL: http://arxiv.org/abs/2507.07679v1
- Date: Thu, 10 Jul 2025 12:00:00 GMT
- Title: Extracting Nonlinear Dynamical Response Functions from Time Evolution
- Authors: Atsushi Ono
- Abstract summary: We develop a general framework based on the functional derivative to extract nonlinear dynamical response functions from the temporal evolution of physical quantities. We validate our approach by calculating the second- and third-order optical responses in the Rice-Mele model. This framework is broadly applicable to any method that can compute real-time dynamics, offering a powerful and versatile tool for investigating nonlinear responses in dynamical systems.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We develop a general framework based on the functional derivative to extract nonlinear dynamical response functions from the temporal evolution of physical quantities, without explicitly computing multipoint correlation functions. We validate our approach by calculating the second- and third-order optical responses in the Rice-Mele model and further apply it to a many-body interacting system using a tensor network method. This framework is broadly applicable to any method that can compute real-time dynamics, offering a powerful and versatile tool for investigating nonlinear responses in dynamical systems.
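In practice, a functional derivative such as chi^(2)(t; t1, t2) = delta^2 <O(t)> / delta A(t1) delta A(t2) can be realized numerically by differencing real-time simulations run with weak impulsive fields. The sketch below shows one standard finite-difference realization of this idea, not necessarily the paper's exact protocol; evolve is a hypothetical user-supplied routine that time-evolves the system under the given field kicks {time: amplitude} and returns <O(t)> on a fixed time grid as a numpy array.

    def second_order_response(evolve, t1, t2, a=1e-3):
        # chi2(t; t1, t2) ~ d^2 <O(t)> / dA(t1) dA(t2), estimated from
        # four real-time runs with weak delta-function field kicks.
        o12 = evolve({t1: a, t2: a})  # both kicks applied
        o1 = evolve({t1: a})          # first kick only
        o2 = evolve({t2: a})          # second kick only
        o0 = evolve({})               # free evolution
        # The combination cancels the zeroth- and first-order terms (and
        # the single-kick second-order terms), leaving the contribution
        # bilinear in the two kicks.
        return (o12 - o1 - o2 + o0) / a**2

Third-order responses follow the same pattern, with an eight-run difference over three kick amplitudes.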
Related papers
- Learning Chaotic Dynamics with Neuromorphic Network Dynamics
This study investigates how dynamical systems may be learned and modelled with a neuromorphic network which is itself a dynamical system. The neuromorphic network used in this study is based on a complex electrical circuit composed of memristive elements that produce neuro-synaptic nonlinear responses to input electrical signals.
arXiv Detail & Related papers (2025-06-12T14:50:55Z)
- Uncovering the Functional Roles of Nonlinearity in Memory
We go beyond performance comparisons to systematically dissect the functional role of nonlinearity in recurrent networks. We use Almost Linear Recurrent Neural Networks (AL-RNNs), which allow fine-grained control over nonlinearity. We find that minimal nonlinearity is not only sufficient but often optimal, yielding models that are simpler, more robust, and more interpretable than their fully nonlinear or linear counterparts.
arXiv Detail & Related papers (2025-06-09T16:32:19Z)
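As a concrete picture of the entry above, here is one plausible minimal form of an almost-linear recurrent update (an illustrative assumption, not the authors' exact parameterisation): the latent update is linear except for a ReLU applied to the last P of the d_z units, so P directly dials the amount of nonlinearity.

    import numpy as np

    rng = np.random.default_rng(0)
    d_z, P = 8, 2                           # only 2 of 8 units are nonlinear

    A = 0.8 * np.eye(d_z)                   # linear self-connections
    W = 0.05 * rng.normal(size=(d_z, d_z))  # weak mixing weights
    h = np.zeros(d_z)                       # bias

    def partial_relu(z):
        out = z.copy()
        out[-P:] = np.maximum(z[-P:], 0.0)  # nonlinearity on last P units only
        return out

    z = rng.normal(size=d_z)
    for _ in range(100):                    # roll the latent dynamics forward
        z = A @ z + W @ partial_relu(z) + h

Setting P = 0 recovers a purely linear system and P = d_z a fully nonlinear one, which is what makes the degree of nonlinearity a controlled experimental knob.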
- Generative System Dynamics in Recurrent Neural Networks
We investigate the continuous-time dynamics of Recurrent Neural Networks (RNNs). We show that skew-symmetric weight matrices are fundamental to enable stable limit cycles in both linear and nonlinear configurations. Numerical simulations showcase how nonlinear activation functions not only maintain limit cycles, but also enhance the numerical stability of the system integration process.
arXiv Detail & Related papers (2025-04-16T10:39:43Z)
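A toy check of the skew-symmetry claim above, as a minimal sketch of the linear case dx/dt = W x: when W = -W^T, x^T W x = 0, so the state norm is conserved and every trajectory rotates on a closed orbit instead of decaying or exploding.

    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.normal(size=(4, 4))
    W = M - M.T                             # skew-symmetric: W = -W^T

    dt = 0.01
    # Implicit-midpoint (Cayley) step; for skew-symmetric W this matrix is
    # orthogonal, so it conserves the norm exactly, mirroring the
    # continuous-time flow.
    step = np.linalg.solve(np.eye(4) - 0.5 * dt * W,
                           np.eye(4) + 0.5 * dt * W)

    x = rng.normal(size=4)
    r0 = np.linalg.norm(x)
    for _ in range(10_000):
        x = step @ x
    print(np.linalg.norm(x) / r0)           # ~1.0: the orbit never decays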
- Learnable Infinite Taylor Gaussian for Dynamic View Rendering
This paper introduces a novel approach based on a learnable Taylor Formula to model the temporal evolution of Gaussians. The proposed method achieves state-of-the-art performance in this domain.
arXiv Detail & Related papers (2024-12-05T16:03:37Z)
- Equivariant Graph Neural Operator for Modeling 3D Dynamics
We propose the Equivariant Graph Neural Operator (EGNO), which directly models dynamics as trajectories instead of just next-step prediction.
EGNO explicitly learns the temporal evolution of 3D dynamics: we formulate the dynamics as a function over time and learn neural operators to approximate it.
Comprehensive experiments in multiple domains, including particle simulations, human motion capture, and molecular dynamics, demonstrate the significantly superior performance of EGNO against existing methods.
arXiv Detail & Related papers (2024-01-19T21:50:32Z)
- Capturing dynamical correlations using implicit neural representations
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
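The workflow in the entry above (fit a differentiable surrogate once, then recover physical parameters by gradient-based optimisation) can be caricatured in a few lines. This is a sketch under assumptions: a closed-form Lorentzian stands in for the trained neural-network surrogate, and finite differences stand in for automatic differentiation.

    import numpy as np

    omega = np.linspace(0.0, 4.0, 200)

    def surrogate(theta):
        # Toy stand-in for a network trained on model-Hamiltonian data:
        # maps parameters (J, gamma) to a scattering-like intensity curve.
        J, gamma = theta
        return gamma / ((omega - 2.0 * J) ** 2 + gamma ** 2)

    def loss(theta, data):
        return np.mean((surrogate(theta) - data) ** 2)

    def grad(theta, data, eps=1e-6):
        # Central differences stand in for the autodiff gradient.
        g = np.zeros_like(theta)
        for i in range(theta.size):
            d = np.zeros_like(theta); d[i] = eps
            g[i] = (loss(theta + d, data) - loss(theta - d, data)) / (2 * eps)
        return g

    true_theta = np.array([0.9, 0.3])
    data = surrogate(true_theta)            # noiseless 'experimental' data
    theta = np.array([0.7, 0.5])            # initial guess
    for _ in range(5000):
        theta -= 2e-2 * grad(theta, data)
    print(theta)                            # drifts toward [0.9, 0.3]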
- Decomposed Linear Dynamical Systems (dLDS) for learning the latent components of neural dynamics
We propose a new decomposed dynamical system model that represents complex non-stationary and nonlinear dynamics of time series data.
Our model is trained through a dictionary learning procedure, where we leverage recent results in tracking sparse vectors over time.
In both continuous-time and discrete-time instructional examples, we demonstrate that our model can accurately approximate the original system.
arXiv Detail & Related papers (2022-06-07T02:25:38Z)
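The decomposition in the entry above can be illustrated with a two-element dictionary (the notation is assumed for illustration; the actual model learns both the operators and the sparse coefficients from data). The state evolves as x_{t+1} = (sum_j c_j(t) A_j) x_t, with the coefficients c_j(t) sparse at each time, so nonstationary behaviour appears as a change in which operators are active.

    import numpy as np

    theta = 0.1
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])  # slow rotation
    decay = 0.95 * np.eye(2)                           # contraction
    dictionary = [rot, decay]

    x = np.array([1.0, 0.0])
    traj = []
    for t in range(200):
        c = [1.0, 0.0] if t < 100 else [0.0, 1.0]      # sparse switch at t=100
        A_t = sum(cj * Aj for cj, Aj in zip(c, dictionary))
        x = A_t @ x
        traj.append(x)
    # The trajectory circles the origin for 100 steps, then spirals inward.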
- Capturing Actionable Dynamics with Structured Latent Ordinary Differential Equations
We propose a structured latent ODE model that captures system input variations within its latent representation.
Building on a static variable specification, our model learns factors of variation for each input to the system, thus separating the effects of the system inputs in the latent space.
arXiv Detail & Related papers (2022-02-25T20:00:56Z)
- Neural Dynamic Mode Decomposition for End-to-End Modeling of Nonlinear Dynamics
We propose a neural dynamic mode decomposition for estimating a lift function based on neural networks.
With our proposed method, the forecast error is backpropagated through the neural networks and the spectral decomposition.
Our experiments demonstrate the effectiveness of our proposed method in terms of eigenvalue estimation and forecast performance.
arXiv Detail & Related papers (2020-12-11T08:34:26Z)
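A compact caricature of the lift-then-decompose pipeline in the entry above: here a fixed polynomial lift stands in for the learned neural-network lift, and a plain least-squares fit stands in for the end-to-end training through the spectral decomposition.

    import numpy as np

    def lift(x):
        # Stand-in lift function: map the state to Koopman-style observables.
        return np.array([x, x ** 2, x ** 3])

    # Snapshots from a nonlinear scalar map x_{t+1} = 0.9 x_t - 0.1 x_t^3.
    xs = [0.5]
    for _ in range(60):
        xs.append(0.9 * xs[-1] - 0.1 * xs[-1] ** 3)

    X = np.stack([lift(x) for x in xs[:-1]], axis=1)  # lifted states
    Y = np.stack([lift(x) for x in xs[1:]], axis=1)   # one step ahead

    K = Y @ np.linalg.pinv(X)     # best linear operator in the lifted space
    eigvals = np.linalg.eigvals(K)
    print(np.abs(eigvals))        # spectrum approximates the dynamics' modes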
- Spatio-Temporal Activation Function To Map Complex Dynamical Systems
Reservoir computing, which is a subset of recurrent neural networks, is actively used to simulate complex dynamical systems.
The inclusion of a temporal term alters the fundamental nature of an activation function and provides the capability to capture the complex dynamics of time-series data.
arXiv Detail & Related papers (2020-09-06T23:08:25Z)
- UNIPoint: Universally Approximating Point Processes Intensities
We provide a proof that a class of learnable functions can universally approximate any valid intensity function.
We implement UNIPoint, a novel neural point process model, using recurrent neural networks to parameterise sums of basis functions upon each event.
arXiv Detail & Related papers (2020-07-28T09:31:56Z)
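To make the entry above concrete, here is a toy positive intensity built from a sum of exponential basis functions. The fixed weights and decays below stand in for the parameters an RNN would emit after each event; the basis family and values are illustrative assumptions, not the paper's exact construction.

    import numpy as np

    def softplus(z):
        return np.log1p(np.exp(z))      # positive transfer function

    events = np.array([0.0, 1.2, 2.0])  # observed event times
    alphas = np.array([0.8, 0.3])       # basis weights (an RNN would emit these)
    betas = np.array([1.0, 3.0])        # basis decays  (likewise)

    def intensity(t):
        # Sum of basis functions of the time since the most recent event,
        # pushed through softplus so the intensity stays valid (positive).
        t_last = events[events <= t].max()
        basis = alphas * np.exp(-betas * (t - t_last))
        return softplus(basis.sum())

    print(intensity(2.5))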
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.