Training neural operators to preserve invariant measures of chaotic attractors
- URL: http://arxiv.org/abs/2306.01187v3
- Date: Tue, 16 Apr 2024 23:01:43 GMT
- Title: Training neural operators to preserve invariant measures of chaotic attractors
- Authors: Ruoxi Jiang, Peter Y. Lu, Elena Orlova, Rebecca Willett
- Abstract summary: We show that a contrastive learning framework can preserve statistical properties of the dynamics nearly as well as the optimal transport approach.
Our method is shown empirically to preserve invariant measures of chaotic attractors.
- Score: 10.61157131995679
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Chaotic systems make long-horizon forecasts difficult because small perturbations in initial conditions cause trajectories to diverge at an exponential rate. In this setting, neural operators trained to minimize squared error losses, while capable of accurate short-term forecasts, often fail to reproduce statistical or structural properties of the dynamics over longer time horizons and can yield degenerate results. In this paper, we propose an alternative framework designed to preserve invariant measures of chaotic attractors that characterize the time-invariant statistical properties of the dynamics. Specifically, in the multi-environment setting (where each sample trajectory is governed by slightly different dynamics), we consider two novel approaches to training with noisy data. First, we propose a loss based on the optimal transport distance between the observed dynamics and the neural operator outputs. This approach requires expert knowledge of the underlying physics to determine what statistical features should be included in the optimal transport loss. Second, we show that a contrastive learning framework, which does not require any specialized prior knowledge, can preserve statistical properties of the dynamics nearly as well as the optimal transport approach. On a variety of chaotic systems, our method is shown empirically to preserve invariant measures of chaotic attractors.
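The optimal transport loss described in the abstract compares distributions of trajectory statistics rather than pointwise values, so short-term divergence of chaotic trajectories does not dominate the objective. As a minimal illustrative sketch (not the paper's exact loss, which uses expert-chosen physical features and operates in the multi-environment setting), the following computes an entropy-regularized (Sinkhorn) optimal transport distance between two empirical samples of a scalar statistic; the function name and parameters are hypothetical:

```python
import numpy as np

def sinkhorn_distance(x, y, eps=0.1, n_iters=200):
    """Entropy-regularized optimal transport distance between two
    empirical samples of a scalar trajectory statistic."""
    C = (x[:, None] - y[None, :]) ** 2      # squared-distance cost matrix
    K = np.exp(-C / eps)                    # Gibbs kernel
    a = np.full(len(x), 1.0 / len(x))       # uniform source weights
    b = np.full(len(y), 1.0 / len(y))       # uniform target weights
    u = np.ones_like(a)
    for _ in range(n_iters):                # Sinkhorn fixed-point updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]         # approximate transport plan
    return float(np.sum(P * C))             # transport cost under the plan
```

In a training loop, `x` and `y` would be statistics (e.g., time-averaged energies) computed from observed and predicted trajectories, and the distance would be minimized alongside or instead of a squared-error term.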
Related papers
- Improved deep learning of chaotic dynamical systems with multistep penalty losses [0.0]
Predicting the long-term behavior of chaotic systems remains a formidable challenge.
This paper introduces a novel framework that addresses these challenges by leveraging the recently proposed multi-step penalty operators.
arXiv Detail & Related papers (2024-10-08T00:13:57Z) - Physics-guided Active Sample Reweighting for Urban Flow Prediction [75.24539704456791]
Urban flow prediction is a spatio-temporal modeling task that estimates the throughput of transportation services like buses, taxis, and ride-hailing vehicles.
Some recent prediction solutions bring remedies with the notion of physics-guided machine learning (PGML).
We develop a physics-guided network (PN) and propose a data-aware framework, Physics-guided Active Sample Reweighting (P-GASR).
arXiv Detail & Related papers (2024-07-18T15:44:23Z) - Neural Interaction Energy for Multi-Agent Trajectory Prediction [55.098754835213995]
We introduce a framework called Multi-Agent Trajectory prediction via neural interaction Energy (MATE).
MATE assesses the interactive motion of agents by employing neural interaction energy.
To bolster temporal stability, we introduce two constraints: inter-agent interaction constraint and intra-agent motion constraint.
arXiv Detail & Related papers (2024-04-25T12:47:47Z) - Hallmarks of Optimization Trajectories in Neural Networks: Directional Exploration and Redundancy [75.15685966213832]
We analyze the rich directional structure of optimization trajectories represented by their pointwise parameters.
We show that training only the scalar batch-normalization parameters partway into training matches the performance of training the entire network.
arXiv Detail & Related papers (2024-03-12T07:32:47Z) - Tipping Point Forecasting in Non-Stationary Dynamics on Function Spaces [78.08947381962658]
Tipping points are abrupt, drastic, and often irreversible changes in the evolution of non-stationary dynamical systems.
We learn the evolution of such non-stationary systems using a novel recurrent neural operator (RNO), which learns mappings between function spaces.
We propose a conformal prediction framework to forecast tipping points by monitoring deviations from physics constraints.
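The conformal prediction idea in the entry above (flagging a tipping point when deviations from physics constraints exceed a calibrated bound) can be sketched generically. This is an illustrative sketch, not the paper's RNO-based method; the function names are hypothetical, and the residual is assumed to be any scalar measure of constraint violation:

```python
import numpy as np

def conformal_threshold(calib_residuals, alpha=0.1):
    """Conformal quantile: under exchangeability, a new residual
    exceeds this threshold with probability at most alpha."""
    n = len(calib_residuals)
    # Finite-sample corrected quantile level
    q = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return float(np.quantile(calib_residuals, q))

def flag_tipping(residual, threshold):
    # A residual above the calibrated threshold signals a deviation
    # from the physics constraint, i.e., a candidate tipping point.
    return residual > threshold
```

The calibration residuals would be computed on held-out, pre-tipping data; at forecast time, each new residual is compared against the fixed threshold.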
arXiv Detail & Related papers (2023-08-17T05:42:27Z) - Probabilistic Trajectory Prediction with Structural Constraints [38.90152893402733]
This work addresses the problem of predicting the motion trajectories of dynamic objects in the environment.
Recent advances in predicting motion patterns often rely on machine learning techniques to extrapolate motion patterns from observed trajectories.
We propose a novel framework, which combines probabilistic learning and constrained trajectory optimisation.
arXiv Detail & Related papers (2021-07-09T03:48:14Z) - Reinforcement learning of rare diffusive dynamics [0.0]
We present a method to probe rare molecular dynamics trajectories directly using reinforcement learning.
We consider trajectories conditioned to transition between regions of configuration space in finite time, as well as trajectories exhibiting rare fluctuations of time-integrated quantities in the long time limit.
In both cases, reinforcement learning techniques are used to optimize an added force that minimizes the Kullback-Leibler divergence between the conditioned trajectory ensemble and a driven one.
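The Kullback-Leibler objective in the entry above compares a conditioned trajectory ensemble with a driven one. As a rough, hypothetical illustration (the actual method optimizes an added force with reinforcement learning over full path ensembles), the sketch below evaluates the KL divergence between two ensembles under a one-dimensional Gaussian approximation:

```python
import numpy as np

def gaussian_kl(samples_p, samples_q):
    """KL(P || Q) with each ensemble approximated by a 1-D Gaussian,
    a crude stand-in for the trajectory-ensemble KL divergence that
    the learned added force would minimize."""
    mu_p, var_p = np.mean(samples_p), np.var(samples_p)
    mu_q, var_q = np.mean(samples_q), np.var(samples_q)
    return float(0.5 * (np.log(var_q / var_p)
                        + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0))
```

Here `samples_p` would come from the conditioned ensemble and `samples_q` from the driven one; the divergence vanishes exactly when the driven dynamics reproduce the conditioned statistics.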
arXiv Detail & Related papers (2021-05-10T13:00:15Z) - Short- and long-term prediction of a chaotic flow: A physics-constrained reservoir computing approach [5.37133760455631]
We propose a physics-constrained machine learning method, based on reservoir computing, to time-accurately predict extreme events and long-term velocity statistics in a model of turbulent shear flow.
We show that the combination of the two approaches is able to accurately reproduce the velocity statistics and to predict the occurrence and amplitude of extreme events in a model of self-sustaining process in turbulence.
arXiv Detail & Related papers (2021-02-15T12:29:09Z) - Physics-aware, probabilistic model order reduction with guaranteed stability [0.0]
We propose a generative framework for learning an effective, lower-dimensional, coarse-grained dynamical model.
We demonstrate its efficacy and accuracy in multiscale physical systems of particle dynamics.
arXiv Detail & Related papers (2021-01-14T19:16:51Z) - Attribute-Guided Adversarial Training for Robustness to Natural Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize the classifier's exposure to the attribute space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z) - Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but its role in this success is still unclear.
We show that multiplicative noise commonly arises in the parameter dynamics due to variance in the stochastic gradients.
A detailed analysis is conducted of the key factors, including step size and data; state-of-the-art neural network models all exhibit similar results.
arXiv Detail & Related papers (2020-06-11T09:58:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.