Adaptive recurrent flow map operator learning for reaction diffusion dynamics
- URL: http://arxiv.org/abs/2602.09487v1
- Date: Tue, 10 Feb 2026 07:33:13 GMT
- Title: Adaptive recurrent flow map operator learning for reaction diffusion dynamics
- Authors: Huseyin Tunc
- Abstract summary: We develop an operator learner with adaptive recurrent training (DDOL-ART) using a robust recurrent strategy with lightweight validation milestones. DDOL-ART learns one-step operators that remain stable under long rollouts and generalize zero-shot to strong shifts. It is several-fold faster than a physics-based numerical-loss operator learner (NLOL) under matched settings.
- Score: 0.9137554315375919
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reaction-diffusion (RD) equations underpin pattern formation across chemistry, biology, and physics, yet learning stable operators that forecast their long-term dynamics from data remains challenging. Neural-operator surrogates provide resolution-robust prediction, but autoregressive rollouts can drift due to the accumulation of error, and out-of-distribution (OOD) initial conditions often degrade accuracy. Physics-based numerical residual objectives can regularize operator learning, although they introduce additional assumptions, sensitivity to discretization and loss design, and higher training cost. Here we develop a purely data-driven operator learner with adaptive recurrent training (DDOL-ART) using a robust recurrent strategy with lightweight validation milestones that early-exit unproductive rollout segments and redirect optimization. Trained only on a single in-distribution toroidal Gaussian family over short horizons, DDOL-ART learns one-step operators that remain stable under long rollouts and generalize zero-shot to strong morphology shifts across FitzHugh-Nagumo (FN), Gray-Scott (GS), and Lambda-Omega (LO) systems. Across these benchmarks, DDOL-ART delivers a strong accuracy and cost trade-off. It is several-fold faster than a physics-based numerical-loss operator learner (NLOL) under matched settings, and it remains competitive on both in-distribution stability and OOD robustness. Training-dynamics diagnostics show that adaptivity strengthens the correlation between validation error and OOD test error performance, acting as a feedback controller that limits optimization drift. Our results indicate that feedback-controlled recurrent training of DDOL-ART generates robust flow-map surrogates without PDE residuals, while simultaneously maintaining competitiveness with NLOL at significantly reduced training costs.
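The adaptive recurrent training loop described in the abstract can be sketched in a few lines. The following is a hypothetical illustration only: the function names, the `tol` threshold, and the exact early-exit rule are assumptions, not details taken from the paper.

```python
def recurrent_rollout(model, grad_step, u0, targets, val_fn, tol=1.05):
    """Sketch of adaptive recurrent training with validation milestones.

    The one-step operator `model` is applied autoregressively to its own
    predictions; after each segment a lightweight validation milestone
    checks whether the rollout error is still improving, and early-exits
    the segment otherwise (all names and thresholds are illustrative).
    """
    u, best_val, steps_used = u0, float("inf"), 0
    for k, target in enumerate(targets):
        u = model(u)               # autoregressive: feed back own prediction
        grad_step(u, target)       # placeholder for the optimizer update
        val = val_fn(u, target)    # lightweight validation milestone
        steps_used = k + 1
        if val > tol * best_val:   # segment no longer productive: early-exit
            break
        best_val = min(best_val, val)
    return steps_used
```

In this sketch the milestone acts as the "feedback controller" the abstract describes: rollout segments whose validation error stops improving are cut short, so optimization effort is redirected rather than spent on drifting trajectories.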
Related papers
- When Learning Hurts: Fixed-Pole RNN for Real-Time Online Training [58.25341036646294]
We analytically and empirically examine why learning recurrent poles does not provide tangible benefits in real-time online learning scenarios. We show that fixed-pole networks achieve superior performance with lower training complexity, making them more suitable for online real-time tasks.
arXiv Detail & Related papers (2026-02-25T00:15:13Z) - Physics-Informed Laplace Neural Operator for Solving Partial Differential Equations [11.064132774859553]
Physics-Informed Laplace Neural Operator (PILNO) is a fast surrogate solver for partial differential equations. It embeds physics into training through PDE, boundary condition, and initial condition residuals. PILNO consistently improves accuracy in small-data settings, reduces run-to-run variability across random seeds, and achieves stronger generalization than purely data-driven baselines.
arXiv Detail & Related papers (2026-02-13T08:19:40Z) - Error Amplification Limits ANN-to-SNN Conversion in Continuous Control [64.99656514469972]
Spiking Neural Networks (SNNs) can achieve competitive performance by converting already existing well-trained Artificial Neural Networks (ANNs). Existing conversion methods perform poorly in continuous control, where suitable baselines are largely absent. We propose Cross-Step Residual Potential Initialization (CRPI), a lightweight training-free mechanism that carries over residual membrane potentials across decision steps to suppress temporally correlated errors.
arXiv Detail & Related papers (2026-01-29T14:28:00Z) - Identifying and Transferring Reasoning-Critical Neurons: Improving LLM Inference Reliability via Activation Steering [50.63386303357225]
We propose AdaRAS, a lightweight test-time framework that improves reasoning reliability by selectively intervening on neuron activations. AdaRAS identifies Reasoning-Critical Neurons (RCNs) via a polarity-aware mean-difference criterion and adaptively steers their activations during inference. Experiments on 10 mathematics and coding benchmarks demonstrate consistent improvements, including over 13% gains on AIME-24 and AIME-25.
arXiv Detail & Related papers (2026-01-27T17:53:01Z) - Physics-informed Neural Operator Learning for Nonlinear Grad-Shafranov Equation [18.564353542797946]
In magnetic confinement nuclear fusion, rapid and accurate solution of the Grad-Shafranov equation (GSE) is essential for real-time plasma control and analysis. Traditional numerical solvers achieve high precision but are computationally prohibitive, while data-driven surrogates infer quickly but fail to enforce physical laws and generalize poorly beyond training distributions. We present a Physics-Informed Neural Operator (PINO) that directly learns the GSE solution operator, mapping shape parameters of the last closed flux surface to equilibrium solutions for realistic nonlinear current profiles.
arXiv Detail & Related papers (2025-11-24T13:46:38Z) - Human-in-the-loop Online Rejection Sampling for Robotic Manipulation [55.99788088622936]
Hi-ORS stabilizes value estimation by filtering out negatively rewarded samples during online fine-tuning. Hi-ORS fine-tunes a pi-base policy to master contact-rich manipulation in just 1.5 hours of real-world training.
arXiv Detail & Related papers (2025-10-30T11:53:08Z) - Towards Universal Solvers: Using PGD Attack in Active Learning to Increase Generalizability of Neural Operators as Knowledge Distillation from Numerical PDE Solvers [3.780792537808271]
PDE solvers require fine space-time discretizations and local linearizations, leading to high memory cost and slow runtimes. We propose an adversarial teacher-student distillation framework in which a differentiable numerical solver supervises a compact neural operator. Experiments on Burgers and Navier-Stokes systems demonstrate that adversarial distillation substantially improves OOD generalization while preserving the low parameter cost and fast inference of neural operators.
arXiv Detail & Related papers (2025-10-21T18:13:05Z) - ResAD: Normalized Residual Trajectory Modeling for End-to-End Autonomous Driving [64.42138266293202]
ResAD is a Normalized Residual Trajectory Modeling framework. It reframes the learning task to predict the residual deviation from an inertial reference. On the NAVSIM benchmark, ResAD achieves a state-of-the-art PDMS of 88.6 using a vanilla diffusion policy.
arXiv Detail & Related papers (2025-10-09T17:59:36Z) - Physics-Informed Multimodal Bearing Fault Classification under Variable Operating Conditions using Transfer Learning [0.46085106405479537]
This study proposes a physics-informed multimodal convolutional neural network (CNN) with a late fusion architecture. The model incorporates a novel physics-informed loss function that penalizes physically implausible predictions. Experiments on the Paderborn University dataset demonstrate that the proposed physics-informed approach consistently outperforms a non-physics-informed baseline.
arXiv Detail & Related papers (2025-08-11T01:32:09Z) - History-Aware Neural Operator: Robust Data-Driven Constitutive Modeling of Path-Dependent Materials [8.579506050944875]
This study presents an end-to-end learning framework for data-driven modeling of inelastic materials using neural operators. We develop the History-Aware Neural Operator (HANO), an autoregressive model that predicts path-dependent material responses from short segments of recent strain-stress history. We evaluate HANO on two benchmark problems: elastoplasticity with hardening and progressive anisotropic damage in brittle solids.
arXiv Detail & Related papers (2025-06-12T05:19:17Z) - Recurrent Neural Operators: Stable Long-Term PDE Prediction [0.0]
We propose Recurrent Neural Operators (RNOs) to integrate recurrent training into neural operator architectures. RNOs apply the operator to their own predictions over a temporal window, effectively simulating inference-time dynamics during training. We show that recurrent training can reduce the worst-case exponential error growth typical of teacher forcing to linear growth.
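The contrast between recurrent (rollout) training and teacher forcing can be made concrete with a toy scalar example. This is an illustrative sketch under assumed names, not the RNO paper's implementation:

```python
def rollout_loss(op, u0, trajectory, loss):
    """Recurrent (rollout) training objective: the operator is applied to
    its own predictions over a temporal window, so any systematic one-step
    error compounds across the window (illustrative sketch)."""
    u, total = u0, 0.0
    for u_true in trajectory:
        u = op(u)                 # feed back the model's own prediction
        total += loss(u, u_true)  # compare against the ground-truth snapshot
    return total / len(trajectory)

def teacher_forcing_loss(op, u0, trajectory, loss):
    """Teacher-forcing objective: each step restarts from the ground truth,
    so errors never compound during training (but can at inference time)."""
    total, prev = 0.0, u0
    for u_true in trajectory:
        total += loss(op(prev), u_true)
        prev = u_true             # reset to ground truth each step
    return total / len(trajectory)
```

With an operator that has a small constant bias, the rollout loss exceeds the teacher-forcing loss because the bias accumulates step by step; exposing the model to this accumulation during training is exactly what recurrent training exploits.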
arXiv Detail & Related papers (2025-05-27T05:04:35Z) - Stabilizing Machine Learning Prediction of Dynamics: Noise and Noise-inspired Regularization [58.720142291102135]
Recent work has shown that machine learning (ML) models can be trained to accurately forecast the dynamics of chaotic dynamical systems.
In the absence of mitigating techniques, however, such forecasts can exhibit artificially rapid error growth, leading to inaccurate predictions and/or climate instability.
We introduce Linearized Multi-Noise Training (LMNT), a regularization technique that deterministically approximates the effect of many small, independent noise realizations added to the model input during training.
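LMNT's core idea, replacing many small independent input-noise realizations with a deterministic term, echoes the classical equivalence between input-noise injection and a derivative (Jacobian) penalty. The following one-dimensional check is purely illustrative and is not the paper's implementation:

```python
import random

def noisy_loss_mc(f, x, y, sigma, n=20000, seed=0):
    """Monte-Carlo estimate of E[(f(x + eps) - y)^2], eps ~ N(0, sigma^2):
    the expected loss under many small independent input-noise draws."""
    rng = random.Random(seed)
    return sum((f(x + rng.gauss(0.0, sigma)) - y) ** 2 for _ in range(n)) / n

def linearized_loss(f, x, y, sigma, h=1e-5):
    """Deterministic surrogate: plain squared loss plus sigma^2 times the
    squared derivative (finite difference); exact when f is linear in x."""
    fp = (f(x + h) - f(x - h)) / (2.0 * h)
    return (f(x) - y) ** 2 + sigma ** 2 * fp ** 2
```

For a linear map the two quantities agree exactly in expectation, which is the sense in which a deterministic penalty can stand in for averaging over noise realizations.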
arXiv Detail & Related papers (2022-11-09T23:40:52Z) - Self-Damaging Contrastive Learning [92.34124578823977]
Unlabeled data in reality is commonly imbalanced and shows a long-tail distribution.
This paper proposes a principled framework called Self-Damaging Contrastive Learning to automatically balance the representation learning without knowing the classes.
Our experiments show that SDCLR significantly improves not only overall accuracies but also balancedness.
arXiv Detail & Related papers (2021-06-06T00:04:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.