AI-Accelerated Flow Simulation: A Robust Auto-Regressive Framework for Long-Term CFD Forecasting
- URL: http://arxiv.org/abs/2412.05657v3
- Date: Fri, 18 Jul 2025 01:15:54 GMT
- Title: AI-Accelerated Flow Simulation: A Robust Auto-Regressive Framework for Long-Term CFD Forecasting
- Authors: Sunwoong Yang, Ricardo Vinuesa, Namwoo Kang
- Abstract summary: We introduce the first implementation of the two-step Adams-Bashforth method specifically tailored for data-driven AR prediction. We develop three novel adaptive weighting strategies that dynamically adjust the importance of different future time steps. Our framework accurately predicts 350 future time steps, reducing mean squared error from 0.125 to 0.002.
- Score: 2.3964255330849356
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study addresses the critical challenge of error accumulation in spatio-temporal auto-regressive (AR) predictions within scientific machine learning models by exploring temporal integration schemes and adaptive multi-step rollout strategies. We introduce the first implementation of the two-step Adams-Bashforth method specifically tailored for data-driven AR prediction, leveraging historical derivative information to enhance numerical stability without additional computational overhead. To validate our approach, we systematically evaluate time integration schemes across canonical 2D PDEs before extending to complex Navier-Stokes cylinder vortex shedding dynamics. Additionally, we develop three novel adaptive weighting strategies that dynamically adjust the importance of different future time steps during multi-step rollout training. Our analysis reveals that as physical complexity increases, such sophisticated rollout techniques become essential, with the Adams-Bashforth scheme demonstrating consistent robustness across investigated systems and our best adaptive approach delivering an 89% improvement over conventional fixed-weight methods while maintaining similar computational costs. For the complex Navier-Stokes vortex shedding problem, despite using an extremely lightweight graph neural network with just 1,177 trainable parameters and training on only 50 snapshots, our framework accurately predicts 350 future time steps, reducing mean squared error from 0.125 (single-step direct prediction) to 0.002 (Adams-Bashforth with proposed multi-step rollout). Our integrated methodology demonstrates an 83% improvement over standard noise injection techniques and maintains robustness under severe spatial constraints; specifically, when trained on only a partial spatial domain, it still achieves 58% and 27% improvements over direct prediction and forward Euler methods, respectively.
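For readers unfamiliar with multi-step integrators, the sketch below shows how a two-step Adams-Bashforth (AB2) update can drive an auto-regressive rollout. It is a minimal illustration assuming the network predicts the instantaneous time derivative of the state; the function and variable names are ours, not the authors' code.

```python
import torch

def ab2_rollout(model, u_prev, u_curr, n_steps, dt=1.0):
    """Auto-regressive rollout with the two-step Adams-Bashforth scheme.

    Assumes model(u) approximates the time derivative du/dt. AB2 blends the
    current and previous derivative estimates,
        u_{t+1} = u_t + dt * (3/2 * f_t - 1/2 * f_{t-1}),
    reusing f_{t-1} from the previous step, so the historical derivative
    information costs no extra forward pass.
    """
    states = [u_prev, u_curr]
    f_prev = model(u_prev)               # derivative estimate at step t-1
    for _ in range(n_steps):
        f_curr = model(states[-1])       # derivative estimate at step t
        states.append(states[-1] + dt * (1.5 * f_curr - 0.5 * f_prev))
        f_prev = f_curr
    return torch.stack(states)
```

The reuse of f_prev is what the abstract means by leveraging historical derivative information without additional computational overhead: stability improves while the per-step cost stays at one model evaluation.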
Related papers
- PnP-DA: Towards Principled Plug-and-Play Integration of Variational Data Assimilation and Generative Models [0.1052166918701117]
Earth system modeling presents a fundamental challenge in scientific computing.
Even the most powerful AI- or physics-based forecast systems suffer from gradual error accumulation.
We propose a Plug-and-Play algorithm that alternates a lightweight, gradient-based analysis update with a single forward pass through a pretrained prior conditioned on the background forecast.
arXiv Detail & Related papers (2025-08-01T05:19:19Z) - Elucidated Rolling Diffusion Models for Probabilistic Weather Forecasting [52.6508222408558]
We introduce Elucidated Rolling Diffusion Models (ERDM), the first framework to unify a rolling forecast structure with the principled, performant design of Elucidated Diffusion Models (EDM).
On 2D Navier-Stokes simulations and ERA5 global weather forecasting at 1.5° resolution, ERDM consistently outperforms key diffusion-based baselines.
arXiv Detail & Related papers (2025-06-24T21:44:31Z) - Flow-GRPO: Training Flow Matching Models via Online RL [75.70017261794422]
We propose Flow-GRPO, the first method integrating online reinforcement learning (RL) into flow matching models.
Our approach uses two key strategies: (1) an ODE-to-SDE conversion that transforms a deterministic Ordinary Differential Equation (ODE) into an equivalent Stochastic Differential Equation (SDE) that matches the original model's marginal distribution at all timesteps; and (2) a Denoising Reduction strategy that reduces training denoising steps while retaining the original inference timestep number.
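As background, the marginal-preserving ODE-to-SDE conversion that such methods rely on can be stated compactly. The identity below is the standard one from the score-based diffusion literature, given here for context; the exact construction in Flow-GRPO may differ. If a deterministic flow $\dot{x} = v(x, t)$ transports marginal densities $p_t$, then for any noise scale $\sigma(t)$ the SDE

$$\mathrm{d}x = \Big[v(x, t) + \tfrac{1}{2}\sigma(t)^2 \nabla_x \log p_t(x)\Big]\mathrm{d}t + \sigma(t)\,\mathrm{d}W_t$$

shares the same marginals $p_t$, because the added score-based drift exactly cancels the diffusion term in the Fokker-Planck equation. This is what allows the stochasticity that RL exploration needs to be injected without changing the per-timestep marginal distribution.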
arXiv Detail & Related papers (2025-05-08T17:58:45Z) - An Adaptive Framework for Autoregressive Forecasting in CFD Using Hybrid Modal Decomposition and Deep Learning [3.1337872355726084]
This work presents, to the best of the authors' knowledge, the first generalizable and fully data-driven adaptive framework designed to stabilize deep learning (DL) autoregressive forecasting models over long time horizons.
The proposed methodology alternates between two phases: (i) predicting the evolution of the flow field over a selected time interval using a trained DL model, and (ii) updating the model with newly generated CFD data when stability degrades, thus maintaining accurate long-term forecasting.
The framework is validated across three increasingly complex flow regimes, from laminar to turbulent, demonstrating improvements from 30% to 95%.
arXiv Detail & Related papers (2025-05-02T18:33:41Z) - Learning with Imperfect Models: When Multi-step Prediction Mitigates Compounding Error [25.387541996071093]
Compounding error, where small prediction mistakes accumulate over time, presents a major challenge in learning-based control.
One approach to mitigate compounding error is to train multi-step predictors directly, rather than relying on autoregressive rollout of a single-step model.
We show that when the model class is well-specified and accurately captures the system dynamics, single-step models achieve lower prediction error.
On the other hand, when the model class is misspecified due to partial observability, direct multi-step predictors can significantly reduce bias and thus outperform single-step approaches.
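To make the comparison concrete, here is a minimal sketch of the two objectives (illustrative names, not the paper's code): an autoregressive rollout of a single-step model, where prediction errors feed back into the input, versus a direct multi-step predictor trained on the whole horizon at once.

```python
import torch

def rollout_loss(single_step, x0, targets):
    """Roll a single-step model forward; its own errors become its inputs."""
    pred, loss = x0, 0.0
    for y in targets:                     # targets: list of future states
        pred = single_step(pred)          # compounding error enters here
        loss = loss + ((pred - y) ** 2).mean()
    return loss / len(targets)

def direct_multistep_loss(multi_step, x0, targets):
    """A direct h-step predictor maps x0 to all h future states at once."""
    preds = multi_step(x0)                # shape (h, ...) in a single shot
    return ((preds - torch.stack(targets)) ** 2).mean()
```

Under a well-specified model class the first objective wins, since each step is an easier one-step problem; under misspecification such as partial observability, the second avoids feeding biased predictions back into the model.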
arXiv Detail & Related papers (2025-04-02T14:18:52Z) - Unsupervised Parameter Efficient Source-free Post-pretraining [52.27955794126508]
We introduce UpStep, an Unsupervised Parameter-efficient Source-free post-pretraining approach to adapt a base model from a source domain to a target domain.
We use various general backbone architectures, both supervised and unsupervised, trained on ImageNet as our base model.
arXiv Detail & Related papers (2025-02-28T18:54:51Z) - HybridTrack: A Hybrid Approach for Robust Multi-Object Tracking [7.916733469603948]
HybridTrack is a novel 3D multi-object tracking approach for vehicles.
It integrates a data-driven Kalman Filter (KF) within a tracking-by-detection paradigm.
It achieves 82.72% HOTA accuracy, significantly outperforming state-of-the-art methods.
arXiv Detail & Related papers (2025-01-02T14:17:19Z) - Optimal Transport-Based Displacement Interpolation with Data Augmentation for Reduced Order Modeling of Nonlinear Dynamical Systems [0.0]
We present a novel reduced-order model (ROM) that exploits optimal transport theory and displacement interpolation to enhance the representation of nonlinear dynamics in complex systems.
We show improved accuracy and efficiency in predicting complex system behaviors, indicating the potential of this approach for a wide range of applications in computational physics and engineering.
arXiv Detail & Related papers (2024-11-13T16:29:33Z) - Imitation Learning from Observations: An Autoregressive Mixture of Experts Approach [2.4427666827706074]
This paper presents a novel approach to imitation learning from observations, where an autoregressive mixture of experts model is deployed to fit the underlying policy.
The effectiveness of the proposed framework is validated using two autonomous driving datasets collected from human demonstrations.
arXiv Detail & Related papers (2024-11-12T22:56:28Z) - Annealed Winner-Takes-All for Motion Forecasting [48.200282332176094]
We show how an aWTA loss can be integrated with state-of-the-art motion forecasting models to enhance their performance.
Our approach can be easily incorporated into any trajectory prediction model normally trained using WTA.
arXiv Detail & Related papers (2024-09-17T13:26:17Z) - Motion Forecasting via Model-Based Risk Minimization [8.766024024417316]
We propose a novel sampling method applicable to trajectory prediction based on the predictions of multiple models.
We first show that conventional sampling based on predicted probabilities can degrade performance due to a lack of alignment between models.
By using state-of-the-art models as base learners, our approach constructs diverse and effective ensembles for optimal trajectory sampling.
arXiv Detail & Related papers (2024-09-16T09:03:28Z) - Adaptive Planning with Generative Models under Uncertainty [20.922248169620783]
Planning with generative models has emerged as an effective decision-making paradigm across a wide range of domains.
While continuous replanning at each timestep might seem intuitive because it allows decisions to be made based on the most recent environmental observations, it results in substantial computational challenges.
Our work addresses this challenge by introducing a simple adaptive planning policy that leverages the generative model's ability to predict long-horizon state trajectories.
arXiv Detail & Related papers (2024-08-02T18:07:53Z) - Learning Long-Horizon Predictions for Quadrotor Dynamics [48.08477275522024]
We study the key design choices for efficiently learning long-horizon prediction dynamics for quadrotors.
We show that sequential modeling techniques are advantageous in minimizing compounding errors compared to other types of solutions.
We propose a novel decoupled dynamics learning approach, which further simplifies the learning process while also enhancing the approach's modularity.
arXiv Detail & Related papers (2024-07-17T19:06:47Z) - Towards Stable Machine Learning Model Retraining via Slowly Varying Sequences [6.067007470552307]
We propose a model-agnostic framework for finding sequences of models that are stable across retraining iterations.
We develop a mixed-integer optimization formulation that is guaranteed to recover optimal models.
We find that, on average, a 2% reduction in predictive power leads to a 30% improvement in stability.
arXiv Detail & Related papers (2024-03-28T22:45:38Z) - Certified Human Trajectory Prediction [66.1736456453465]
Trajectory prediction plays an essential role in autonomous vehicles.
We propose a certification approach tailored for the task of trajectory prediction.
We address the inherent challenges associated with trajectory prediction, including unbounded outputs and multi-modality.
arXiv Detail & Related papers (2024-03-20T17:41:35Z) - Not All Steps are Equal: Efficient Generation with Progressive Diffusion Models [62.155612146799314]
We propose a novel two-stage training strategy termed Step-Adaptive Training.
In the initial stage, a base denoising model is trained to encompass all timesteps.
We partition the timesteps into distinct groups, fine-tuning the model within each group to achieve specialized denoising capabilities.
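A toy sketch of that partition-and-fine-tune idea follows; the model, the noising schedule, and the three timestep ranges are all illustrative assumptions, not the paper's configuration.

```python
import copy
import torch
import torch.nn as nn

base = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 16))
groups = [(0, 250), (250, 500), (500, 1000)]  # assumed timestep partition

specialists = []
for lo, hi in groups:
    m = copy.deepcopy(base)                   # start from the base denoiser
    opt = torch.optim.Adam(m.parameters(), lr=1e-4)
    for _ in range(100):                      # toy fine-tuning loop
        t = torch.randint(lo, hi, (32, 1)).float() / 1000
        x0 = torch.randn(32, 16)
        noise = torch.randn_like(x0)
        x_t = (1 - t) * x0 + t * noise        # toy noising schedule
        loss = ((m(x_t) - noise) ** 2).mean() # denoising objective
        opt.zero_grad(); loss.backward(); opt.step()
    specialists.append(m)                     # one specialist per group
```

At sampling time, each reverse-diffusion step would route to the specialist whose group contains the current timestep.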
arXiv Detail & Related papers (2023-12-20T03:32:58Z) - Learn from the Past: A Proxy Guided Adversarial Defense Framework with Self Distillation Regularization [53.04697800214848]
Adversarial Training (AT) is pivotal in fortifying the robustness of deep learning models.
AT methods, relying on direct iterative updates for the target model's defense, frequently encounter obstacles such as unstable training and catastrophic overfitting.
We present a general proxy guided defense framework, LAST (Learn from the Past).
arXiv Detail & Related papers (2023-10-19T13:13:41Z) - Learning Sample Difficulty from Pre-trained Models for Reliable Prediction [55.77136037458667]
We propose to utilize large-scale pre-trained models to guide downstream model training with sample difficulty-aware entropy regularization.
We simultaneously improve accuracy and uncertainty calibration across challenging benchmarks.
arXiv Detail & Related papers (2023-04-20T07:29:23Z) - Stabilizing Machine Learning Prediction of Dynamics: Noise and Noise-inspired Regularization [58.720142291102135]
Recent work has shown that machine learning (ML) models can be trained to accurately forecast the dynamics of chaotic dynamical systems.
In the absence of mitigating techniques, however, this approach can result in artificially rapid error growth, leading to inaccurate predictions and/or climate instability.
We introduce Linearized Multi-Noise Training (LMNT), a regularization technique that deterministically approximates the effect of many small, independent noise realizations added to the model input during training.
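The general principle behind replacing sampled input noise with a deterministic surrogate can be seen from a first-order expansion; this is a generic derivation for context, not the exact LMNT formula. For small input noise $\epsilon \sim \mathcal{N}(0, \sigma^2 I)$ and a model $f$ with Jacobian $J_f$,

$$\mathbb{E}_\epsilon\big[\|f(x+\epsilon) - y\|^2\big] \;\approx\; \|f(x) - y\|^2 + \sigma^2\,\|J_f(x)\|_F^2,$$

since $f(x+\epsilon) \approx f(x) + J_f(x)\,\epsilon$ and the cross term vanishes in expectation. Many independent noise realizations are thus summarized by a single deterministic Jacobian penalty.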
arXiv Detail & Related papers (2022-11-09T23:40:52Z) - Fast Distributionally Robust Learning with Variance Reduced Min-Max Optimization [85.84019017587477]
Distributionally robust supervised learning (DRSL) is emerging as a key paradigm for building reliable machine learning systems for real-world applications.
Existing algorithms for solving Wasserstein DRSL involve solving complex subproblems or fail to make use of gradients.
We revisit Wasserstein DRSL through the lens of min-max optimization and derive scalable and efficiently implementable extra-gradient algorithms.
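As background, the classical extragradient step that such algorithms build on looks like the sketch below (a generic version, not the paper's DRSL-specific algorithm):

```python
import torch

def extragradient_step(x, y, f, eta=0.1):
    """One extragradient update for the saddle problem min_x max_y f(x, y).

    Gradients are first evaluated at an extrapolated half-step; the actual
    update then uses those half-step gradients, which damps the cycling that
    plain simultaneous gradient descent-ascent exhibits on min-max problems.
    """
    gx, gy = torch.autograd.grad(f(x, y), (x, y))
    x_half = (x - eta * gx).detach().requires_grad_(True)  # extrapolate
    y_half = (y + eta * gy).detach().requires_grad_(True)
    gx, gy = torch.autograd.grad(f(x_half, y_half), (x_half, y_half))
    x_new = (x - eta * gx).detach().requires_grad_(True)   # corrected step
    y_new = (y + eta * gy).detach().requires_grad_(True)
    return x_new, y_new
```

On the bilinear toy game f(x, y) = (x * y).sum(), iterating this step converges to the saddle at the origin, whereas naive simultaneous descent-ascent spirals outward.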
arXiv Detail & Related papers (2021-04-27T16:56:09Z) - Learning Unstable Dynamics with One Minute of Data: A Differentiation-based Gaussian Process Approach [47.045588297201434]
We show how to exploit the differentiability of Gaussian processes to create a state-dependent linearized approximation of the true continuous dynamics.
We validate our approach by iteratively learning the system dynamics of an unstable system such as a 9-D segway.
arXiv Detail & Related papers (2021-03-08T05:08:47Z) - Learning Accurate Long-term Dynamics for Model-based Reinforcement Learning [7.194382512848327]
We propose a new parametrization for supervised learning on state-action data that stably predicts at longer horizons.
Our results in simulated and experimental robotic tasks show that our trajectory-based models yield significantly more accurate long term predictions.
arXiv Detail & Related papers (2020-12-16T18:47:37Z) - Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z) - BERT Loses Patience: Fast and Robust Inference with Early Exit [91.26199404912019]
We propose Patience-based Early Exit as a plug-and-play technique to improve the efficiency and robustness of a pretrained language model.
Our approach improves inference efficiency as it allows the model to make a prediction with fewer layers.
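The patience mechanism itself fits in a few lines; the sketch below assumes internal classifiers attached after each layer (names are illustrative):

```python
import torch

def patience_early_exit(layer_logits, patience=2):
    """Exit once predictions stay unchanged for `patience` consecutive layers.

    layer_logits: iterable of per-layer logits from internal classifiers,
    ordered from the shallowest layer to the deepest.
    """
    prev, streak, depth = None, 0, 0
    for depth, logits in enumerate(layer_logits, start=1):
        pred = logits.argmax(dim=-1)
        streak = streak + 1 if prev is not None and torch.equal(pred, prev) else 0
        prev = pred
        if streak >= patience:
            return pred, depth            # early exit at this layer
    return prev, depth                    # fell through to the final layer
```

Deeper layers run only while shallow predictions keep changing, which is how the method trades depth for speed without a fixed confidence threshold.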
arXiv Detail & Related papers (2020-06-07T13:38:32Z) - Model-Augmented Actor-Critic: Backpropagating through Paths [81.86992776864729]
Current model-based reinforcement learning approaches use the model simply as a learned black-box simulator.
We show how to make more effective use of the model by exploiting its differentiability.
arXiv Detail & Related papers (2020-05-16T19:18:10Z)