FLUID-LLM: Learning Computational Fluid Dynamics with Spatiotemporal-aware Large Language Models
- URL: http://arxiv.org/abs/2406.04501v1
- Date: Thu, 6 Jun 2024 20:55:40 GMT
- Title: FLUID-LLM: Learning Computational Fluid Dynamics with Spatiotemporal-aware Large Language Models
- Authors: Max Zhu, Adrián Bazaga, Pietro Liò
- Abstract summary: Large language models (LLMs) have shown remarkable pattern recognition and reasoning abilities.
We introduce FLUID-LLM, a novel framework combining pre-trained LLMs with spatiotemporal-aware encoding to predict unsteady fluid dynamics.
Our results demonstrate that FLUID-LLM effectively integrates spatiotemporal information into pre-trained LLMs, enhancing CFD task performance.
- Score: 15.964726158869777
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning computational fluid dynamics (CFD) traditionally relies on computationally intensive simulations of the Navier-Stokes equations. Recently, large language models (LLMs) have shown remarkable pattern recognition and reasoning abilities in natural language processing (NLP) and computer vision (CV). However, these models struggle with the complex geometries inherent in fluid dynamics. We introduce FLUID-LLM, a novel framework combining pre-trained LLMs with spatiotemporal-aware encoding to predict unsteady fluid dynamics. Our approach leverages the temporal autoregressive abilities of LLMs alongside spatial-aware layers, bridging the gap between previous CFD prediction methods. Evaluations on standard benchmarks reveal significant performance improvements across various fluid datasets. Our results demonstrate that FLUID-LLM effectively integrates spatiotemporal information into pre-trained LLMs, enhancing CFD task performance.
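The abstract describes the architecture only at a high level; below is a minimal, hypothetical sketch of the pattern it names: a spatial-aware encoder that maps flow-field patches into an LLM's token embedding space, a frozen pre-trained backbone that autoregresses over timesteps, and a decoder back to the physical field. All module names, shapes, and the choice of GPT-2 are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an LLM-backed CFD predictor in the spirit of
# FLUID-LLM. Shapes, module names, and the GPT-2 backbone are assumptions.
import torch
import torch.nn as nn
from transformers import GPT2Model

class FluidLLMSketch(nn.Module):
    def __init__(self, field_dim=3, patch_cells=64, d_model=768):
        super().__init__()
        # Spatial-aware encoder: flatten each per-timestep flow patch
        # (velocity components, pressure, ...) into one token embedding.
        self.encoder = nn.Linear(field_dim * patch_cells, d_model)
        self.backbone = GPT2Model.from_pretrained("gpt2")  # frozen pre-trained LLM
        for p in self.backbone.parameters():
            p.requires_grad = False
        # Decoder: project hidden states back to the physical field.
        self.decoder = nn.Linear(d_model, field_dim * patch_cells)

    def forward(self, frames):  # frames: (batch, time, field_dim * patch_cells)
        tokens = self.encoder(frames)                        # (B, T, d_model)
        hidden = self.backbone(inputs_embeds=tokens).last_hidden_state
        return self.decoder(hidden)                          # next-step prediction per token

# Autoregressive rollout: feed each predicted frame back in as input.
model = FluidLLMSketch()
state = torch.randn(1, 8, 3 * 64)   # 8 observed timesteps
pred = model(state)[:, -1:, :]      # predicted timestep 9
```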
Related papers
- Recursive Learning of Asymptotic Variational Objectives [49.69399307452126]
General state-space models (SSMs) are widely used in statistical machine learning and are among the most classical generative models for sequential time-series data.
Online sequential IWAE (OSIWAE) allows for online learning of both model parameters and a Markovian recognition model for inferring latent states.
This approach is more theoretically well-founded than recently proposed online variational SMC methods.
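For context, OSIWAE builds on the importance-weighted autoencoder (IWAE) objective; the standard K-sample bound, shown below, is the generic form (not this paper's online, sequential variant):

```latex
% Standard K-sample IWAE lower bound on \log p_\theta(x); OSIWAE adapts
% this style of objective to online inference in state-space models.
\mathcal{L}_K(\theta,\phi) \;=\;
\mathbb{E}_{z^{1:K} \sim q_\phi(\cdot \mid x)}
\left[ \log \frac{1}{K} \sum_{k=1}^{K}
\frac{p_\theta(x, z^{k})}{q_\phi(z^{k} \mid x)} \right]
\;\le\; \log p_\theta(x).
```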
arXiv Detail & Related papers (2024-11-04T16:12:37Z)
- CoMMIT: Coordinated Instruction Tuning for Multimodal Large Language Models [68.64605538559312]
In this paper, we analyze MLLM instruction tuning from both theoretical and empirical perspectives.
Inspired by our findings, we propose a measurement to quantitatively evaluate the learning balance.
In addition, we introduce an auxiliary loss regularization method to promote updating of the generation distribution of MLLMs.
arXiv Detail & Related papers (2024-07-29T23:18:55Z)
- Characterizing Truthfulness in Large Language Model Generations with Local Intrinsic Dimension [63.330262740414646]
We study how to characterize and predict the truthfulness of texts generated from large language models (LLMs).
We suggest investigating internal activations and quantifying an LLM's truthfulness using the local intrinsic dimension (LID) of model activations.
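The abstract does not specify the estimator; a common choice for local intrinsic dimension is the Levina-Bickel maximum-likelihood estimate over nearest-neighbor distances, sketched here on hypothetical activation vectors (the application to truthfulness is the paper's; the estimator below is the generic one):

```python
# Generic Levina-Bickel MLE estimate of local intrinsic dimension (LID),
# applied to stand-in LLM activation vectors.
import numpy as np

def lid_mle(query, activations, k=20):
    """LID of `query` w.r.t. a reference set of activation vectors."""
    dists = np.linalg.norm(activations - query, axis=1)
    dists = np.sort(dists)[:k]
    dists = dists[dists > 0]  # drop the query itself if present
    # m_hat = [ (1/(k-1)) * sum_j log(T_k / T_j) ]^{-1}
    return -1.0 / np.mean(np.log(dists[:-1] / dists[-1]))

acts = np.random.randn(1000, 4096)   # stand-in for hidden states
print(lid_mle(acts[0], acts[1:], k=20))
```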
arXiv Detail & Related papers (2024-02-28T04:56:21Z)
- LLMs learn governing principles of dynamical systems, revealing an in-context neural scaling law [3.281128493853064]
A language model trained primarily on texts achieves accurate predictions of dynamical system time series without fine-tuning or prompt engineering.
We present a flexible and efficient algorithm for extracting probability density functions of multi-digit numbers directly from LLMs.
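One way to read the density-extraction idea: the probability an LLM assigns to a multi-digit number factorizes over its digit tokens, so a PDF over, say, three-digit values can be assembled from chained next-token probabilities. The sketch below assumes a hypothetical `next_token_probs(prefix)` helper returning a distribution over the ten digit tokens; it illustrates the factorization, not the authors' exact algorithm.

```python
# Illustration: P(number) under an LLM factorizes over its digit tokens.
# `next_token_probs` is a hypothetical helper returning
# P(next digit | prefix) as a dict {'0': p0, ..., '9': p9}.
def number_prob(digits, next_token_probs):
    prob, prefix = 1.0, ""
    for d in digits:
        prob *= next_token_probs(prefix)[d]
        prefix += d
    return prob

# A PDF over all 3-digit values is the normalized mass per value:
def three_digit_pdf(next_token_probs):
    mass = {f"{n:03d}": number_prob(f"{n:03d}", next_token_probs)
            for n in range(1000)}
    total = sum(mass.values())
    return {k: v / total for k, v in mass.items()}
```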
arXiv Detail & Related papers (2024-02-01T17:28:10Z)
- Differentiable Turbulence II [0.0]
We develop a framework for integrating deep learning models into a generic finite element numerical scheme for solving the Navier-Stokes equations.
We show that the learned closure can achieve accuracy comparable to traditional large eddy simulation run on a finer grid, which amounts to an equivalent speedup of 10x.
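The learned closure replaces a classical subgrid model inside the differentiable solver; for reference, a standard Smagorinsky eddy-viscosity closure, the kind of algebraic baseline a learned model is compared against, looks like this (2-D uniform-grid form; not the paper's method):

```python
# Classical Smagorinsky subgrid closure on a 2-D uniform grid. Axis 0 is x,
# axis 1 is y. A learned closure would substitute a network for this formula.
import numpy as np

def smagorinsky_nu_t(u, v, dx, cs=0.17):
    """Eddy viscosity nu_t = (cs*dx)^2 * |S|, with |S| = sqrt(2 S_ij S_ij)."""
    dudx, dudy = np.gradient(u, dx, axis=(0, 1))
    dvdx, dvdy = np.gradient(v, dx, axis=(0, 1))
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)                  # symmetric strain-rate component
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (cs * dx) ** 2 * s_mag
```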
arXiv Detail & Related papers (2023-07-25T14:27:49Z)
- Asynchronous Multi-Model Dynamic Federated Learning over Wireless Networks: Theory, Modeling, and Optimization [20.741776617129208]
Federated learning (FL) has emerged as a key technique for distributed machine learning (ML).
We first formulate rectangular scheduling steps and functions to capture the impact of system parameters on learning performance.
Our analysis sheds light on the joint impact of device training variables and asynchronous scheduling decisions.
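The paper's contribution is the scheduling analysis itself; as background, the kind of asynchronous update rule such analyses model is staleness-discounted mixing, in the style of FedAsync (a generic rule, not this paper's formulation):

```python
# Minimal staleness-weighted asynchronous FL update (FedAsync-style mixing).
def async_update(global_w, client_w, staleness, alpha0=0.6):
    """Mix a (possibly stale) client model into the global model."""
    alpha = alpha0 / (1.0 + staleness)   # discount stale contributions
    return [(1 - alpha) * gw + alpha * cw
            for gw, cw in zip(global_w, client_w)]
```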
arXiv Detail & Related papers (2023-05-22T21:39:38Z)
- Guaranteed Conservation of Momentum for Learning Particle-based Fluid Dynamics [96.9177297872723]
We present a novel method for guaranteeing linear momentum conservation in learned physics simulations.
We enforce conservation of momentum with a hard constraint, which we realize via antisymmetrical continuous convolutional layers.
In combination, the proposed method allows us to increase the physical accuracy of the learned simulator substantially.
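The hard constraint works because pairwise antisymmetry enforces Newton's third law: if particle j's contribution to i is the exact negative of i's contribution to j, every pairwise impulse cancels and total linear momentum is unchanged. A minimal sketch of that construction (plain pairwise messages standing in for the paper's antisymmetric continuous convolutions):

```python
# Antisymmetric pairwise interactions conserve total linear momentum:
# f(i, j) = -f(j, i) makes every pairwise impulse cancel in the sum.
import numpy as np

def antisym_forces(x, raw_message):
    n = x.shape[0]
    forces = np.zeros_like(x)
    for i in range(n):
        for j in range(i + 1, n):
            m = raw_message(x[i], x[j]) - raw_message(x[j], x[i])  # antisymmetrized
            forces[i] += m
            forces[j] -= m
    return forces   # sums to zero by construction

x = np.random.randn(5, 2)
f = antisym_forces(x, lambda a, b: np.tanh(a - 2 * b))  # any learnable message
print(np.abs(f.sum(axis=0)).max())   # ~0: momentum conserved
```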
arXiv Detail & Related papers (2022-10-12T09:12:59Z)
- Machine Learning model for gas-liquid interface reconstruction in CFD numerical simulations [59.84561168501493]
The volume of fluid (VoF) method is widely used in multi-phase flow simulations to track and locate the interface between two immiscible fluids.
A major bottleneck of the VoF method is the interface reconstruction step due to its high computational cost and low accuracy on unstructured grids.
We propose a machine learning enhanced VoF method based on Graph Neural Networks (GNN) to accelerate the interface reconstruction on general unstructured meshes.
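A single round of message passing over the mesh's cell-adjacency graph is the core primitive of such a GNN; a schematic numpy sketch follows (feature sizes and the normal-prediction head are assumptions, not the paper's architecture):

```python
# One message-passing step over an unstructured mesh's cell-adjacency graph,
# predicting a per-cell interface normal from VoF features. Schematic only.
import numpy as np

def mp_step(h, edges, W_self, W_nbr):
    """h: (n_cells, d) features; edges: list of (i, j) adjacent cell pairs."""
    agg = np.zeros_like(h)
    deg = np.zeros((h.shape[0], 1))
    for i, j in edges:   # symmetric neighbor aggregation
        agg[i] += h[j]; agg[j] += h[i]
        deg[i] += 1;    deg[j] += 1
    agg /= np.maximum(deg, 1)
    return np.tanh(h @ W_self + agg @ W_nbr)

rng = np.random.default_rng(0)
h = rng.normal(size=(6, 4))                 # e.g. VoF fraction + cell geometry
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
h = mp_step(h, edges, rng.normal(size=(4, 4)), rng.normal(size=(4, 4)))
normals = h[:, :2] / np.linalg.norm(h[:, :2], axis=1, keepdims=True)  # unit normals
```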
arXiv Detail & Related papers (2022-07-12T17:07:46Z)
- Real-time simulation of parameter-dependent fluid flows through deep learning-based reduced order models [0.2538209532048866]
Reduced order models (ROMs) provide reliable approximations to parameter-dependent fluid dynamics problems in rapid times.
Deep learning (DL)-based ROMs overcome these limitations by learning both the nonlinear trial manifold and the reduced dynamics in a non-intrusive way.
The resulting POD-DL-ROMs are shown to provide accurate results in almost real-time for the flow around a cylinder benchmark, the fluid-structure interaction between an elastic beam attached to a fixed, rigid block and a laminar incompressible flow, and the blood flow in a cerebral aneurysm.
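The POD half of a POD-DL-ROM is a truncated SVD of the snapshot matrix; the DL half replaces intrusive reduced equations with a network mapping parameters and time to reduced coordinates. A hedged sketch of the POD step and the cheap reconstruction it enables (sizes illustrative; `reduced_coords` stands in for the learned map):

```python
# POD basis from a snapshot matrix via truncated SVD: the projection half
# of a POD-DL-ROM. A network q(mu, t) would supply the reduced dynamics.
import numpy as np

S = np.random.randn(10_000, 200)   # snapshots: n_dofs x n_snapshots
U, sv, _ = np.linalg.svd(S, full_matrices=False)
r = np.searchsorted(np.cumsum(sv**2) / np.sum(sv**2), 0.999) + 1
V = U[:, :r]                       # POD basis capturing 99.9% of the energy

def rom_solution(mu, t, reduced_coords):
    """Reconstruct the full field from learned reduced coordinates q(mu, t)."""
    return V @ reduced_coords(mu, t)   # (n_dofs,) in O(n_dofs * r) time
```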
arXiv Detail & Related papers (2021-06-10T13:07:33Z)
- Deep Bayesian Active Learning for Accelerating Stochastic Simulation [74.58219903138301]
Interactive Neural Process (INP) is a deep Bayesian active learning framework for stochastic simulations.
For active learning, we propose a novel acquisition function, Latent Information Gain (LIG), calculated in the latent space of NP-based models.
The results demonstrate that STNP outperforms the baselines in the learning setting and that LIG achieves state-of-the-art performance for active learning.
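A generic form of such an acquisition step: draw latent-conditioned predictions from the model, score each candidate input by the spread of those samples (a proxy for information gain in latent space), and query the argmax. The sketch below is a variance-based stand-in, not the paper's exact LIG computation:

```python
# Schematic active-learning acquisition: score candidates by the disagreement
# of latent samples from a neural-process-style model. A variance proxy only.
import numpy as np

def acquire(candidates, sample_prediction, n_samples=32):
    """sample_prediction(x) draws one latent-conditioned prediction for x."""
    scores = [np.var([sample_prediction(x) for _ in range(n_samples)])
              for x in candidates]
    return candidates[int(np.argmax(scores))]   # highest predictive spread
```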
arXiv Detail & Related papers (2021-06-05T01:31:51Z)
- Learning Incompressible Fluid Dynamics from Scratch -- Towards Fast, Differentiable Fluid Models that Generalize [7.707887663337803]
Recent deep learning based approaches promise vast speed-ups but do not generalize to new fluid domains.
We propose a novel physics-constrained training approach that generalizes to new fluid domains.
We present an interactive real-time demo to show the speed and generalization capabilities of our trained models.
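Physics-constrained training here means penalizing residuals of the governing equations rather than fitting ground-truth fields; the simplest such term for incompressible flow is the divergence-free condition div u = 0. A finite-difference sketch of that penalty (the full loss would also include momentum residuals):

```python
# Divergence penalty for physics-constrained training of incompressible
# flow models: incompressibility requires du/dx + dv/dy = 0 everywhere.
import numpy as np

def divergence_loss(u, v, dx):
    """Mean squared divergence of a 2-D velocity field on a uniform grid."""
    dudx = np.gradient(u, dx, axis=0)   # axis 0 = x
    dvdy = np.gradient(v, dx, axis=1)   # axis 1 = y
    return np.mean((dudx + dvdy) ** 2)

u, v = np.random.randn(64, 64), np.random.randn(64, 64)
print(divergence_loss(u, v, dx=0.1))    # driven toward 0 during training
```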
arXiv Detail & Related papers (2020-06-15T20:59:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.