Meta-Learning for Airflow Simulations with Graph Neural Networks
- URL: http://arxiv.org/abs/2306.10624v1
- Date: Sun, 18 Jun 2023 19:25:13 GMT
- Title: Meta-Learning for Airflow Simulations with Graph Neural Networks
- Authors: Wenzhuo Liu, Mouadh Yagoubi, Marc Schoenauer
- Abstract summary: We present a meta-learning approach to enhance the performance of learned models on out-of-distribution (OoD) samples.
Specifically, we set the airflow simulation in CFD over various airfoils as a meta-learning problem, where each set of examples defined on a single airfoil shape is treated as a separate task.
We experimentally demonstrate that the proposed approach improves the OoD generalization performance of learned models.
- Score: 3.52359746858894
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The field of numerical simulation is of significant importance for the design
and management of real-world systems, with partial differential equations
(PDEs) being a commonly used mathematical modeling tool. However, solving PDEs
remains a challenge, as traditional numerical solvers often
require high computational costs. As a result, data-driven methods leveraging
machine learning (in particular, Deep Learning) algorithms have been
increasingly proposed to learn models that can predict solutions to complex
PDEs, such as those arising in computational fluid dynamics (CFD). However,
these methods are known to suffer from poor generalization performance on
out-of-distribution (OoD) samples, highlighting the need for more efficient
approaches. To this end, we present a meta-learning approach to enhance the
performance of learned models on OoD samples. Specifically, we set the airflow
simulation in CFD over various airfoils as a meta-learning problem, where each
set of examples defined on a single airfoil shape is treated as a separate
task. Through the use of model-agnostic meta-learning (MAML), we learn a
meta-learner capable of adapting to new tasks, i.e., previously unseen airfoil
shapes, using only a small amount of task-specific data. We experimentally
demonstrate that the proposed approach improves the OoD generalization
performance of learned models while maintaining computational efficiency.
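To make the task formulation concrete, below is a minimal, self-contained sketch of first-order MAML over per-airfoil tasks, written in plain PyTorch. It is an illustration of the setup described in the abstract, not the authors' implementation: the TinyGNN architecture, the support/query split, the per-node regression target, and all hyperparameters are assumptions made for this example, and the full second-order MAML update is replaced by its common first-order approximation.

```python
# Illustrative sketch only (assumed names and hyperparameters, not the paper's code).
import torch
import torch.nn as nn

class TinyGNN(nn.Module):
    """One round of mean-neighbour message passing followed by a node-wise decoder."""
    def __init__(self, in_dim=3, hidden=64, out_dim=1):
        super().__init__()
        self.encode = nn.Linear(in_dim, hidden)
        self.message = nn.Linear(hidden, hidden)
        self.decode = nn.Sequential(nn.ReLU(), nn.Linear(hidden, out_dim))

    def forward(self, x, edge_index):
        # x: (N, in_dim) node features; edge_index: (2, E) source/target node indices
        h = torch.relu(self.encode(x))
        src, dst = edge_index
        msg = self.message(h)[src]                                # messages along edges
        agg = torch.zeros_like(h).index_add_(0, dst, msg)         # sum incoming messages
        deg = torch.zeros(h.size(0), 1).index_add_(0, dst, torch.ones(src.size(0), 1))
        return self.decode(h + agg / deg.clamp(min=1))            # residual + mean aggregation

def inner_adapt(meta_model, task, inner_lr=1e-3, steps=3):
    """Clone the meta-model and take a few SGD steps on the task's support set."""
    adapted = TinyGNN()
    adapted.load_state_dict(meta_model.state_dict())
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    for _ in range(steps):
        loss = nn.functional.mse_loss(adapted(task["x_support"], task["edges"]),
                                      task["y_support"])
        opt.zero_grad()
        loss.backward()
        opt.step()
    return adapted

def meta_train(meta_model, tasks, meta_lr=1e-4, epochs=10):
    """First-order MAML: query-set gradients of adapted models update the meta-parameters."""
    meta_opt = torch.optim.Adam(meta_model.parameters(), lr=meta_lr)
    for _ in range(epochs):
        meta_opt.zero_grad()
        for task in tasks:                                        # one task per airfoil shape
            adapted = inner_adapt(meta_model, task)
            loss = nn.functional.mse_loss(adapted(task["x_query"], task["edges"]),
                                          task["y_query"])
            grads = torch.autograd.grad(loss, tuple(adapted.parameters()))
            for p, g in zip(meta_model.parameters(), grads):      # first-order approximation
                p.grad = g if p.grad is None else p.grad + g
        meta_opt.step()

def random_task(n_nodes=50, n_edges=200):
    """Toy stand-in for one airfoil: a random graph with random per-node targets."""
    edges = torch.randint(0, n_nodes, (2, n_edges))
    xs, ys = torch.randn(n_nodes, 3), torch.randn(n_nodes, 1)     # support set
    xq, yq = torch.randn(n_nodes, 3), torch.randn(n_nodes, 1)     # query set
    return {"x_support": xs, "y_support": ys, "x_query": xq, "y_query": yq, "edges": edges}

model = TinyGNN()
meta_train(model, [random_task() for _ in range(4)], epochs=2)
```

At test time, the same inner_adapt routine would be run on the small amount of task-specific data available for a previously unseen airfoil, producing a shape-specific model from the meta-learned initialization.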
Related papers
- Towards Learning Stochastic Population Models by Gradient Descent [0.0]
We show that simultaneous estimation of parameters and structure poses major challenges for optimization procedures.
We demonstrate accurate estimation of models but find that enforcing the inference of parsimonious, interpretable models drastically increases the difficulty.
arXiv Detail & Related papers (2024-04-10T14:38:58Z)
- Data-Efficient Operator Learning via Unsupervised Pretraining and In-Context Learning [45.78096783448304]
In this work, seeking data efficiency, we design unsupervised pretraining for PDE operator learning.
We mine unlabeled PDE data without simulated solutions, and we pretrain neural operators with physics-inspired reconstruction-based proxy tasks.
Our method is highly data-efficient, more generalizable, and even outperforms conventional vision-pretrained models.
arXiv Detail & Related papers (2024-02-24T06:27:33Z)
- Deep Learning-based surrogate models for parametrized PDEs: handling geometric variability through graph neural networks [0.0]
This work explores the potential usage of graph neural networks (GNNs) for the simulation of time-dependent PDEs.
We propose a systematic strategy to build surrogate models based on a data-driven time-stepping scheme.
We show that GNNs can provide a valid alternative to traditional surrogate models in terms of computational efficiency and generalization to new scenarios.
arXiv Detail & Related papers (2023-08-03T08:14:28Z)
- Self-Supervised Learning with Lie Symmetries for Partial Differential Equations [25.584036829191902]
We learn general-purpose representations of PDEs by implementing joint embedding methods for self-supervised learning (SSL).
Our representation outperforms baseline approaches to invariant tasks, such as regressing the coefficients of a PDE, while also improving the time-stepping performance of neural solvers.
We hope that our proposed methodology will prove useful in the eventual development of general-purpose foundation models for PDEs.
arXiv Detail & Related papers (2023-07-11T16:52:22Z)
- Learning Controllable Adaptive Simulation for Multi-resolution Physics [86.8993558124143]
We introduce Learning controllable Adaptive simulation for Multi-resolution Physics (LAMP) as the first full deep learning-based surrogate model.
LAMP consists of a Graph Neural Network (GNN) for learning the forward evolution, and a GNN-based actor-critic for learning the policy of spatial refinement and coarsening.
We demonstrate that our LAMP outperforms state-of-the-art deep learning surrogate models, and can adaptively trade off computation to improve long-term prediction error.
arXiv Detail & Related papers (2023-05-01T23:20:27Z)
- Using Gradient to Boost the Generalization Performance of Deep Learning Models for Fluid Dynamics [0.0]
We present a novel approach to increase the generalization capabilities of Deep Learning models.
Our strategy shows promising results toward better generalization of DL networks.
arXiv Detail & Related papers (2022-10-09T10:20:09Z)
- Advancing Reacting Flow Simulations with Data-Driven Models [50.9598607067535]
Key to effective use of machine learning tools in multi-physics problems is to couple them to physical and computer models.
The present chapter reviews some of the open opportunities for the application of data-driven reduced-order modeling of combustion systems.
arXiv Detail & Related papers (2022-09-05T16:48:34Z)
- Model-Based Deep Learning: On the Intersection of Deep Learning and Optimization [101.32332941117271]
Decision making algorithms are used in a multitude of different applications.
Deep learning approaches that use highly parametric architectures tuned from data without relying on mathematical models are becoming increasingly popular.
Model-based optimization and data-centric deep learning are often considered to be distinct disciplines.
arXiv Detail & Related papers (2022-05-05T13:40:08Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for MFC.
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z)