An extensible Benchmarking Graph-Mesh dataset for studying Steady-State
Incompressible Navier-Stokes Equations
- URL: http://arxiv.org/abs/2206.14709v1
- Date: Wed, 29 Jun 2022 15:18:30 GMT
- Title: An extensible Benchmarking Graph-Mesh dataset for studying Steady-State
Incompressible Navier-Stokes Equations
- Authors: Florent Bonnet, Jocelyn Ahmed Mazari, Thibaut Munzer, Pierre Yser,
Patrick Gallinari
- Abstract summary: We propose a 2-D graph-mesh dataset to study the airflow over airfoils at high Reynolds regime.
We also introduce metrics on the stress forces over the airfoil in order to evaluate GDL models on important physical quantities.
- Score: 9.067455882308073
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recent progress in \emph{Geometric Deep Learning} (GDL) has shown its
potential to provide powerful data-driven models. This gives momentum to
explore new methods for learning physical systems governed by \emph{Partial
Differential Equations} (PDEs) from Graph-Mesh data. However, despite the
efforts and recent achievements, several research directions remain unexplored
and progress is still far from satisfying the physical requirements of
real-world phenomena. One of the major impediments is the absence of
benchmarking datasets and common physics evaluation protocols. In this paper,
we propose a 2-D graph-mesh dataset to study the airflow over airfoils at high
Reynolds regime ($10^6$ and above). We also introduce metrics on the
stress forces over the airfoil in order to evaluate GDL models on important
physical quantities. Moreover, we provide extensive GDL baselines.
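The stress-force metrics mentioned above can be illustrated with a minimal sketch: integrating surface pressure over a discretized closed airfoil contour to obtain the net pressure force. This is a hypothetical helper, not the dataset's actual evaluation code; the paper's metrics also involve wall shear stress, which is omitted here for brevity.

```python
import numpy as np

def surface_force(points, pressure):
    """Integrate pressure over a closed 2-D surface polyline.

    points: (N, 2) array of surface coordinates, ordered counter-clockwise.
    pressure: (N,) pressure value at each surface point.
    Returns the net (fx, fy) pressure force on the body.
    """
    # Edge vectors between consecutive surface points (closed loop).
    nxt = np.roll(points, -1, axis=0)
    edges = nxt - points
    # Outward normal times edge length for a counter-clockwise contour: (dy, -dx).
    normals = np.stack([edges[:, 1], -edges[:, 0]], axis=1)
    # Pressure averaged over each edge; force on the body is -\oint p n ds.
    p_edge = 0.5 * (pressure + np.roll(pressure, -1))
    return -(p_edge[:, None] * normals).sum(axis=0)
```

Drag and lift coefficients then follow by projecting this force onto the freestream direction and its normal and dividing by the dynamic pressure.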
Related papers
- Gradient Transformation: Towards Efficient and Model-Agnostic Unlearning for Dynamic Graph Neural Networks [66.70786325911124]
Graph unlearning has emerged as an essential tool for safeguarding user privacy and mitigating the negative impacts of undesirable data.
With the increasing prevalence of DGNNs, it becomes imperative to investigate the implementation of dynamic graph unlearning.
We propose an effective, efficient, model-agnostic, and post-processing method to implement DGNN unlearning.
arXiv Detail & Related papers (2024-05-23T10:26:18Z) - A Survey of Data-Efficient Graph Learning [16.053913182723143]
We introduce a novel concept of Data-Efficient Graph Learning (DEGL) as a research frontier.
We systematically review recent advances on several key aspects, including self-supervised graph learning, semi-supervised graph learning, and few-shot graph learning.
arXiv Detail & Related papers (2024-02-01T09:28:48Z) - Eagle: Large-Scale Learning of Turbulent Fluid Dynamics with Mesh
Transformers [23.589419066824306]
Estimating fluid dynamics is a notoriously hard problem to solve.
We introduce a new model, method and benchmark for the problem.
We show that our transformer surpasses state-of-the-art performance on both existing synthetic and real datasets.
arXiv Detail & Related papers (2023-02-16T12:59:08Z) - AirfRANS: High Fidelity Computational Fluid Dynamics Dataset for
Approximating Reynolds-Averaged Navier-Stokes Solutions [9.561442022004808]
We develop AirfRANS, a dataset for studying the two-dimensional incompressible steady-state Reynolds-Averaged Navier-Stokes equations over airfoils in a subsonic regime.
We also introduce metrics on the stress forces at the surface of geometries and visualization of boundary layers to assess the capabilities of models to accurately predict the meaningful information of the problem.
arXiv Detail & Related papers (2022-12-15T00:41:09Z) - Diving into Unified Data-Model Sparsity for Class-Imbalanced Graph
Representation Learning [30.23894624193583]
Training Graph Neural Networks (GNNs) on non-Euclidean graph data often incurs high time costs.
We develop a unified data-model dynamic sparsity framework named Graph Decantation (GraphDec) to address the challenges of training on massive, class-imbalanced graph data.
arXiv Detail & Related papers (2022-10-01T01:47:00Z) - Data Augmentation for Deep Graph Learning: A Survey [66.04015540536027]
We first propose a taxonomy for graph data augmentation and then provide a structured review by categorizing the related work based on the augmented information modalities.
Focusing on the two challenging problems in DGL (i.e., optimal graph learning and low-resource graph learning), we also discuss and review the existing learning paradigms which are based on graph data augmentation.
arXiv Detail & Related papers (2022-02-16T18:30:33Z) - Learning to Predict Graphs with Fused Gromov-Wasserstein Barycenters [2.169919643934826]
We formulate the problem as regression with the Fused Gromov-Wasserstein (FGW) loss.
We propose a predictive model relying on a FGW barycenter whose weights depend on inputs.
arXiv Detail & Related papers (2022-02-08T12:15:39Z) - Distributionally Robust Semi-Supervised Learning Over Graphs [68.29280230284712]
Semi-supervised learning (SSL) over graph-structured data emerges in many network science applications.
To efficiently manage learning over graphs, variants of graph neural networks (GNNs) have been developed recently.
Despite their success in practice, most of existing methods are unable to handle graphs with uncertain nodal attributes.
Challenges also arise due to distributional uncertainties associated with data acquired by noisy measurements.
A distributionally robust learning framework is developed, where the objective is to train models that exhibit quantifiable robustness against perturbations.
arXiv Detail & Related papers (2021-10-20T14:23:54Z) - Bag of Tricks for Training Deeper Graph Neural Networks: A Comprehensive
Benchmark Study [100.27567794045045]
Training deep graph neural networks (GNNs) is notoriously hard.
We present the first fair and reproducible benchmark dedicated to assessing the "tricks" of training deep GNNs.
arXiv Detail & Related papers (2021-08-24T05:00:37Z) - Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data, which universally works in node classification, link prediction, and graph classification tasks.
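The FLAG summary above describes an inner loop of gradient-based adversarial perturbations on node features. A toy sketch of that loop, using a linear node regressor with an analytic gradient as an illustrative stand-in for a GNN (the actual method perturbs GNN inputs and accumulates model gradients across ascent steps):

```python
import numpy as np

def flag_perturb(x, w, y, step=0.01, n_ascent=3):
    """Toy inner loop in the spirit of FLAG: iteratively augment node
    features x (N, d) with sign-gradient perturbations that increase a
    squared-error loss for fixed weights w (d,) and targets y (N,).
    """
    delta = np.zeros_like(x)
    for _ in range(n_ascent):
        # Gradient of 0.5 * ||(x + delta) @ w - y||^2 with respect to delta.
        residual = (x + delta) @ w - y
        grad = residual[:, None] * w[None, :]
        # FGSM-style ascent step on the perturbation.
        delta += step * np.sign(grad)
    return x + delta
```

Training on the perturbed features then acts as a form of data augmentation, which is what makes the approach task-agnostic.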
arXiv Detail & Related papers (2020-10-19T21:51:47Z) - Learning to Simulate Complex Physics with Graph Networks [68.43901833812448]
We present a machine learning framework and model implementation that can learn to simulate a wide variety of challenging physical domains.
Our framework, which we term "Graph Network-based Simulators" (GNS), represents the state of a physical system with particles, expressed as nodes in a graph, and computes dynamics via learned message-passing.
Our results show that our model can generalize from single-timestep predictions with thousands of particles during training, to different initial conditions, thousands of timesteps, and at least an order of magnitude more particles at test time.
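A single learned message-passing step of the kind the GNS summary describes can be sketched as follows. Plain weight matrices stand in for the learned MLPs, a hypothetical simplification of the actual architecture:

```python
import numpy as np

def message_passing_step(node_feats, edges, w_msg, w_upd):
    """One message-passing step: nodes are particles, and each node's
    state is updated from messages aggregated over incoming edges.

    node_feats: (N, d) node states.
    edges: (E, 2) int array of (sender, receiver) index pairs.
    w_msg, w_upd: (2d, d) weight matrices standing in for learned MLPs.
    """
    senders, receivers = edges[:, 0], edges[:, 1]
    # Message per edge, conditioned on sender and receiver states.
    msgs = np.tanh(np.concatenate(
        [node_feats[senders], node_feats[receivers]], axis=1) @ w_msg)
    # Sum incoming messages per receiver node (unbuffered scatter-add).
    agg = np.zeros_like(node_feats)
    np.add.at(agg, receivers, msgs)
    # Update each node from its current state plus aggregated messages.
    return np.tanh(np.concatenate([node_feats, agg], axis=1) @ w_upd)
```

Stacking several such steps and decoding the final node states into accelerations is the rough pattern behind learned particle simulators.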
arXiv Detail & Related papers (2020-02-21T16:44:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.