NeurIPS 2024 ML4CFD Competition: Results and Retrospective Analysis
- URL: http://arxiv.org/abs/2506.08516v1
- Date: Tue, 10 Jun 2025 07:34:43 GMT
- Title: NeurIPS 2024 ML4CFD Competition: Results and Retrospective Analysis
- Authors: Mouadh Yagoubi, David Danan, Milad Leyli-Abadi, Ahmed Mazari, Jean-Patrick Brunet, Abbas Kabalan, Fabien Casenave, Yuxin Ma, Giovanni Catalani, Jean Fesquet, Jacob Helwig, Xuan Zhang, Haiyang Yu, Xavier Bertrand, Frederic Tost, Michael Baurheim, Joseph Morlier, Shuiwang Ji
- Abstract summary: We organized the ML4CFD competition, centered on surrogate modeling for aerodynamic simulations over two-dimensional airfoils. The competition attracted over 240 teams, who were provided with a curated dataset generated via OpenFOAM. This retrospective analysis reviews the competition outcomes, highlighting several approaches that outperformed baselines.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The integration of machine learning (ML) into the physical sciences is reshaping computational paradigms, offering the potential to accelerate demanding simulations such as computational fluid dynamics (CFD). Yet, persistent challenges in accuracy, generalization, and physical consistency hinder the practical deployment of ML models in scientific domains. To address these limitations and systematically benchmark progress, we organized the ML4CFD competition, centered on surrogate modeling for aerodynamic simulations over two-dimensional airfoils. The competition attracted over 240 teams, who were provided with a curated dataset generated via OpenFOAM and evaluated through a multi-criteria framework encompassing predictive accuracy, physical fidelity, computational efficiency, and out-of-distribution generalization. This retrospective analysis reviews the competition outcomes, highlighting several approaches that outperformed baselines under our global evaluation score. Notably, the top entry exceeded the performance of the original OpenFOAM solver on aggregate metrics, illustrating the promise of ML-based surrogates to outperform traditional solvers under tailored criteria. Drawing from these results, we analyze the key design principles of top submissions, assess the robustness of our evaluation framework, and offer guidance for future scientific ML challenges.
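To make the multi-criteria framework concrete, here is a minimal sketch of how a global score could aggregate the four evaluation axes named in the abstract. The weights, thresholds, and metric values below are illustrative assumptions, not the competition's actual scoring rules:

```python
# Illustrative sketch of a multi-criteria global score: accuracy, physical
# fidelity, OOD generalization, and speed-up combined into one number.
# All weights and thresholds are hypothetical, not the competition's own.
WEIGHTS = {"accuracy": 0.4, "physics": 0.3, "ood": 0.2, "speedup": 0.1}

def subscore(value, good, acceptable, higher_is_better=True):
    """Map a raw metric to {0, 0.5, 1} via two quality thresholds."""
    if not higher_is_better:
        value, good, acceptable = -value, -good, -acceptable
    if value >= good:
        return 1.0
    if value >= acceptable:
        return 0.5
    return 0.0

def global_score(metrics):
    """Weighted aggregate over the four evaluation axes."""
    return sum(WEIGHTS[k] * v for k, v in metrics.items())

scores = {
    "accuracy": subscore(0.97, good=0.95, acceptable=0.90),   # e.g. R^2 on predicted fields
    "physics":  subscore(0.04, good=0.05, acceptable=0.10,
                         higher_is_better=False),             # e.g. relative drag error
    "ood":      subscore(0.92, good=0.95, acceptable=0.90),   # accuracy on unseen regimes
    "speedup":  subscore(120, good=100, acceptable=10),       # wall-clock speed-up vs. OpenFOAM
}
print(global_score(scores))  # 0.9 for this hypothetical submission
```

Under such a scheme a surrogate can beat the reference solver on the aggregate score, as the top entry did, by trading a small accuracy penalty for a large speed-up.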
Related papers
- Machine Learning for Physical Simulation Challenge Results and Retrospective Analysis: Power Grid Use Case
This paper addresses the growing computational challenges of power grid simulations, particularly with the increasing integration of renewable energy sources like wind and solar. To tackle this, a competition was organized to develop AI-driven methods that accelerate power flow simulations by at least an order of magnitude. The paper provides a comprehensive overview of the Machine Learning for Physical Simulation (ML4PhySim) competition, detailing the benchmark suite and analyzing top-performing solutions.
arXiv Detail & Related papers (2025-05-02T09:58:27Z)
- Model Utility Law: Evaluating LLMs beyond Performance through Mechanism Interpretable Metric
Large language models (LLMs) have become indispensable across academia, industry, and daily applications. One core challenge of evaluation in the LLM era is the generalization issue. We propose the Model Utilization Index (MUI), a mechanism-interpretability-enhanced metric that complements traditional performance scores.
arXiv Detail & Related papers (2025-04-10T04:09:47Z)
- Geometry Matters: Benchmarking Scientific ML Approaches for Flow Prediction around Complex Geometries
Rapid and accurate simulations of fluid dynamics around complicated geometric bodies are critical in a variety of engineering and scientific applications. While scientific machine learning (SciML) has shown considerable promise, most studies in this field are limited to simple geometries. This paper addresses this gap by benchmarking diverse SciML models for fluid flow prediction over intricate geometries.
arXiv Detail & Related papers (2024-12-31T00:23:15Z)
- Recent Advances on Machine Learning for Computational Fluid Dynamics: A Survey
This paper introduces fundamental concepts, traditional methods, and benchmark datasets, then examines the various roles machine learning plays in improving CFD.
We highlight real-world applications of ML for CFD in critical scientific and engineering disciplines, including aerodynamics, combustion, atmosphere & ocean science, biological fluids, plasma, symbolic regression, and reduced-order modeling.
We conclude that ML is poised to significantly transform CFD research by enhancing simulation accuracy, reducing computational time, and enabling more complex analyses of fluid dynamics.
arXiv Detail & Related papers (2024-08-22T07:33:11Z)
- Benchmarks as Microscopes: A Call for Model Metrology
Modern language models (LMs) pose a new challenge in capability assessment.
To be confident in our metrics, we need a new discipline of model metrology.
arXiv Detail & Related papers (2024-07-22T17:52:12Z)
- NeurIPS 2024 ML4CFD Competition: Harnessing Machine Learning for Computational Fluid Dynamics in Airfoil Design
The challenge centers on a task fundamental to a well-established physical application: airfoil design simulation.
This competition represents a pioneering effort in exploring ML-driven surrogate methods.
The competition offers online training and evaluation for all participating solutions.
arXiv Detail & Related papers (2024-06-30T21:48:38Z)
- MR-Ben: A Meta-Reasoning Benchmark for Evaluating System-2 Thinking in LLMs
Large language models (LLMs) have shown increasing capability in problem-solving and decision-making. We present MR-Ben, a process-based benchmark that demands meta-reasoning skill. Our meta-reasoning paradigm is especially suited to system-2 slow thinking.
arXiv Detail & Related papers (2024-06-20T03:50:23Z)
- ML4PhySim: Machine Learning for Physical Simulations Challenge (The Airfoil Design)
The aim of this competition is to encourage the development of new ML techniques to solve physical problems.
We propose learning a task representing the airfoil design simulation, using a dataset called AirfRANS (a minimal loading sketch follows this list).
To the best of our knowledge, this is the first competition addressing the use of ML-based surrogate approaches to improve the computational-cost/accuracy trade-off of physical simulations.
arXiv Detail & Related papers (2024-03-03T22:10:21Z)
- An Extensible Benchmark Suite for Learning to Simulate Physical Systems
We introduce a set of benchmark problems to take a step towards unified benchmarks and evaluation protocols.
We propose four representative physical systems, as well as a collection of both widely used classical time-based and representative data-driven methods.
arXiv Detail & Related papers (2021-08-09T17:39:09Z)
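As referenced in the ML4PhySim entry above, here is a minimal sketch of loading the AirfRANS dataset. It assumes PyTorch Geometric's built-in AirfRANS wrapper; the task names and attribute layout follow the PyG documentation and may differ in other loaders or versions:

```python
# Minimal AirfRANS loading sketch (assumption: PyTorch Geometric's
# torch_geometric.datasets.AirfRANS wrapper; verify against your version).
from torch_geometric.datasets import AirfRANS

# 'task' selects the data regime, e.g. 'scarce' (few training simulations)
# or 'full'; the raw data download is several GB on first use.
train_set = AirfRANS(root='data/AirfRANS', task='scarce', train=True)
test_set = AirfRANS(root='data/AirfRANS', task='scarce', train=False)

sample = train_set[0]          # one simulated airfoil as a point cloud
print(sample.pos.shape)        # node coordinates around the airfoil
print(sample.x.shape)          # node input features (per PyG docs)
print(sample.y.shape)          # targets: velocity, pressure, turbulent viscosity
print(int(sample.surf.sum()))  # number of nodes on the airfoil surface
```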