DoMINO: A Decomposable Multi-scale Iterative Neural Operator for Modeling Large Scale Engineering Simulations
- URL: http://arxiv.org/abs/2501.13350v1
- Date: Thu, 23 Jan 2025 03:28:10 GMT
- Title: DoMINO: A Decomposable Multi-scale Iterative Neural Operator for Modeling Large Scale Engineering Simulations
- Authors: Rishikesh Ranade, Mohammad Amin Nabian, Kaustubh Tangsali, Alexey Kamenev, Oliver Hennigh, Ram Cherukuri, Sanjay Choudhry,
- Abstract summary: DoMINO is a point cloud-based machine learning model that uses local geometric information to predict flow fields on discrete points.
DoMINO is validated for the automotive aerodynamics use case using the DrivAerML dataset.
- Score: 2.300471499347615
- License:
- Abstract: Numerical simulations play a critical role in the design and development of engineering products and processes. Traditional computational methods, such as CFD, can provide accurate predictions but are computationally expensive, particularly for complex geometries. Several machine learning (ML) models have been proposed in the literature to significantly reduce computation time while maintaining acceptable accuracy. However, ML models often face limitations in terms of accuracy and scalability and depend on significant mesh downsampling, which can negatively affect prediction accuracy and generalization. In this work, we propose a novel ML model architecture, DoMINO (Decomposable Multi-scale Iterative Neural Operator), developed in NVIDIA Modulus, to address the various challenges of machine learning-based surrogate modeling of engineering simulations. DoMINO is a point cloud-based ML model that uses local geometric information to predict flow fields on discrete points. The DoMINO model is validated for the automotive aerodynamics use case using the DrivAerML dataset. Through our experiments, we demonstrate the scalability, performance, accuracy, and generalization of our model to both in-distribution and out-of-distribution testing samples. Moreover, the results are analyzed using a range of engineering-specific metrics important for validating numerical simulations.
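To make the point-cloud surrogate idea concrete, the sketch below shows a minimal model that maps local geometric information at discrete query points (here, offsets to the k nearest surface points) to flow quantities. This is an illustrative assumption only, not the DoMINO architecture: all names, feature choices, and layer sizes are hypothetical.

```python
# Hypothetical sketch of a point-cloud surrogate for flow-field prediction.
# NOT the DoMINO architecture; it only illustrates the general idea of
# mapping local geometric information at discrete points to flow quantities.
import torch
import torch.nn as nn


class LocalGeometrySurrogate(nn.Module):
    def __init__(self, k_neighbors: int = 16, out_channels: int = 4):
        super().__init__()
        self.k = k_neighbors
        # Input per query point: offsets to its k nearest surface points,
        # flattened into a single feature vector of length k * 3.
        self.mlp = nn.Sequential(
            nn.Linear(k_neighbors * 3, 256), nn.GELU(),
            nn.Linear(256, 256), nn.GELU(),
            nn.Linear(256, out_channels),  # e.g. pressure + 3 velocity components
        )

    def forward(self, query_points: torch.Tensor, surface_points: torch.Tensor):
        # query_points: (N, 3), surface_points: (M, 3)
        dists = torch.cdist(query_points, surface_points)            # (N, M)
        idx = dists.topk(self.k, largest=False).indices               # (N, k)
        neighbors = surface_points[idx]                               # (N, k, 3)
        local_feats = (neighbors - query_points[:, None, :]).flatten(1)  # (N, k*3)
        return self.mlp(local_feats)                                  # (N, out_channels)


# Usage with random stand-in geometry; a real workflow would use surface
# meshes and volume points from a dataset such as DrivAerML.
surface = torch.rand(5000, 3)
queries = torch.rand(1000, 3)
model = LocalGeometrySurrogate()
pred = model(queries, surface)  # (1000, 4) predicted flow quantities per point
```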
Related papers
- Geometry Matters: Benchmarking Scientific ML Approaches for Flow Prediction around Complex Geometries [23.111935712144277]
Rapid yet accurate simulations of fluid dynamics around complex geometries are critical in a variety of engineering and scientific applications.
While scientific machine learning (SciML) has shown promise, most studies are constrained to simple geometries.
This study addresses this gap by benchmarking diverse SciML models for fluid flow prediction over intricate geometries.
arXiv Detail & Related papers (2024-12-31T00:23:15Z) - Revisiting SMoE Language Models by Evaluating Inefficiencies with Task Specific Expert Pruning [78.72226641279863]
Sparse Mixture of Expert (SMoE) models have emerged as a scalable alternative to dense models in language modeling.
Our research explores task-specific model pruning to inform decisions about designing SMoE architectures.
We introduce an adaptive task-aware pruning technique UNCURL to reduce the number of experts per MoE layer in an offline manner post-training.
arXiv Detail & Related papers (2024-09-02T22:35:03Z) - On Least Square Estimation in Softmax Gating Mixture of Experts [78.3687645289918]
We investigate the performance of the least squares estimators (LSE) under a deterministic MoE model.
We establish a condition called strong identifiability to characterize the convergence behavior of various types of expert functions.
Our findings have important practical implications for expert selection.
arXiv Detail & Related papers (2024-02-05T12:31:18Z) - Multi-GPU Approach for Training of Graph ML Models on large CFD Meshes [0.0]
Mesh-based numerical solvers are an important part of many design tool chains.
Machine learning-based surrogate models are fast at predicting approximate solutions but often lack accuracy.
This paper scales a state-of-the-art surrogate model from the domain of graph-based machine learning to industry-relevant mesh sizes.
arXiv Detail & Related papers (2023-07-25T15:49:25Z) - Probabilistic Unrolling: Scalable, Inverse-Free Maximum Likelihood Estimation for Latent Gaussian Models [69.22568644711113]
We introduce probabilistic unrolling, a method that combines Monte Carlo sampling with iterative linear solvers to circumvent matrix inversions.
Our theoretical analyses reveal that unrolling and backpropagation through the iterations of the solver can accelerate gradient estimation for maximum likelihood estimation.
In experiments on simulated and real data, we demonstrate that probabilistic unrolling learns latent Gaussian models up to an order of magnitude faster than gradient EM, with minimal losses in model performance.
arXiv Detail & Related papers (2023-06-05T21:08:34Z) - Accelerating Part-Scale Simulation in Liquid Metal Jet Additive Manufacturing via Operator Learning [0.0]
Part-scale predictions require many small-scale simulations.
A model describing droplet coalescence for LMJ may include coupled incompressible fluid flow, heat transfer, and phase change equations.
We apply an operator learning approach to learn a mapping between initial and final states of the droplet coalescence process.
arXiv Detail & Related papers (2022-02-02T17:24:16Z) - MoEfication: Conditional Computation of Transformer Models for Efficient Inference [66.56994436947441]
Transformer-based pre-trained language models can achieve superior performance on most NLP tasks thanks to their large parameter capacity, but this also leads to high computation costs.
We explore accelerating large-model inference through conditional computation based on the sparse activation phenomenon.
We propose to transform a large model into its mixture-of-experts (MoE) version with equal model size, namely MoEfication.
arXiv Detail & Related papers (2021-10-05T02:14:38Z) - Hessian-based toolbox for reliable and interpretable machine learning in physics [58.720142291102135]
We present a toolbox for interpretability and reliability that is agnostic of the model architecture.
It provides a notion of the influence of the input data on the prediction at a given test point, an estimation of the uncertainty of the model predictions, and an extrapolation score for the model predictions.
Our work opens the road to the systematic use of interpretability and reliability methods in ML applied to physics and, more generally, science.
arXiv Detail & Related papers (2021-08-04T16:32:59Z) - Surrogate Modelling for Injection Molding Processes using Machine Learning [0.23090185577016442]
Injection molding is one of the most popular manufacturing methods for the modeling of complex plastic objects.
We propose a baseline for a data processing pipeline that includes the extraction of data from Moldflow simulation projects.
We evaluate machine learning models for fill time and deflection distribution prediction and provide baseline values of MSE and RMSE metrics.
arXiv Detail & Related papers (2021-07-30T12:13:52Z) - A Taylor Based Sampling Scheme for Machine Learning in Computational Physics [0.0]
We take advantage of the ability to generate data using numerical simulation programs to better train machine learning models.
We develop a new data sampling scheme based on Taylor approximation to reduce the error of a Deep Neural Network (DNN) when learning the solution of an ordinary differential equation (ODE) system.
arXiv Detail & Related papers (2021-01-20T12:56:09Z) - Machine learning for rapid discovery of laminar flow channel wall modifications that enhance heat transfer [56.34005280792013]
We present a combination of accurate numerical simulations of arbitrary, flat, and non-flat channels and machine learning models predicting drag coefficient and Stanton number.
We show that convolutional neural networks (CNN) can accurately predict the target properties at a fraction of the time of numerical simulations.
arXiv Detail & Related papers (2021-01-19T16:14:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.