Accelerating Part-Scale Simulation in Liquid Metal Jet Additive
Manufacturing via Operator Learning
- URL: http://arxiv.org/abs/2202.03665v1
- Date: Wed, 2 Feb 2022 17:24:16 GMT
- Title: Accelerating Part-Scale Simulation in Liquid Metal Jet Additive
Manufacturing via Operator Learning
- Authors: Søren Taverniers, Svyatoslav Korneev, Kyle M. Pietrzyk, Morad Behandish
- Abstract summary: Part-scale predictions require many small-scale simulations.
A model describing droplet coalescence for LMJ may include coupled incompressible fluid flow, heat transfer, and phase change equations.
We apply an operator learning approach to learn a mapping between initial and final states of the droplet coalescence process.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Predicting part quality for additive manufacturing (AM) processes requires
high-fidelity numerical simulation of partial differential equations (PDEs)
governing process multiphysics on a scale of minimum manufacturable features.
This makes part-scale predictions computationally demanding, especially when
they require many small-scale simulations. We consider drop-on-demand liquid
metal jetting (LMJ) as an illustrative example of such computational
complexity. A model describing droplet coalescence for LMJ may include coupled
incompressible fluid flow, heat transfer, and phase change equations.
Numerically solving these equations becomes prohibitively expensive when
simulating the build process for a full part consisting of thousands to
millions of droplets. Reduced-order models (ROMs) based on neural networks (NN)
or k-nearest neighbor (kNN) algorithms have been built to replace the original
physics-based solver and are computationally tractable for part-level
simulations. However, their quick inference capabilities often come at the
expense of accuracy, robustness, and generalizability. We apply an operator
learning (OL) approach to learn a mapping between initial and final states of
the droplet coalescence process for enabling rapid and accurate part-scale
build simulation. Preliminary results suggest that OL requires an order of
magnitude fewer data points than a kNN approach and is generalizable beyond the
training set while achieving similar prediction error.
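The contrast drawn above between operator learning and kNN surrogates can be made concrete with a toy sketch. The code below is purely illustrative and is not the paper's model: a synthetic affine "solver" stands in for the droplet coalescence physics, a least-squares affine fit stands in for a neural operator (e.g. a DeepONet or FNO parametrization), and a kNN baseline averages the final states of the nearest stored training inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy surrogate problem: the "solver" maps an initial 1-D state u0 to a
# final state uT = S(u0). Here S is a smoothing plus a shift, standing in
# for the coalescence solve (illustrative only, not the paper's physics).
n = 64
x = np.linspace(0.0, 1.0, n)
kernel = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.01)
kernel /= kernel.sum(axis=1, keepdims=True)

def solver(u0):
    return kernel @ u0 + 0.1 * np.sin(2 * np.pi * x)

# Training data: random smooth initial states and their final states.
def random_state():
    c = rng.normal(size=5)
    return sum(ck * np.sin((k + 1) * np.pi * x) for k, ck in enumerate(c))

U0 = np.stack([random_state() for _ in range(200)])
UT = np.stack([solver(u) for u in U0])

# Operator-learning stand-in: fit one affine operator uT ~ A u0 + b by
# least squares. Real operator learning replaces this with a neural
# parametrization, but the role is the same: a single map from initial to
# final state, reusable for any new input.
A_aug, *_ = np.linalg.lstsq(
    np.hstack([U0, np.ones((len(U0), 1))]), UT, rcond=None
)
A, b = A_aug[:-1], A_aug[-1]

# kNN baseline: average the final states of the k nearest training inputs.
def knn_predict(u0, k=5):
    d = np.linalg.norm(U0 - u0, axis=1)
    idx = np.argsort(d)[:k]
    return UT[idx].mean(axis=0)

u_new = random_state()
truth = solver(u_new)
err_op = np.linalg.norm(u_new @ A + b - truth) / np.linalg.norm(truth)
err_knn = np.linalg.norm(knn_predict(u_new) - truth) / np.linalg.norm(truth)
print(f"operator error {err_op:.2e}, kNN error {err_knn:.2e}")
```

Because the learned map is a single operator over the whole input space, it generalizes to any new initial state, whereas the kNN surrogate only interpolates among stored training pairs.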
Related papers
- COmoving Computer Acceleration (COCA): $N$-body simulations in an emulated frame of reference [0.0]
We introduce COmoving Computer Acceleration (COCA), a hybrid framework interfacing machine learning and $N$-body simulations.
The correct physical equations of motion are solved in an emulated frame of reference, so that any emulation error is corrected by design.
COCA efficiently reduces emulation errors in particle trajectories, requiring far fewer force evaluations than running the corresponding simulation without ML.
arXiv Detail & Related papers (2024-09-03T17:27:12Z)
- chemtrain: Learning Deep Potential Models via Automatic Differentiation and Statistical Physics [0.0]
Neural Networks (NNs) are promising models for refining the accuracy of molecular dynamics.
Chemtrain is a framework to learn sophisticated NN potential models through customizable training routines and advanced training algorithms.
arXiv Detail & Related papers (2024-08-28T15:14:58Z)
- Physics-informed MeshGraphNets (PI-MGNs): Neural finite element solvers for non-stationary and nonlinear simulations on arbitrary meshes [13.41003911618347]
This work introduces PI-MGNs, a hybrid approach that combines PINNs and MGNs to solve non-stationary and nonlinear partial differential equations (PDEs) on arbitrary meshes.
Results show that the model scales well to large and complex meshes, although it is trained on small generic meshes only.
arXiv Detail & Related papers (2024-02-16T13:34:51Z)
- Application of machine learning technique for a fast forecast of aggregation kinetics in space-inhomogeneous systems [0.0]
We show how to reduce the amount of direct computations with the use of modern machine learning (ML) techniques.
We demonstrate that the ML predictions for the spatial distribution of aggregates and their size distribution require drastically less computation time and agree fairly well with the results of direct numerical simulations.
arXiv Detail & Related papers (2023-12-07T19:50:40Z)
- NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
arXiv Detail & Related papers (2023-02-20T19:36:52Z)
- Learning Large-scale Subsurface Simulations with a Hybrid Graph Network Simulator [57.57321628587564]
We introduce Hybrid Graph Network Simulator (HGNS) for learning reservoir simulations of 3D subsurface fluid flows.
HGNS consists of a subsurface graph neural network (SGNN) to model the evolution of fluid flows, and a 3D-U-Net to model the evolution of pressure.
Using an industry-standard subsurface flow dataset (SPE-10) with 1.1 million cells, we demonstrate that HGNS reduces inference time by up to a factor of 18 compared to standard subsurface simulators.
arXiv Detail & Related papers (2022-06-15T17:29:57Z)
- Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
arXiv Detail & Related papers (2022-06-10T04:54:13Z)
- Machine learning for rapid discovery of laminar flow channel wall modifications that enhance heat transfer [56.34005280792013]
We present a combination of accurate numerical simulations of arbitrary, flat, and non-flat channels and machine learning models predicting drag coefficient and Stanton number.
We show that convolutional neural networks (CNN) can accurately predict the target properties at a fraction of the time of numerical simulations.
arXiv Detail & Related papers (2021-01-19T16:14:02Z)
- Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, namely physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions as well as state-of-the-art numerical solvers, such as spectral solvers.
arXiv Detail & Related papers (2020-09-08T13:26:51Z)
- Combining Differentiable PDE Solvers and Graph Neural Networks for Fluid Flow Prediction [79.81193813215872]
We develop a hybrid (graph) neural network that combines a traditional graph convolutional network with an embedded differentiable fluid dynamics simulator inside the network itself.
We show that we can both generalize well to new situations and benefit from the substantial speedup of neural network CFD predictions.
arXiv Detail & Related papers (2020-07-08T21:23:19Z)
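As an aside, the staggered coarse-grid decomposition that NeuralStagger (listed above) builds on is simple to illustrate. The sketch below is hypothetical and only shows the decomposition and reassembly step, not the neural solvers that would process each coarse subfield.

```python
import numpy as np

# Illustrative sketch (not the authors' code): spatial staggering splits a
# fine grid into several interleaved coarse grids, so each subtask sees a
# coarser-resolution field; the fine field is recovered by interleaving.
field = np.arange(16.0).reshape(4, 4)  # fine 4x4 field

# Decompose into 4 staggered 2x2 coarse fields: stride-2 sampling with
# each of the four (row, column) offsets.
subfields = {(i, j): field[i::2, j::2] for i in range(2) for j in range(2)}

# Each coarse subfield would be handled by its own (cheaper) solver; here
# we just pass them through. Reassembly interleaves the subfields back.
recon = np.empty_like(field)
for (i, j), sub in subfields.items():
    recon[i::2, j::2] = sub

print("reassembled fine field matches:", np.array_equal(recon, field))
```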
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.