Fast and Accurate Reduced-Order Modeling of a MOOSE-based Additive
Manufacturing Model with Operator Learning
- URL: http://arxiv.org/abs/2308.02462v1
- Date: Fri, 4 Aug 2023 17:00:34 GMT
- Authors: Mahmoud Yaseen, Dewen Yushu, Peter German, Xu Wu
- Abstract summary: The present work is to construct a fast and accurate reduced-order model (ROM) for an additive manufacturing (AM) model.
We benchmarked the performance of these OL methods against a conventional deep neural network (DNN)-based ROM.
- Score: 1.4528756508275622
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One predominant challenge in additive manufacturing (AM) is to achieve
specific material properties by manipulating manufacturing process parameters
during the runtime. Such manipulation tends to increase the computational load
imposed on existing simulation tools employed in AM. The goal of the present
work is to construct a fast and accurate reduced-order model (ROM) for an AM
model developed within the Multiphysics Object-Oriented Simulation Environment
(MOOSE) framework, ultimately reducing the time/cost of AM control and
optimization processes. Our adoption of the operator learning (OL) approach
enabled us to learn a family of differential equations produced by altering
process variables in the laser's Gaussian point heat source. More specifically,
we used the Fourier neural operator (FNO) and deep operator network (DeepONet)
to develop ROMs for time-dependent responses. Furthermore, we benchmarked the
performance of these OL methods against a conventional deep neural network
(DNN)-based ROM. Ultimately, we found that OL methods offer comparable
performance and, in terms of accuracy and generalizability, even outperform DNN
at predicting scalar model responses. The DNN-based ROM afforded the fastest
training time. Furthermore, all the ROMs were faster than the original MOOSE
model yet still provided accurate predictions. FNO had a smaller mean
prediction error than DeepONet, with a larger variance for time-dependent
responses. Unlike DNN, both FNO and DeepONet were able to simulate time series
data without the need for dimensionality reduction techniques. The present work
can help facilitate the AM optimization process by enabling faster execution of
simulation tools while still preserving evaluation accuracy.
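The DeepONet architecture referenced above maps a sampled input function (here, something like the laser's Gaussian heat-source parameters) and a query time to a response through a branch-net/trunk-net dot product. The following is a minimal, untrained sketch of that dot-product head in NumPy; the layer sizes, sensor count, and variable names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_params(sizes, rng):
    # Random (untrained) weights for a small tanh MLP -- illustration only.
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

m, p = 16, 32                        # sensor count, latent width (assumed)
branch = mlp_params([m, 64, p], rng)  # encodes the sampled input function u
trunk = mlp_params([1, 64, p], rng)   # encodes the query coordinate t

def deeponet(u_sampled, t_query):
    """G(u)(t) ~ <branch(u), trunk(t)>: the DeepONet dot-product head."""
    b = mlp(branch, u_sampled)   # shape (p,)
    T = mlp(trunk, t_query)      # shape (n_t, p)
    return T @ b                 # response evaluated at each query time

u = rng.standard_normal(m)            # e.g. a heat-source profile at 16 sensors
t = np.linspace(0.0, 1.0, 50)[:, None]  # 50 query times
y = deeponet(u, t)                    # y.shape == (50,)
```

Because the trunk net takes the query time as a continuous input, a trained network of this form can be evaluated at arbitrary time points, which is why no dimensionality reduction of the time series is needed.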
Related papers
- An Investigation on Machine Learning Predictive Accuracy Improvement and Uncertainty Reduction using VAE-based Data Augmentation [2.517043342442487]
Deep generative learning uses certain ML models to learn the underlying distribution of existing data and generate synthetic samples that resemble the real data.
In this study, our objective is to evaluate the effectiveness of data augmentation using variational autoencoder (VAE)-based deep generative models.
We investigated whether the data augmentation leads to improved accuracy in the predictions of a deep neural network (DNN) model trained using the augmented data.
arXiv Detail & Related papers (2024-10-24T18:15:48Z) - Deep Learning for Fast Inference of Mechanistic Models' Parameters [0.28675177318965045]
We propose using Deep Neural Networks (NN) for directly predicting parameters of mechanistic models given observations.
We consider a training procedure that combines Neural Networks and mechanistic models.
We find that, while further fitting slightly improves the Neural Network estimates, these estimates are already measurably better than those obtained from the fitting procedure alone.
arXiv Detail & Related papers (2023-12-05T22:16:54Z) - A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical
Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z) - Reduced Order Modeling of a MOOSE-based Advanced Manufacturing Model
with Operator Learning [2.517043342442487]
Advanced Manufacturing (AM) has gained significant interest in the nuclear community for its potential application to nuclear materials.
One challenge is to obtain desired material properties via controlling the manufacturing process during runtime.
Intelligent AM based on deep reinforcement learning (DRL) relies on an automated process-level control mechanism to generate optimal design variables.
arXiv Detail & Related papers (2023-08-18T17:38:00Z) - Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - MemSE: Fast MSE Prediction for Noisy Memristor-Based DNN Accelerators [5.553959304125023]
We theoretically analyze the mean squared error of DNNs that use memristors to compute matrix-vector multiplications (MVM).
We take into account both the quantization noise, due to the necessity of reducing the DNN model size, and the programming noise, stemming from the variability during the programming of the memristance value.
The proposed method is almost two orders of magnitude faster than Monte-Carlo simulation, making it possible to optimize the implementation parameters to achieve minimal error for a given power constraint.
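The analytic-versus-Monte-Carlo speedup can be illustrated on a toy noisy MVM: for additive i.i.d. programming noise N with variance sigma^2 on the weight matrix, the output error E||Nx||^2 has the exact closed form sigma^2 * rows * ||x||^2, which a Monte-Carlo loop only approximates at far higher cost. This sketch uses toy dimensions and a simplified noise model, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(1)
rows, cols, sigma = 8, 16, 0.05
W = rng.standard_normal((rows, cols))   # nominal weights
x = rng.standard_normal(cols)           # input vector

# Closed form: for y = (W + N) x with i.i.d. N_ij ~ Normal(0, sigma^2),
# the error (W + N) x - W x = N x satisfies E||Nx||^2 = sigma^2 * rows * ||x||^2.
mse_analytic = sigma**2 * rows * np.dot(x, x)

# Monte-Carlo estimate of the same quantity: general, but needs many samples.
trials = 5000
errs = [np.sum(((W + sigma * rng.standard_normal((rows, cols))) @ x - W @ x) ** 2)
        for _ in range(trials)]
mse_mc = float(np.mean(errs))

print(mse_analytic, mse_mc)  # the two agree to within Monte-Carlo error
```

A single evaluation of the closed form replaces thousands of noisy forward passes, which is the source of the reported speedup.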
arXiv Detail & Related papers (2022-05-03T18:10:43Z) - An advanced spatio-temporal convolutional recurrent neural network for
storm surge predictions [73.4962254843935]
We study the capability of artificial neural network models to emulate storm surge based on the storm track/size/intensity history.
This study presents a neural network model that can predict storm surge, informed by a database of synthetic storm simulations.
arXiv Detail & Related papers (2022-04-18T23:42:18Z) - Convolutional-Recurrent Neural Network Proxy for Robust Optimization and
Closed-Loop Reservoir Management [0.0]
A convolutional-recurrent neural network (CNN-RNN) proxy model is developed to predict well-by-well oil and water rates.
This capability enables the estimation of the objective function and nonlinear constraint values required for robust optimization.
arXiv Detail & Related papers (2022-03-14T22:11:17Z) - Can we learn gradients by Hamiltonian Neural Networks? [68.8204255655161]
We propose a meta-learner based on ODE neural networks that learns gradients.
We demonstrate that our method outperforms a meta-learner based on LSTM for an artificial task and the MNIST dataset with ReLU activations in the optimizee.
arXiv Detail & Related papers (2021-10-31T18:35:10Z) - A Meta-Learning Approach to the Optimal Power Flow Problem Under
Topology Reconfigurations [69.73803123972297]
We propose a DNN-based OPF predictor that is trained using a meta-learning (MTL) approach.
The developed OPF-predictor is validated through simulations using benchmark IEEE bus systems.
arXiv Detail & Related papers (2020-12-21T17:39:51Z) - Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study large-scale distributed stochastic AUC maximization for deep neural networks.
Our algorithm requires far fewer communication rounds in theory.
Experiments on several datasets demonstrate the effectiveness of our method and corroborate the theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.