Discovery of Spatter Constitutive Models in Additive Manufacturing Using Machine Learning
- URL: http://arxiv.org/abs/2501.08922v2
- Date: Tue, 04 Feb 2025 16:56:24 GMT
- Title: Discovery of Spatter Constitutive Models in Additive Manufacturing Using Machine Learning
- Authors: Olabode T. Ajenifujah, Amir Barati Farimani,
- Abstract summary: One of the key challenges in AM is achieving consistent print quality.
Capturing and controlling melt pool dynamics is crucial for enhancing process stability and part quality.
We developed a framework to support decision-making towards efficient AM process operations.
- Score: 7.136205674624813
- Abstract: Additive manufacturing (AM) is a rapidly evolving technology that has attracted applications across a wide range of fields due to its ability to fabricate complex geometries. However, one of the key challenges in AM is achieving consistent print quality. This inconsistency is often attributed to uncontrolled melt pool dynamics, partly caused by spatter, which can lead to defects. Therefore, capturing and controlling the evolution of the melt pool is crucial for enhancing process stability and part quality. In this study, we developed a framework to support decision-making towards efficient AM process operations, capable of facilitating quality control and minimizing defects via machine learning (ML) and polynomial symbolic regression models. We implemented experimentally validated computational tools, specifically for laser powder bed fusion (LPBF) processes, as a cost-effective approach to collect large datasets. For a dataset consisting of 281 varying process conditions, parameters such as melt pool dimensions (length, width, depth), melt pool geometry (area, volume), and the volume indicated as spatter were extracted. Using ML and polynomial symbolic regression models, a high R2 of over 95% was achieved in predicting the melt pool dimensions and geometry features on both the training and testing datasets, with either the process conditions (power and velocity) or the melt pool dimensions as the model inputs. For the volume indicated as spatter, the R2 value improved after logarithmically transforming the model inputs, which were either the process conditions or the melt pool dimensions. Among the investigated ML models, the ExtraTree model achieved the highest R2 values of 96.7% and 87.5%.
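The regression workflow described in the abstract can be illustrated with a short, hedged sketch: an ExtraTrees regressor predicting melt pool dimensions from power and velocity, and a second model predicting the spatter-indicated volume from log-transformed inputs. The synthetic data, value ranges, and feature names below are illustrative assumptions, not the paper's 281-condition LPBF dataset.

```python
# Hedged sketch of the regression workflow: ExtraTrees models predicting
# melt pool dimensions from (power, velocity), and spatter-indicated volume
# from log-transformed inputs. All data below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 281                                   # number of process conditions, as in the paper
power = rng.uniform(100, 400, n)          # W, assumed range
velocity = rng.uniform(0.4, 2.0, n)       # m/s, assumed range

# Synthetic stand-ins for melt pool length/depth and spatter-indicated volume.
length = 50 + 0.4 * power - 12 * velocity + rng.normal(0, 2, n)
depth = 20 + 0.25 * power - 8 * velocity + rng.normal(0, 2, n)
spatter_volume = np.exp(0.006 * power + 0.5 * velocity) + rng.normal(0, 1, n)

X = np.column_stack([power, velocity])
Y_dims = np.column_stack([length, depth])

# 1) Melt pool dimensions from process conditions.
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y_dims, test_size=0.2, random_state=0)
dims_model = ExtraTreesRegressor(n_estimators=300, random_state=0).fit(X_tr, Y_tr)
print("melt pool dimensions R2:", r2_score(Y_te, dims_model.predict(X_te)))

# 2) Spatter-indicated volume, with log-transformed inputs as in the abstract.
Xl_tr, Xl_te, s_tr, s_te = train_test_split(np.log(X), spatter_volume,
                                            test_size=0.2, random_state=0)
spatter_model = ExtraTreesRegressor(n_estimators=300, random_state=0).fit(Xl_tr, s_tr)
print("spatter volume R2:", r2_score(s_te, spatter_model.predict(Xl_te)))
```

The paper's polynomial symbolic regression component is not reproduced here; under similar assumptions it could be approximated with PolynomialFeatures followed by a linear model.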
Related papers
- SMILE: Zero-Shot Sparse Mixture of Low-Rank Experts Construction From Pre-Trained Foundation Models [85.67096251281191]
We present an innovative approach to model fusion called zero-shot Sparse MIxture of Low-rank Experts (SMILE) construction.
SMILE allows for the upscaling of source models into an MoE model without extra data or further training.
We conduct extensive experiments across diverse scenarios, such as image classification and text generation tasks, using full fine-tuning and LoRA fine-tuning.
arXiv Detail & Related papers (2024-08-19T17:32:15Z)
- Machine learning surrogates for efficient hydrologic modeling: Insights from stochastic simulations of managed aquifer recharge [0.0]
We apply this workflow to simulations of variably saturated groundwater flow at a prospective managed aquifer recharge site.
We show that machine learning surrogate models can achieve under 10% mean absolute percentage error while yielding order-of-magnitude runtime savings.
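As a rough illustration of this surrogate-plus-MAPE workflow, the sketch below trains a regressor against a stand-in simulator and reports mean absolute percentage error; the stand-in function and parameter ranges are assumptions, not the study's hydrologic model.

```python
# Illustrative sketch (not the paper's code): train an ML surrogate for an
# expensive simulator and check mean absolute percentage error (MAPE).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(500, 3))                # e.g. recharge-scenario parameters (assumed)
y = 10 + 5 * X[:, 0] + 2 * X[:, 1] ** 2 + rng.normal(0, 0.2, 500)  # stand-in simulator output

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
surrogate = RandomForestRegressor(n_estimators=200, random_state=1).fit(X_tr, y_tr)
mape = mean_absolute_percentage_error(y_te, surrogate.predict(X_te))
print(f"surrogate MAPE: {100 * mape:.1f}%")         # target: under 10%, as reported
```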
arXiv Detail & Related papers (2024-07-30T15:24:27Z)
- Integrating Multi-Physics Simulations and Machine Learning to Define the Spatter Mechanism and Process Window in Laser Powder Bed Fusion [6.024307115154315]
In this work, we investigate the mechanism of spatter formation using a high-fidelity modelling tool built to simulate the multi-physics phenomena in LPBF.
To understand spatter behavior and formation, we characterize its properties at ejection and evaluate how they vary from those of the meltpool, the source where it is formed.
The relationship between the spatter and the meltpool was evaluated via correlation analysis and machine learning (ML) classification algorithms.
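A hedged sketch of this kind of correlation-plus-classification analysis is shown below; the melt pool features, the synthetic spatter rule, and the RandomForest classifier are illustrative assumptions rather than the paper's actual features or models.

```python
# Hedged sketch: correlation analysis and an ML classifier linking melt pool
# state to spatter occurrence. Features, units, and the spatter rule are
# synthetic assumptions, not the paper's simulation outputs.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 400
df = pd.DataFrame({
    "meltpool_temperature": rng.normal(2500, 200, n),  # K, assumed
    "meltpool_velocity": rng.normal(1.0, 0.3, n),       # m/s, assumed
    "meltpool_depth": rng.normal(80, 15, n),             # um, assumed
})
# Synthetic rule: hotter, faster melt pools eject spatter more often.
logit = 0.004 * (df["meltpool_temperature"] - 2500) + 2.0 * (df["meltpool_velocity"] - 1.0)
df["spatter"] = ((logit + rng.normal(0, 0.5, n)) > 0).astype(int)

# Correlation of each melt pool feature with spatter occurrence.
print(df.corr()["spatter"].drop("spatter"))

# Classification: can the melt pool state predict whether spatter is ejected?
clf = RandomForestClassifier(n_estimators=200, random_state=2)
scores = cross_val_score(clf, df.drop(columns="spatter"), df["spatter"], cv=5)
print("mean classification accuracy:", scores.mean())
```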
arXiv Detail & Related papers (2024-05-13T15:08:02Z)
- Deep Neural Operator Enabled Digital Twin Modeling for Additive Manufacturing [9.639126204112937]
A digital twin (DT) is a virtual replica of the real-world physical process.
We present a deep neural operator enabled computational framework of the DT for closed-loop feedback control of the L-PBF process.
The developed DT is envisioned to guide the AM process and facilitate high-quality manufacturing.
arXiv Detail & Related papers (2024-05-13T03:53:46Z)
- Multi-fidelity surrogate with heterogeneous input spaces for modeling melt pools in laser-directed energy deposition [0.0]
Multi-fidelity (MF) modeling is a powerful statistical approach that can intelligently blend data from varied fidelity sources.
One major challenge in using MF surrogates to merge a hierarchy of melt pool models is the variability in input spaces.
This paper introduces a novel approach for constructing an MF surrogate for predicting melt pool geometry by integrating models of varying complexity.
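One common way to build such a surrogate is an additive low-fidelity-plus-correction scheme, sketched below under the simplifying assumption of shared (power, velocity) inputs; the stand-in melt pool width functions and Gaussian-process choice are illustrative, not the paper's hierarchy of models.

```python
# Minimal additive-correction multi-fidelity sketch (a common MF scheme, not
# necessarily the paper's approach): fit a low-fidelity model on many cheap
# samples, then model the low-to-high-fidelity discrepancy with few expensive samples.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)

def melt_pool_width_lf(power, velocity):      # cheap analytical stand-in
    return 0.3 * power / np.sqrt(velocity)

def melt_pool_width_hf(power, velocity):      # "expensive" stand-in with extra physics
    return melt_pool_width_lf(power, velocity) * (1 + 0.1 * np.sin(velocity)) + 5

# Many low-fidelity samples, few high-fidelity samples.
X_lf = np.column_stack([rng.uniform(100, 400, 200), rng.uniform(0.4, 2.0, 200)])
X_hf = np.column_stack([rng.uniform(100, 400, 15), rng.uniform(0.4, 2.0, 15)])

lf_model = GaussianProcessRegressor(kernel=RBF(length_scale=[100.0, 1.0]), alpha=1e-6).fit(
    X_lf, melt_pool_width_lf(X_lf[:, 0], X_lf[:, 1]))
residual = melt_pool_width_hf(X_hf[:, 0], X_hf[:, 1]) - lf_model.predict(X_hf)
corr_model = GaussianProcessRegressor(kernel=RBF(length_scale=[100.0, 1.0]), alpha=1e-6).fit(
    X_hf, residual)

def mf_predict(X):
    # Multi-fidelity prediction = low-fidelity trend + learned correction.
    return lf_model.predict(X) + corr_model.predict(X)

X_test = np.array([[250.0, 1.0]])
print("MF prediction:", mf_predict(X_test), "HF truth:", melt_pool_width_hf(250.0, 1.0))
```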
arXiv Detail & Related papers (2024-03-19T20:12:46Z)
- Data-free Weight Compress and Denoise for Large Language Models [101.53420111286952]
We propose a novel approach termed Data-free Joint Rank-k Approximation for compressing the parameter matrices.
We prune 80% of the model's parameters while retaining 93.43% of the original performance, without any calibration data.
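The basic building block, a data-free rank-k approximation of a single weight matrix via truncated SVD, can be sketched as follows; the paper's joint, cross-layer treatment is not reproduced here, and the matrix below is a random stand-in.

```python
# Illustrative rank-k approximation of a weight matrix via truncated SVD,
# the basic building block behind data-free low-rank compression.
import numpy as np

rng = np.random.default_rng(4)
W = rng.normal(size=(1024, 1024))          # stand-in for a model weight matrix

k = 128                                    # target rank (assumed)
U, S, Vt = np.linalg.svd(W, full_matrices=False)
W_k = (U[:, :k] * S[:k]) @ Vt[:k, :]       # best rank-k approximation (Eckart-Young)

original_params = W.size
compressed_params = U[:, :k].size + k + Vt[:k, :].size
print("parameter ratio:", compressed_params / original_params)
print("relative reconstruction error:", np.linalg.norm(W - W_k) / np.linalg.norm(W))
```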
arXiv Detail & Related papers (2024-02-26T05:51:47Z)
- Scaling Relationship on Learning Mathematical Reasoning with Large Language Models [75.29595679428105]
We investigate how the pre-training loss, supervised data amount, and augmented data amount influence the reasoning performances of a supervised LLM.
We find that rejection sampling from multiple models pushes LLaMA-7B to an accuracy of 49.3% on GSM8K, significantly outperforming the supervised fine-tuning (SFT) accuracy of 35.9%.
arXiv Detail & Related papers (2023-08-03T15:34:01Z)
- Learning Controllable Adaptive Simulation for Multi-resolution Physics [86.8993558124143]
We introduce Learning controllable Adaptive simulation for Multi-resolution Physics (LAMP) as the first full deep learning-based surrogate model.
LAMP consists of a Graph Neural Network (GNN) for learning the forward evolution, and a GNN-based actor-critic for learning the policy of spatial refinement and coarsening.
We demonstrate that LAMP outperforms state-of-the-art deep learning surrogate models and can adaptively trade off computation to reduce long-term prediction error.
arXiv Detail & Related papers (2023-05-01T23:20:27Z)
- Predictable MDP Abstraction for Unsupervised Model-Based RL [93.91375268580806]
We propose predictable MDP abstraction (PMA).
Instead of training a predictive model on the original MDP, we train a model on a transformed MDP with a learned action space.
We theoretically analyze PMA and empirically demonstrate that PMA leads to significant improvements over prior unsupervised model-based RL approaches.
arXiv Detail & Related papers (2023-02-08T07:37:51Z)
- MeltpoolNet: Melt pool Characteristic Prediction in Metal Additive Manufacturing Using Machine Learning [0.39577682622066257]
Characterizing meltpool shape and geometry is essential in metal Additive Manufacturing (MAM) to control the printing process and avoid defects.
Machine learning (ML) techniques can be useful in connecting process parameters to the type of flaws in the meltpool.
In this work, we introduced a comprehensive framework for benchmarking ML for melt pool characterization.
arXiv Detail & Related papers (2022-01-26T04:08:56Z)
- Prediction of liquid fuel properties using machine learning models with Gaussian processes and probabilistic conditional generative learning [56.67751936864119]
The present work aims to construct cheap-to-compute machine learning (ML) models to act as closure equations for predicting the physical properties of alternative fuels.
Those models can be trained using the database from MD simulations and/or experimental measurements in a data-fusion-fidelity approach.
The results show that ML models can accurately predict the fuel properties across a wide range of pressure and temperature conditions.
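A minimal sketch of such a property model, assuming a Gaussian-process regressor over (pressure, temperature) with predictive uncertainty; the synthetic data and the viscosity-like stand-in target are illustrative, not MD or experimental measurements.

```python
# Hedged sketch: a Gaussian-process surrogate mapping (pressure, temperature)
# to a fuel property, with predictive uncertainty.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)
pressure = rng.uniform(1, 50, 80)          # bar, assumed range
temperature = rng.uniform(300, 700, 80)    # K, assumed range
X = np.column_stack([pressure, temperature])
# Synthetic stand-in for a fuel property (e.g. a viscosity-like quantity).
prop = 2.0 + 0.01 * pressure - 0.002 * (temperature - 300) + rng.normal(0, 0.02, 80)

kernel = RBF(length_scale=[10.0, 100.0]) + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, prop)

X_new = np.array([[20.0, 500.0]])
mean, std = gp.predict(X_new, return_std=True)
print(f"predicted property: {mean[0]:.3f} +/- {std[0]:.3f}")
```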
arXiv Detail & Related papers (2021-10-18T14:43:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.