High-fidelity Grain Growth Modeling: Leveraging Deep Learning for Fast Computations
- URL: http://arxiv.org/abs/2505.05354v1
- Date: Thu, 08 May 2025 15:43:40 GMT
- Title: High-fidelity Grain Growth Modeling: Leveraging Deep Learning for Fast Computations
- Authors: Pungponhavoan Tep, Marc Bernacki
- Abstract summary: We introduce a machine learning framework that combines a Convolutional Long Short-Term Memory network with an Autoencoder to efficiently predict grain growth evolution. Results demonstrate that our machine learning approach accelerates grain growth prediction by up to 89×.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Grain growth simulation is crucial for predicting metallic material microstructure evolution during annealing and the resulting final mechanical properties, but traditional partial differential equation-based methods are computationally expensive, creating bottlenecks in materials design and manufacturing. In this work, we introduce a machine learning framework that combines a Convolutional Long Short-Term Memory network with an Autoencoder to efficiently predict grain growth evolution. Our approach captures both spatial and temporal aspects of grain evolution while encoding high-dimensional grain structure data into a compact latent space for pattern learning, enhanced by a novel composite loss function combining Mean Squared Error, Structural Similarity Index Measurement, and Boundary Preservation to maintain the structural integrity of the grain boundary topology of the prediction. Results demonstrate that our machine learning approach accelerates grain growth prediction by up to 89×, reducing computation time from 10 minutes to approximately 10 seconds while maintaining high-fidelity predictions. The best model (S-30-30) achieved a structural similarity score of 86.71% and a mean grain size error of just 0.07%. All models accurately captured grain boundary topology, morphology, and size distributions. This approach enables rapid microstructural prediction for applications where conventional simulations are prohibitively time-consuming, potentially accelerating innovation in materials science and manufacturing.
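The composite loss described in the abstract can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: it combines MSE, a simplified global SSIM term (the paper presumably uses a windowed SSIM), and a crude boundary-preservation term that compares gradient-magnitude maps as a proxy for grain boundaries. The weights `w_mse`, `w_ssim`, and `w_bnd` are hypothetical placeholders.

```python
import numpy as np

def mse(pred, target):
    """Mean squared error between two images."""
    return float(np.mean((pred - target) ** 2))

def ssim_global(pred, target, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified SSIM computed globally over the whole image (no sliding window)."""
    mu_p, mu_t = pred.mean(), target.mean()
    var_p, var_t = pred.var(), target.var()
    cov = ((pred - mu_p) * (target - mu_t)).mean()
    return float(((2 * mu_p * mu_t + c1) * (2 * cov + c2)) /
                 ((mu_p ** 2 + mu_t ** 2 + c1) * (var_p + var_t + c2)))

def boundary_map(img):
    """Crude grain-boundary proxy: gradient magnitude of the field."""
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy)

def composite_loss(pred, target, w_mse=1.0, w_ssim=1.0, w_bnd=1.0):
    """Weighted sum of MSE, (1 - SSIM), and a boundary-map MSE."""
    l_mse = mse(pred, target)
    l_ssim = 1.0 - ssim_global(pred, target)  # SSIM = 1 for identical images
    l_bnd = mse(boundary_map(pred), boundary_map(target))
    return w_mse * l_mse + w_ssim * l_ssim + w_bnd * l_bnd

# Sanity check: identical images give (near-)zero loss.
img = np.random.default_rng(0).random((32, 32))
print(composite_loss(img, img))  # ≈ 0.0
```

In a training loop each term would be differentiable (e.g. implemented with a deep learning framework's tensor ops) so the network can be optimized against the combined objective.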
Related papers
- Fully Convolutional Spatiotemporal Learning for Microstructure Evolution Prediction [0.5437050212139087]
Traditional simulation methods are expensive due to the need to solve complex partial differential equations at fine resolutions. We propose a deep learning framework for microstructural evolution prediction that maintains high accuracy. Compared to recurrent neural architectures, our model achieves state-of-the-art predictive performance with significantly reduced computational cost in both training and inference.
arXiv Detail & Related papers (2026-02-23T14:55:28Z) - Scalable Spatio-Temporal SE(3) Diffusion for Long-Horizon Protein Dynamics [51.85385061275941]
Molecular dynamics (MD) simulations remain the gold standard for studying protein dynamics. Recent generative models have shown promise in accelerating simulations, yet they struggle with long-horizon generation. We present STAR-MD, a scalable diffusion model that generates physically plausible protein trajectories over micro-scale timescales.
arXiv Detail & Related papers (2026-02-02T14:13:28Z) - Demystifying Data-Driven Probabilistic Medium-Range Weather Forecasting [63.8116386935854]
We demonstrate that state-of-the-art probabilistic skill requires neither intricate architectural constraints nor specialized training. We introduce a scalable framework for learning multi-scale atmospheric dynamics by combining a directly downsampled latent space with a history-conditioned local projector. We find that our framework design is robust to the choice of probabilistic estimators, seamlessly supporting interpolants, diffusion models, and CRPS-based ensemble training.
arXiv Detail & Related papers (2026-01-26T03:52:16Z) - Machine-learning-enabled interpretation of tribological deformation patterns in large-scale MD data [0.0]
Grain-orientation-colored computed tomography images obtained from CuNi alloy simulations were first compressed by an autoencoder into a 32-dimensional global feature vector. The reconstructed images retained the essential microstructural motifs, grain boundaries, stacking faults, twins, and partial lattice rotations, while omitting only the finest defects. A CNN-MLP model predicting the dominant deformation pattern achieves an accuracy of approximately 96% on validation data.
arXiv Detail & Related papers (2025-12-05T15:39:13Z) - Scaling Kinetic Monte-Carlo Simulations of Grain Growth with Combined Convolutional and Graph Neural Networks [3.4003175225909015]
Graph neural networks (GNN) have emerged as a promising machine learning method for simulating processes such as grain growth. We propose a hybrid architecture combining a convolutional neural network (CNN) based autoencoder to compress spatial dimensions, and a GNN that evolves the microstructure in the latent space. Results demonstrate that the new design significantly reduces computational costs while using fewer message passing layers.
arXiv Detail & Related papers (2025-11-22T00:18:03Z) - Predicting Grain Growth in Polycrystalline Materials Using Deep Learning Time Series Models [0.9558392439655014]
Grain growth strongly influences the mechanical behavior of materials, making its prediction a key objective in microstructural engineering. In this study, several deep learning approaches were evaluated, including recurrent neural networks (RNN), long short-term memory (LSTM), temporal convolutional networks (TCN), and transformers. The LSTM network achieved the highest accuracy (above 90%) and the most stable performance, maintaining physically consistent predictions over extended horizons.
arXiv Detail & Related papers (2025-11-07T18:29:42Z) - Scaling Collapse Reveals Universal Dynamics in Compute-Optimally Trained Neural Networks [59.552873049024775]
We show that compute-optimally trained models exhibit a remarkably precise universality. With learning rate decay, the collapse becomes so tight that differences in the normalized curves across models fall below the noise floor. We explain these phenomena by connecting collapse to the power-law structure in typical neural scaling laws.
arXiv Detail & Related papers (2025-07-02T20:03:34Z) - Teaching Artificial Intelligence to Perform Rapid, Resolution-Invariant Grain Growth Modeling via Fourier Neural Operator [0.0]
Microstructural evolution plays a critical role in shaping the physical, optical, and electronic properties of materials. Traditional phase-field modeling accurately simulates these phenomena but is computationally intensive. This study introduces a novel approach utilizing a Fourier Neural Operator (FNO) to achieve resolution-invariant modeling.
arXiv Detail & Related papers (2025-03-18T11:19:08Z) - Hybrid machine learning based scale bridging framework for permeability prediction of fibrous structures [0.0]
This study introduces a hybrid machine learning-based scale-bridging framework for predicting the permeability of fibrous textile structures. Four methodologies were evaluated: Single Scale Method (SSM), Simple Upscaling Method (SUM), Scale-Bridging Method (SBM), and Fully Resolved Model (FRM).
arXiv Detail & Related papers (2025-02-07T16:09:25Z) - MultiPDENet: PDE-embedded Learning with Multi-time-stepping for Accelerated Flow Simulation [48.41289705783405]
We propose a PDE-embedded network with multiscale time stepping (MultiPDENet). In particular, we design a convolutional filter based on the structure of finite differences with a small number of parameters to optimize. A Physics Block with a 4th-order Runge-Kutta integrator at the fine time scale is established that embeds the structure of PDEs to guide the prediction.
arXiv Detail & Related papers (2025-01-27T12:15:51Z) - Data-free Weight Compress and Denoise for Large Language Models [96.68582094536032]
We propose a novel approach termed Data-free Joint Rank-k Approximation for compressing the parameter matrices. We achieve a model pruning of 80% of parameters while retaining 93.43% of the original performance without any calibration data.
arXiv Detail & Related papers (2024-02-26T05:51:47Z) - Learning Robust Precipitation Forecaster by Temporal Frame Interpolation [65.5045412005064]
We develop a robust precipitation forecasting model that demonstrates resilience against spatial-temporal discrepancies.
Our approach has led to significant improvements in forecasting precision, culminating in our model securing 1st place in the transfer learning leaderboard of the Weather4cast'23 competition.
arXiv Detail & Related papers (2023-11-30T08:22:08Z) - Physics-Enhanced TinyML for Real-Time Detection of Ground Magnetic Anomalies [0.0]
Space weather phenomena like geomagnetic disturbances (GMDs) pose significant risks to critical technological infrastructure.
This paper develops a physics-guided TinyML framework to address the above challenges.
It integrates physics-based regularization at the stages of model training and compression, thereby augmenting the reliability of predictions.
arXiv Detail & Related papers (2023-11-19T23:20:16Z) - Contextualizing MLP-Mixers Spatiotemporally for Urban Data Forecast at Scale [54.15522908057831]
We propose an adapted version of the MLP-Mixer for STTD forecast at scale.
Our results surprisingly show that this simple-yet-effective solution can rival SOTA baselines when tested on several traffic benchmarks.
Our findings contribute to the exploration of simple-yet-effective models for real-world STTD forecasting.
arXiv Detail & Related papers (2023-07-04T05:19:19Z) - Neural Network Accelerated Process Design of Polycrystalline Microstructures [23.897115046430635]
We develop a neural network (NN)-based method with physics-infused constraints to predict microstructural evolution.
In this study, our NN-based method is applied to maximize the homogenized stiffness of a Copper microstructure.
It is found to be 686 times faster while achieving 0.053% error in the resulting homogenized stiffness compared to the traditional finite element simulator.
arXiv Detail & Related papers (2023-04-11T20:35:29Z) - Deep-learning-based prediction of nanoparticle phase transitions during in situ transmission electron microscopy [3.613625739845355]
We train deep learning models to predict a sequence of future video frames based on the input of a sequence of previous frames.
This capability provides insight into size-dependent structural changes in Au nanoparticles under dynamic reaction conditions.
It may be possible to anticipate the next steps of a chemical reaction for emerging automated experimentation platforms.
arXiv Detail & Related papers (2022-05-23T15:50:24Z) - Graph convolutional network for predicting abnormal grain growth in Monte Carlo simulations of microstructural evolution [0.0]
We generate a large dataset of Monte Carlo simulations of abnormal grain growth.
We train simple graph convolution networks to predict which initial microstructures will exhibit abnormal grain growth.
The graph neural network outperformed the computer vision method and achieved 73% prediction accuracy and fewer false positives.
arXiv Detail & Related papers (2021-10-18T13:50:43Z) - Efficient Micro-Structured Weight Unification and Pruning for Neural Network Compression [56.83861738731913]
Deep Neural Network (DNN) models are essential for practical applications, especially for resource limited devices.
Previous unstructured or structured weight pruning methods can hardly truly accelerate inference.
We propose a generalized weight unification framework at a hardware-compatible micro-structured level to achieve a high amount of compression and acceleration.
arXiv Detail & Related papers (2021-06-15T17:22:59Z) - GeoMol: Torsional Geometric Generation of Molecular 3D Conformer Ensembles [60.12186997181117]
Prediction of a molecule's 3D conformer ensemble from the molecular graph holds a key role in areas of cheminformatics and drug discovery.
Existing generative models have several drawbacks, including a lack of modeling of important molecular geometry elements.
We propose GeoMol, an end-to-end, non-autoregressive and SE(3)-invariant machine learning approach to generate 3D conformers.
arXiv Detail & Related papers (2021-06-08T14:17:59Z) - Learning Output Embeddings in Structured Prediction [73.99064151691597]
A powerful and flexible approach to structured prediction consists in embedding the structured objects to be predicted into a feature space of possibly infinite dimension.
A prediction in the original space is computed by solving a pre-image problem.
In this work, we propose to jointly learn a finite approximation of the output embedding and the regression function into the new feature space.
arXiv Detail & Related papers (2020-07-29T09:32:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.