Learning two-phase microstructure evolution using neural operators and autoencoder architectures
- URL: http://arxiv.org/abs/2204.07230v1
- Date: Mon, 11 Apr 2022 18:34:59 GMT
- Title: Learning two-phase microstructure evolution using neural operators and autoencoder architectures
- Authors: Vivek Oommen, Khemraj Shukla, Somdatta Goswami, Remi Dingreville,
George Em Karniadakis
- Abstract summary: We develop a new framework that integrates a convolutional autoencoder architecture with a deep neural operator (DeepONet).
DeepONet learns the mesoscale dynamics of the microstructure evolution in the latent space.
The result is an efficient and accurate accelerated phase-field framework.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Phase-field modeling is an effective mesoscale method for capturing the
evolution dynamics of materials, e.g., in spinodal decomposition of a two-phase
mixture. However, the accuracy of high-fidelity phase field models comes at a
substantial computational cost. Hence, fast and generalizable surrogate models
are needed to alleviate the cost in computationally taxing processes such as the
optimization and design of materials. The intrinsic discontinuous nature of the
physical phenomena incurred by the presence of sharp phase boundaries makes the
training of the surrogate model cumbersome. We develop a new framework that
integrates a convolutional autoencoder architecture with a deep neural operator
(DeepONet) to learn the dynamic evolution of a two-phase mixture. We utilize
the convolutional autoencoder to provide a compact representation of the
microstructure data in a low-dimensional latent space. DeepONet, which consists
of two sub-networks, one for encoding the input function at a fixed number of
sensor locations (branch net) and another for encoding the locations for the
output functions (trunk net), learns the mesoscale dynamics of the
microstructure evolution in the latent space. The decoder part of the
convolutional autoencoder can then reconstruct the time-evolved microstructure
from the DeepONet predictions. The result is an efficient and accurate
accelerated phase-field framework that outperforms other neural-network-based
approaches while at the same time being robust to noisy inputs.
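To make the architecture concrete, the following is a minimal PyTorch sketch of the pipeline described above: a convolutional autoencoder compresses the phase field to a latent vector, a DeepONet advances that vector in time, and the decoder reconstructs the evolved microstructure. In the standard DeepONet formulation the operator prediction takes the form G(u)(y) ≈ sum_k b_k(u) t_k(y), with branch outputs b_k and trunk outputs t_k; the sketch uses a common vector-output variant in which the elementwise branch-trunk product is linearly projected back to the latent dimension. All names, layer sizes, the 64x64 grid, and the 100-dimensional latent space are illustrative assumptions, not the configuration reported in the paper.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the autoencoder + DeepONet framework described in the
# abstract. Shapes and layer sizes are illustrative assumptions only.

class ConvAutoencoder(nn.Module):
    """Compresses a 2D phase field into a low-dimensional latent vector and back."""
    def __init__(self, latent_dim: int = 100):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),   # 16x16 -> 32x32
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(), # 32x32 -> 64x64
        )

class LatentDeepONet(nn.Module):
    """Branch net encodes the latent initial state; trunk net encodes the query time."""
    def __init__(self, latent_dim: int = 100, width: int = 128):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(latent_dim, width), nn.Tanh(),
                                    nn.Linear(width, width))
        self.trunk = nn.Sequential(nn.Linear(1, width), nn.Tanh(),
                                   nn.Linear(width, width))
        self.head = nn.Linear(width, latent_dim)  # project back to the latent space

    def forward(self, z0: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # z0: (batch, latent_dim) latent initial condition; t: (batch, 1) query time.
        return self.head(self.branch(z0) * self.trunk(t))

# Inference: encode the initial microstructure, evolve it in latent space, decode.
ae, don = ConvAutoencoder(), LatentDeepONet()
phi0 = torch.rand(8, 1, 64, 64)          # batch of initial phase fields
z0 = ae.encoder(phi0)                    # compact latent representation
zt = don(z0, torch.full((8, 1), 0.5))    # latent state at query time t = 0.5
phi_t = ae.decoder(zt)                   # reconstructed time-evolved microstructure
```

Because the DeepONet operates entirely on the compact latent vectors rather than the full-resolution fields, time stepping is cheap, and the decoder is invoked only once per queried time.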
Related papers
- Deep Learning-Driven Prediction of Microstructure Evolution via Latent Space Interpolation [0.0]
Phase-field models accurately simulate microstructure evolution, but their dependence on solving complex differential equations makes them computationally expensive.
This work achieves a significant acceleration via a novel deep learning-based framework, utilizing a Variational Autoencoder (CVAE) coupled with Cubic Spline Interpolation and Spherical Linear Interpolation (SLERP).
We demonstrate the method for binary spinodal decomposition by predicting microstructure evolution for intermediate alloy compositions from a limited set of training compositions.
arXiv Detail & Related papers (2025-08-03T16:22:15Z)
- Langevin Flows for Modeling Neural Latent Dynamics [81.81271685018284]
We introduce LangevinFlow, a sequential Variational Auto-Encoder where the time evolution of latent variables is governed by the underdamped Langevin equation.
Our approach incorporates physical priors -- such as inertia, damping, a learned potential function, and forces -- to represent both autonomous and non-autonomous processes in neural systems.
Our method outperforms state-of-the-art baselines on synthetic neural populations generated by a Lorenz attractor.
arXiv Detail & Related papers (2025-07-15T17:57:48Z)
- Uncovering Magnetic Phases with Synthetic Data and Physics-Informed Training [0.0]
We investigate the efficient learning of magnetic phases using artificial neural networks trained on synthetic data.
We incorporate two key forms of physics-informed guidance to enhance model performance.
Our results show that synthetic, structured, and computationally efficient training schemes can reveal physically meaningful phase boundaries.
arXiv Detail & Related papers (2025-05-15T15:16:16Z)
- Making Neural Networks More Suitable for Approximate Clifford+T Circuit Synthesis [0.7449724123186384]
We develop deep learning techniques that improve performance on reinforcement-learning-guided quantum circuit synthesis.
We show how augmenting data with small random unitary perturbations during training enables more robust learning.
We also show how encoding numerical data with techniques from image processing allows networks to better detect small but significant changes in data.
arXiv Detail & Related papers (2025-04-22T15:51:32Z)
- Teaching Artificial Intelligence to Perform Rapid, Resolution-Invariant Grain Growth Modeling via Fourier Neural Operator [0.0]
Microstructural evolution plays a critical role in shaping the physical, optical, and electronic properties of materials.
Traditional phase-field modeling accurately simulates these phenomena but is computationally intensive.
This study introduces a novel approach utilizing a Fourier Neural Operator (FNO) to achieve resolution-invariant modeling.
arXiv Detail & Related papers (2025-03-18T11:19:08Z)
- Multiscale Analysis of Woven Composites Using Hierarchical Physically Recurrent Neural Networks [0.0]
Multiscale homogenization of woven composites requires detailed micromechanical evaluations.
This study introduces a Hierarchical Physically Recurrent Neural Network (HPRNN) employing two levels of surrogate modeling.
arXiv Detail & Related papers (2025-03-06T19:02:32Z)
- Predicting Transonic Flowfields in Non-Homogeneous Unstructured Grids Using Autoencoder Graph Convolutional Networks [0.0]
This paper focuses on addressing challenges posed by non-homogeneous unstructured grids, commonly used in Computational Fluid Dynamics (CFD).
The core of our approach centers on geometric deep learning, specifically the utilization of a graph convolutional network (GCN).
The novel Autoencoder GCN architecture enhances prediction accuracy by propagating information to distant nodes and emphasizing influential points.
arXiv Detail & Related papers (2024-05-07T15:18:21Z)
- Symplectic Autoencoders for Model Reduction of Hamiltonian Systems [0.0]
It is crucial to preserve the symplectic structure associated with the system in order to ensure long-term numerical stability.
We propose a new neural network architecture in the spirit of autoencoders, which are established tools for dimension reduction.
In order to train the network, a non-standard gradient descent approach is applied.
arXiv Detail & Related papers (2023-12-15T18:20:25Z)
- Dynamic Encoding and Decoding of Information for Split Learning in Mobile-Edge Computing: Leveraging Information Bottleneck Theory [1.1151919978983582]
Split learning is a privacy-preserving distributed learning paradigm in which an ML model is split into two parts (i.e., an encoder and a decoder).
In mobile-edge computing, network functions can be trained via split learning where an encoder resides in a user equipment (UE) and a decoder resides in the edge network.
We present a new framework and training mechanism to enable a dynamic balancing of the transmission resource consumption with the informativeness of the shared latent representations.
arXiv Detail & Related papers (2023-09-06T07:04:37Z)
- Complexity Matters: Rethinking the Latent Space for Generative Modeling [65.64763873078114]
In generative modeling, numerous successful approaches leverage a low-dimensional latent space, e.g., Stable Diffusion.
In this study, we aim to shed light on this under-explored topic by rethinking the latent space from the perspective of model complexity.
arXiv Detail & Related papers (2023-07-17T07:12:29Z)
- Multiscale Graph Neural Network Autoencoders for Interpretable Scientific Machine Learning [0.0]
The goal of this work is to address two limitations in autoencoder-based models: latent space interpretability and compatibility with unstructured meshes.
This is accomplished here with the development of a novel graph neural network (GNN) autoencoding architecture with demonstrations on complex fluid flow applications.
arXiv Detail & Related papers (2023-02-13T08:47:11Z)
- Machine Learning model for gas-liquid interface reconstruction in CFD numerical simulations [59.84561168501493]
The volume of fluid (VoF) method is widely used in multi-phase flow simulations to track and locate the interface between two immiscible fluids.
A major bottleneck of the VoF method is the interface reconstruction step due to its high computational cost and low accuracy on unstructured grids.
We propose a machine learning enhanced VoF method based on Graph Neural Networks (GNN) to accelerate the interface reconstruction on general unstructured meshes.
arXiv Detail & Related papers (2022-07-12T17:07:46Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- CSformer: Bridging Convolution and Transformer for Compressive Sensing [65.22377493627687]
This paper proposes a hybrid framework that integrates the detailed spatial information captured by CNNs with the global context provided by transformers for enhanced representation learning.
The proposed approach is an end-to-end compressive image sensing method, composed of adaptive sampling and recovery.
The experimental results demonstrate the effectiveness of the dedicated transformer-based architecture for compressive sensing.
arXiv Detail & Related papers (2021-12-31T04:37:11Z)
- Adaptive Latent Space Tuning for Non-Stationary Distributions [62.997667081978825]
We present a method for adaptive tuning of the low-dimensional latent space of deep encoder-decoder style CNNs.
We demonstrate our approach for predicting the properties of a time-varying charged particle beam in a particle accelerator.
arXiv Detail & Related papers (2021-05-08T03:50:45Z)
- Neural Architecture Optimization with Graph VAE [21.126140965779534]
We propose an efficient NAS approach to optimize network architectures in a continuous space.
The framework jointly learns four components: the encoder, the performance predictor, the complexity predictor and the decoder.
arXiv Detail & Related papers (2020-06-18T07:05:48Z)
- Binarizing MobileNet via Evolution-based Searching [66.94247681870125]
We propose the use of evolutionary search to facilitate the construction and training scheme when binarizing MobileNet.
Inspired by one-shot architecture search frameworks, we manipulate the idea of group convolution to design efficient 1-Bit Convolutional Neural Networks (CNNs).
Our objective is to come up with a tiny yet efficient binary neural architecture by exploring the best candidates of the group convolution.
arXiv Detail & Related papers (2020-05-13T13:25:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.