Learning two-phase microstructure evolution using neural operators and
autoencoder architectures
- URL: http://arxiv.org/abs/2204.07230v1
- Date: Mon, 11 Apr 2022 18:34:59 GMT
- Title: Learning two-phase microstructure evolution using neural operators and
autoencoder architectures
- Authors: Vivek Oommen, Khemraj Shukla, Somdatta Goswami, Remi Dingreville,
George Em Karniadakis
- Abstract summary: We develop a new framework that integrates a convolutional autoencoder architecture with a deep neural operator (DeepONet).
DeepONet learns the mesoscale dynamics of the microstructure evolution in the latent space.
The result is an efficient and accurate accelerated phase-field framework.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Phase-field modeling is an effective mesoscale method for capturing the
evolution dynamics of materials, e.g., in spinodal decomposition of a two-phase
mixture. However, the accuracy of high-fidelity phase field models comes at a
substantial computational cost. Hence, fast and generalizable surrogate models
are needed to alleviate the cost in computationally taxing processes such as in
optimization and design of materials. The intrinsic discontinuous nature of the
physical phenomena incurred by the presence of sharp phase boundaries makes the
training of the surrogate model cumbersome. We develop a new framework that
integrates a convolutional autoencoder architecture with a deep neural operator
(DeepONet) to learn the dynamic evolution of a two-phase mixture. We utilize
the convolutional autoencoder to provide a compact representation of the
microstructure data in a low-dimensional latent space. DeepONet, which consists
of two sub-networks, one for encoding the input function at a fixed number of
sensor locations (branch net) and another for encoding the locations for the
output functions (trunk net), learns the mesoscale dynamics of the
microstructure evolution in the latent space. The decoder part of the
convolutional autoencoder can then reconstruct the time-evolved microstructure
from the DeepONet predictions. The result is an efficient and accurate
accelerated phase-field framework that outperforms other neural-network-based
approaches while at the same time being robust to noisy inputs.
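The pipeline described in the abstract can be sketched end to end. The snippet below is a toy NumPy illustration under assumed shapes (a 64-point field, an 8-dimensional latent space, untrained random weights), not the authors' implementation: an encoder compresses the microstructure field to a latent vector, a DeepONet whose branch net encodes the latent state and whose trunk net encodes the query time advances it, and a decoder maps the result back to the full field.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random-weight MLP returning a forward function (tanh hidden layers)."""
    params = [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
              for m, n in zip(sizes[:-1], sizes[1:])]
    def forward(x):
        for i, (W, b) in enumerate(params):
            x = x @ W + b
            if i < len(params) - 1:
                x = np.tanh(x)
        return x
    return forward

latent_dim, p = 8, 16                           # assumed latent size and DeepONet inner dimension
encoder = mlp([64, 32, latent_dim])             # stand-in for the convolutional encoder
decoder = mlp([latent_dim, 32, 64])             # stand-in for the convolutional decoder
branch = mlp([latent_dim, 32, latent_dim * p])  # branch net: encodes the input function
trunk = mlp([1, 32, p])                         # trunk net: encodes the output location (time)

def deeponet_step(z, t):
    """Predict the latent state at query time t from latent state z."""
    B = branch(z).reshape(latent_dim, p)  # per-latent-channel coefficients
    T = trunk(np.array([t]))              # basis functions evaluated at t
    return B @ T                          # inner product over the p modes

field = rng.standard_normal(64)  # flattened microstructure snapshot
z0 = encoder(field)              # compress to the latent space
z1 = deeponet_step(z0, 0.5)      # evolve the latent dynamics to t = 0.5
reconstruction = decoder(z1)     # reconstruct the time-evolved field
print(reconstruction.shape)      # (64,)
```

The split into a branch net (input function at fixed sensors) and a trunk net (query coordinate), combined by an inner product, is the defining DeepONet structure; here it acts on the autoencoder's latent coordinates rather than on the raw field.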
Related papers
- Predicting Transonic Flowfields in Non-Homogeneous Unstructured Grids Using Autoencoder Graph Convolutional Networks [0.0]
This paper focuses on addressing challenges posed by non-homogeneous unstructured grids, commonly used in Computational Fluid Dynamics (CFD).
The core of our approach centers on geometric deep learning, specifically the utilization of a graph convolutional network (GCN).
The novel Autoencoder GCN architecture enhances prediction accuracy by propagating information to distant nodes and emphasizing influential points.
arXiv Detail & Related papers (2024-05-07T15:18:21Z)
- Symplectic Autoencoders for Model Reduction of Hamiltonian Systems [0.0]
It is crucial to preserve the symplectic structure associated with the system in order to ensure long-term numerical stability.
We propose a new neural network architecture in the spirit of autoencoders, which are established tools for dimension reduction.
In order to train the network, a non-standard gradient descent approach is applied.
arXiv Detail & Related papers (2023-12-15T18:20:25Z)
- Dynamic Encoding and Decoding of Information for Split Learning in Mobile-Edge Computing: Leveraging Information Bottleneck Theory [1.1151919978983582]
Split learning is a privacy-preserving distributed learning paradigm in which an ML model is split into two parts (i.e., an encoder and a decoder).
In mobile-edge computing, network functions can be trained via split learning where an encoder resides in a user equipment (UE) and a decoder resides in the edge network.
We present a new framework and training mechanism to enable a dynamic balancing of the transmission resource consumption with the informativeness of the shared latent representations.
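The split placement described above can be illustrated with a toy forward pass. All names and shapes below are assumptions for illustration, not the paper's framework: the encoder runs on the user equipment, the decoder runs in the edge network, and only the compressed latent representation crosses the link.

```python
import numpy as np

rng = np.random.default_rng(1)

W_enc = rng.standard_normal((32, 4)) / np.sqrt(32)  # UE-side encoder weights
W_dec = rng.standard_normal((4, 10)) / np.sqrt(4)   # edge-side decoder weights

def ue_encode(x):
    """Runs on the user equipment: compress the input before transmission."""
    return np.tanh(x @ W_enc)

def edge_decode(z):
    """Runs in the edge network: finish the forward pass from the latent."""
    return z @ W_dec

x = rng.standard_normal(32)  # raw input never leaves the device
z = ue_encode(x)             # 4 floats transmitted instead of 32
y = edge_decode(z)           # edge completes the inference
print(z.size, y.size)        # 4 10
```

Shrinking or enlarging the latent dimension is the knob the paper's information-bottleneck view tunes: fewer transmitted floats cost informativeness, more cost transmission resources.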
arXiv Detail & Related papers (2023-09-06T07:04:37Z)
- Complexity Matters: Rethinking the Latent Space for Generative Modeling [65.64763873078114]
In generative modeling, numerous successful approaches leverage a low-dimensional latent space, e.g., Stable Diffusion.
In this study, we aim to shed light on this under-explored topic by rethinking the latent space from the perspective of model complexity.
arXiv Detail & Related papers (2023-07-17T07:12:29Z)
- Multiscale Graph Neural Network Autoencoders for Interpretable Scientific Machine Learning [0.0]
The goal of this work is to address two limitations in autoencoder-based models: latent space interpretability and compatibility with unstructured meshes.
This is accomplished here with the development of a novel graph neural network (GNN) autoencoding architecture with demonstrations on complex fluid flow applications.
arXiv Detail & Related papers (2023-02-13T08:47:11Z)
- Machine Learning model for gas-liquid interface reconstruction in CFD numerical simulations [59.84561168501493]
The volume of fluid (VoF) method is widely used in multi-phase flow simulations to track and locate the interface between two immiscible fluids.
A major bottleneck of the VoF method is the interface reconstruction step due to its high computational cost and low accuracy on unstructured grids.
We propose a machine learning enhanced VoF method based on Graph Neural Networks (GNN) to accelerate the interface reconstruction on general unstructured meshes.
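The interface-reconstruction step that the GNN accelerates can be shown in its simplest form. The sketch below is an illustrative 1D toy, not the paper's method: each cell stores the volume fraction of one fluid, and reconstruction recovers the interface position from those fractions; in a partially filled cell of width h filled from the left, the interface sits at x_left + alpha * h.

```python
def reconstruct_interface_1d(fractions, h=1.0):
    """Return the interface position from left-filled 1D volume fractions."""
    x = 0.0
    for alpha in fractions:
        if 0.0 < alpha < 1.0:  # a partially filled cell holds the interface
            return x + alpha * h
        x += h
    return None                # no interface in this stencil

# Fluid fills the first cell, 40% of the second, none of the third.
print(reconstruct_interface_1d([1.0, 0.4, 0.0]))  # 1.4
```

On general unstructured 2D/3D meshes this step requires fitting an oriented interface plane per cell, which is the expensive and inaccurate part the GNN surrogate replaces.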
arXiv Detail & Related papers (2022-07-12T17:07:46Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- CSformer: Bridging Convolution and Transformer for Compressive Sensing [65.22377493627687]
This paper proposes a hybrid framework that integrates the advantages of leveraging detailed spatial information from CNN and the global context provided by transformer for enhanced representation learning.
The proposed approach is an end-to-end compressive image sensing method, composed of adaptive sampling and recovery.
The experimental results demonstrate the effectiveness of the dedicated transformer-based architecture for compressive sensing.
arXiv Detail & Related papers (2021-12-31T04:37:11Z)
- Adaptive Latent Space Tuning for Non-Stationary Distributions [62.997667081978825]
We present a method for adaptive tuning of the low-dimensional latent space of deep encoder-decoder style CNNs.
We demonstrate our approach for predicting the properties of a time-varying charged particle beam in a particle accelerator.
arXiv Detail & Related papers (2021-05-08T03:50:45Z)
- Neural Architecture Optimization with Graph VAE [21.126140965779534]
We propose an efficient NAS approach to optimize network architectures in a continuous space.
The framework jointly learns four components: the encoder, the performance predictor, the complexity predictor and the decoder.
arXiv Detail & Related papers (2020-06-18T07:05:48Z)
- Binarizing MobileNet via Evolution-based Searching [66.94247681870125]
We propose the use of evolutionary search to facilitate the construction and training scheme when binarizing MobileNet.
Inspired by one-shot architecture search frameworks, we manipulate the idea of group convolution to design efficient 1-Bit Convolutional Neural Networks (CNNs).
Our objective is to come up with a tiny yet efficient binary neural architecture by exploring the best candidates of the group convolution.
arXiv Detail & Related papers (2020-05-13T13:25:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.