A Generative Machine Learning Model for Material Microstructure 3D
Reconstruction and Performance Evaluation
- URL: http://arxiv.org/abs/2402.15815v1
- Date: Sat, 24 Feb 2024 13:42:34 GMT
- Authors: Yilin Zheng and Zhigong Song
- Abstract summary: The dimensional extension from 2D to 3D is viewed as a highly challenging inverse problem from the current technological perspective.
A novel generative model that integrates the multiscale properties of U-Net with the generative capabilities of GAN has been proposed.
The model's accuracy is further improved by combining the image regularization loss with the Wasserstein distance loss.
- Score: 4.169915659794567
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The reconstruction of 3D microstructures from 2D slices is considered to hold
significant value in predicting the spatial structure and physical properties
of materials. The dimensional extension from 2D to 3D is viewed as a highly
challenging inverse problem from the current technological perspective.
Recently, methods based on generative adversarial networks have garnered
widespread attention. However, they are still hampered by numerous
limitations, including oversimplified models, a requirement for a substantial
number of training samples, and difficulties in achieving model convergence
during training. In light of this, a novel generative model that integrates the
multiscale properties of U-Net with the generative capabilities of GAN has
been proposed. Based on this, the innovative construction of a multi-scale
channel aggregation module, a multi-scale hierarchical feature aggregation
module, and a convolutional block attention mechanism can better capture the
properties of the material microstructure and extract the image information. The
model's accuracy is further improved by combining the image regularization loss
with the Wasserstein distance loss. In addition, this study utilizes the
anisotropy index to accurately distinguish the nature of the image, which can
clearly determine the isotropy and anisotropy of the image. It is also the first
time that the generation quality of material samples from different domains is
evaluated and the performance of the model itself is compared. The experimental
results demonstrate that the present model not only shows a very high
similarity between the generated 3D structures and real samples but is also
highly consistent with real data in terms of statistical data analysis.
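The abstract describes improving accuracy by combining an image regularization loss with the Wasserstein distance loss. The paper does not give the exact formulation, so the following is only a minimal NumPy sketch of how such a combined objective is commonly written in WGAN-style training: a critic term plus an L2 regularization term with a hypothetical weight `lam` (both the function names and the weight are assumptions, not taken from the paper).

```python
import numpy as np

def wasserstein_critic_loss(critic_real, critic_fake):
    """WGAN critic objective: maximize E[D(real)] - E[D(fake)],
    written here as a quantity to minimize."""
    return np.mean(critic_fake) - np.mean(critic_real)

def generator_loss(critic_fake, fake_slices, real_slices, lam=10.0):
    """Generator loss combining the Wasserstein term with an L2
    image regularization term, weighted by a hypothetical `lam`."""
    w_term = -np.mean(critic_fake)          # adversarial (Wasserstein) term
    reg_term = np.mean((fake_slices - real_slices) ** 2)  # pixel-wise L2
    return w_term + lam * reg_term
```

In practice the regularization term anchors the generated slices to reference data, which can stabilize convergence when training samples are scarce, one of the limitations the abstract highlights.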
Related papers
- MVGamba: Unify 3D Content Generation as State Space Sequence Modeling [150.80564081817786]
We introduce MVGamba, a general and lightweight Gaussian reconstruction model featuring a multi-view Gaussian reconstructor.
With off-the-shelf multi-view diffusion models integrated, MVGamba unifies 3D generation tasks from a single image, sparse images, or text prompts.
Experiments demonstrate that MVGamba outperforms state-of-the-art baselines in all 3D content generation scenarios with approximately only $0.1\times$ of the model size.
arXiv Detail & Related papers (2024-06-10T15:26:48Z) - InstantMesh: Efficient 3D Mesh Generation from a Single Image with Sparse-view Large Reconstruction Models [66.83681825842135]
InstantMesh is a feed-forward framework for instant 3D mesh generation from a single image.
It features state-of-the-art generation quality and significant training scalability.
We release all the code, weights, and demo of InstantMesh with the intention that it can make substantial contributions to the community of 3D generative AI.
arXiv Detail & Related papers (2024-04-10T17:48:37Z) - Denoising diffusion-based synthetic generation of three-dimensional (3D)
anisotropic microstructures from two-dimensional (2D) micrographs [0.0]
We present a framework for reconstruction of anisotropic microstructures based on conditional diffusion-based generative models (DGMs)
The proposed framework involves spatial connection of multiple 2D conditional DGMs, each trained to generate 2D microstructure samples for three different planes.
The results demonstrate that the framework is capable of reproducing not only the statistical distribution of material phases but also the material properties in 3D space.
arXiv Detail & Related papers (2023-12-13T01:36:37Z) - Using convolutional neural networks for stereological characterization
of 3D hetero-aggregates based on synthetic STEM data [0.0]
A parametric 3D model is presented, from which a wide spectrum of virtual hetero-aggregates can be generated.
The virtual structures are passed to a physics-based simulation tool in order to generate virtual scanning transmission electron microscopy (STEM) images.
Convolutional neural networks are trained to predict 3D structures of hetero-aggregates from 2D STEM images.
arXiv Detail & Related papers (2023-10-27T22:49:08Z) - Multi-plane denoising diffusion-based dimensionality expansion for
2D-to-3D reconstruction of microstructures with harmonized sampling [0.0]
This study proposes a novel framework for 2D-to-3D reconstruction of microstructures called Micro3Diff.
Specifically, this approach solely requires pre-trained DGMs for the generation of 2D samples.
A harmonized sampling process is developed to address possible deviations from the reverse Markov chain of DGMs.
arXiv Detail & Related papers (2023-08-27T07:57:25Z) - Reference-Free Isotropic 3D EM Reconstruction using Diffusion Models [8.590026259176806]
We propose a diffusion-model-based framework that overcomes the limitations of requiring reference data or prior knowledge about the degradation process.
Our approach utilizes 2D diffusion models to consistently reconstruct 3D volumes and is well-suited for highly downsampled data.
arXiv Detail & Related papers (2023-08-03T07:57:02Z) - T1: Scaling Diffusion Probabilistic Fields to High-Resolution on Unified
Visual Modalities [69.16656086708291]
Diffusion Probabilistic Field (DPF) models the distribution of continuous functions defined over metric spaces.
We propose a new model comprising of a view-wise sampling algorithm to focus on local structure learning.
The model can be scaled to generate high-resolution data while unifying multiple modalities.
arXiv Detail & Related papers (2023-05-24T03:32:03Z) - Learning Versatile 3D Shape Generation with Improved AR Models [91.87115744375052]
Auto-regressive (AR) models have achieved impressive results in 2D image generation by modeling joint distributions in the grid space.
We propose the Improved Auto-regressive Model (ImAM) for 3D shape generation, which applies discrete representation learning based on a latent vector instead of volumetric grids.
arXiv Detail & Related papers (2023-03-26T12:03:18Z) - Unsupervised Learning of 3D Object Categories from Videos in the Wild [75.09720013151247]
We focus on learning a model from multiple views of a large collection of object instances.
We propose a new neural network design, called warp-conditioned ray embedding (WCR), which significantly improves reconstruction.
Our evaluation demonstrates performance improvements over several deep monocular reconstruction baselines on existing benchmarks.
arXiv Detail & Related papers (2021-03-30T17:57:01Z) - Physical model simulator-trained neural network for computational 3D
phase imaging of multiple-scattering samples [1.112751058850223]
We develop a new model-based data normalization pre-processing procedure for homogenizing the sample contrast.
We demonstrate this framework's capabilities on experimental measurements of epithelial buccal cells and Caenorhabditis elegans worms.
arXiv Detail & Related papers (2021-03-29T17:43:56Z) - Secrets of 3D Implicit Object Shape Reconstruction in the Wild [92.5554695397653]
Reconstructing high-fidelity 3D objects from sparse, partial observation is crucial for various applications in computer vision, robotics, and graphics.
Recent neural implicit modeling methods show promising results on synthetic or dense datasets.
But, they perform poorly on real-world data that is sparse and noisy.
This paper analyzes the root cause of such deficient performance of a popular neural implicit model.
arXiv Detail & Related papers (2021-01-18T03:24:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.