Multi-plane denoising diffusion-based dimensionality expansion for
2D-to-3D reconstruction of microstructures with harmonized sampling
- URL: http://arxiv.org/abs/2308.14035v2
- Date: Sat, 23 Sep 2023 15:28:10 GMT
- Title: Multi-plane denoising diffusion-based dimensionality expansion for
2D-to-3D reconstruction of microstructures with harmonized sampling
- Authors: Kang-Hyun Lee and Gun Jin Yun
- Abstract summary: This study proposes a novel framework for 2D-to-3D reconstruction of microstructures called Micro3Diff.
Specifically, this approach solely requires pre-trained DGMs for the generation of 2D samples.
A harmonized sampling process is developed to address possible deviations from the reverse Markov chain of DGMs.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Acquiring reliable microstructure datasets is a pivotal step toward the
systematic design of materials with the aid of integrated computational
materials engineering (ICME) approaches. However, obtaining three-dimensional
(3D) microstructure datasets is often challenging due to high experimental
costs or technical limitations, while acquiring two-dimensional (2D)
micrographs is comparatively easier. To deal with this issue, this study
proposes a novel framework for 2D-to-3D reconstruction of microstructures
called Micro3Diff using diffusion-based generative models (DGMs). Specifically,
this approach solely requires pre-trained DGMs for the generation of 2D
samples, and dimensionality expansion (2D-to-3D) takes place only during the
generation process (i.e., reverse diffusion process). The proposed framework
incorporates a new concept referred to as multi-plane denoising diffusion,
which transforms noisy samples (i.e., latent variables) from different planes
into the data structure while maintaining spatial connectivity in 3D space.
Furthermore, a harmonized sampling process is developed to address possible
deviations from the reverse Markov chain of DGMs during the dimensionality
expansion. Combined, we demonstrate the feasibility of Micro3Diff in
reconstructing 3D samples with connected slices that maintain morphological
equivalence to the original 2D images. To validate the performance of
Micro3Diff, various types of microstructures (synthetic and experimentally
observed) are reconstructed, and the quality of the generated samples is
assessed both qualitatively and quantitatively. The successful reconstruction
outcomes inspire the potential utilization of Micro3Diff in upcoming ICME
applications while achieving a breakthrough in comprehending and manipulating
the latent space of DGMs.
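The multi-plane denoising idea can be sketched as a toy reverse process: at each step, every 2D slice of the 3D volume is denoised along the three orthogonal planes with a pre-trained 2D denoiser, and the per-plane estimates are merged so the slices stay spatially connected. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation: `denoise_2d` is a placeholder for a pre-trained 2D DGM, and the averaging step is a greatly simplified stand-in for the harmonized sampling process.

```python
import numpy as np

def denoise_2d(slice_2d, t):
    """Placeholder for a pre-trained 2D DGM denoiser (assumption:
    it returns a slightly less noisy estimate of the slice at step t)."""
    return slice_2d * (1.0 - 1.0 / (t + 2))

def multi_plane_reverse_step(volume, t, rng):
    """One reverse step: denoise all 2D slices of the 3D volume along the
    xy, xz, and yz planes, then merge the three per-plane estimates by
    averaging (a crude stand-in for harmonized sampling) so spatial
    connectivity is preserved along every axis."""
    estimates = []
    for axis in range(3):
        moved = np.moveaxis(volume, axis, 0)          # slices perpendicular to `axis`
        denoised = np.stack([denoise_2d(s, t) for s in moved])
        estimates.append(np.moveaxis(denoised, 0, axis))
    merged = np.mean(estimates, axis=0)
    # Re-inject a small amount of noise on all but the final step,
    # mimicking the stochasticity of a reverse Markov chain.
    noise = rng.standard_normal(volume.shape) * (0.1 if t > 0 else 0.0)
    return merged + noise

def reconstruct_3d(shape=(16, 16, 16), steps=10, seed=0):
    """Run the toy reverse diffusion from pure noise to a 3D sample."""
    rng = np.random.default_rng(seed)
    volume = rng.standard_normal(shape)               # start from Gaussian noise
    for t in reversed(range(steps)):
        volume = multi_plane_reverse_step(volume, t, rng)
    return volume

vol = reconstruct_3d()
print(vol.shape)  # (16, 16, 16)
```

Note that no 3D network is ever trained here: only a 2D denoiser is applied, and the 2D-to-3D expansion happens entirely inside the generation loop, which is the key property the abstract claims.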
Related papers
- GeoGS3D: Single-view 3D Reconstruction via Geometric-aware Diffusion Model and Gaussian Splatting [81.03553265684184]
We introduce GeoGS3D, a framework for reconstructing detailed 3D objects from single-view images.
We propose a novel metric, Gaussian Divergence Significance (GDS), to prune unnecessary operations during optimization.
Experiments demonstrate that GeoGS3D generates images with high consistency across views and reconstructs high-quality 3D objects.
arXiv Detail & Related papers (2024-03-15T12:24:36Z)
- A Generative Machine Learning Model for Material Microstructure 3D Reconstruction and Performance Evaluation [4.169915659794567]
The dimensional extension from 2D to 3D is viewed as a highly challenging inverse problem from the current technological perspective.
A novel generative model that integrates the multiscale properties of U-Net with the generative capabilities of GANs has been proposed.
The model's accuracy is further improved by combining the image regularization loss with the Wasserstein distance loss.
arXiv Detail & Related papers (2024-02-24T13:42:34Z)
- Denoising diffusion-based synthetic generation of three-dimensional (3D) anisotropic microstructures from two-dimensional (2D) micrographs [0.0]
We present a framework for reconstruction of anisotropic microstructures based on conditional diffusion-based generative models (DGMs).
The proposed framework involves spatial connection of multiple 2D conditional DGMs, each trained to generate 2D microstructure samples for three different planes.
The results demonstrate that the framework is capable of reproducing not only the statistical distribution of material phases but also the material properties in 3D space.
arXiv Detail & Related papers (2023-12-13T01:36:37Z)
- NeuSD: Surface Completion with Multi-View Text-to-Image Diffusion [56.98287481620215]
We present a novel method for 3D surface reconstruction from multiple images where only a part of the object of interest is captured.
Our approach builds on two recent developments: surface reconstruction using neural radiance fields for the reconstruction of the visible parts of the surface, and guidance of pre-trained 2D diffusion models in the form of Score Distillation Sampling (SDS) to complete the shape in unobserved regions in a plausible manner.
arXiv Detail & Related papers (2023-12-07T19:30:55Z)
- StableDreamer: Taming Noisy Score Distillation Sampling for Text-to-3D [88.66678730537777]
We present StableDreamer, a methodology incorporating three advances.
First, we formalize the equivalence of the SDS generative prior and a simple supervised L2 reconstruction loss.
Second, our analysis shows that while image-space diffusion contributes to geometric precision, latent-space diffusion is crucial for vivid color rendition.
arXiv Detail & Related papers (2023-12-02T02:27:58Z)
- Using convolutional neural networks for stereological characterization of 3D hetero-aggregates based on synthetic STEM data [0.0]
A parametric 3D model is presented, from which a wide spectrum of virtual hetero-aggregates can be generated.
The virtual structures are passed to a physics-based simulation tool in order to generate virtual scanning transmission electron microscopy (STEM) images.
Convolutional neural networks are trained to predict 3D structures of hetero-aggregates from 2D STEM images.
arXiv Detail & Related papers (2023-10-27T22:49:08Z)
- CryoFormer: Continuous Heterogeneous Cryo-EM Reconstruction using Transformer-based Neural Representations [49.49939711956354]
Cryo-electron microscopy (cryo-EM) allows for the high-resolution reconstruction of 3D structures of proteins and other biomolecules.
It is still challenging to reconstruct the continuous motions of 3D structures from noisy and randomly oriented 2D cryo-EM images.
We propose CryoFormer, a new approach for continuous heterogeneous cryo-EM reconstruction.
arXiv Detail & Related papers (2023-03-28T18:59:17Z)
- MicroLib: A library of 3D microstructures generated from 2D micrographs using SliceGAN [0.0]
3D microstructural datasets are commonly used to define the geometrical domains used in finite element modelling.
Machine learning method, SliceGAN, was developed to statistically generate 3D microstructural datasets of arbitrary size.
We present the results from applying SliceGAN to 87 different microstructures, ranging from biological materials to high-strength steels.
arXiv Detail & Related papers (2022-10-12T19:13:28Z)
- BNV-Fusion: Dense 3D Reconstruction using Bi-level Neural Volume Fusion [85.24673400250671]
We present Bi-level Neural Volume Fusion (BNV-Fusion), which leverages recent advances in neural implicit representations and neural rendering for dense 3D reconstruction.
In order to incrementally integrate new depth maps into a global neural implicit representation, we propose a novel bi-level fusion strategy.
We evaluate the proposed method on multiple datasets quantitatively and qualitatively, demonstrating a significant improvement over existing methods.
arXiv Detail & Related papers (2022-04-03T19:33:09Z)
- 3D Reconstruction of Curvilinear Structures with Stereo Matching Deep Convolutional Neural Networks [52.710012864395246]
We propose a fully automated pipeline for both detection and matching of curvilinear structures in stereo pairs.
We mainly focus on 3D reconstruction of dislocations from stereo pairs of TEM images.
arXiv Detail & Related papers (2021-10-14T23:05:47Z)
- Generating 3D structures from a 2D slice with GAN-based dimensionality expansion [0.0]
Generative adversarial networks (GANs) can be trained to generate 3D image data, which is useful for design optimisation.
We introduce a generative adversarial network architecture, SliceGAN, which is able to synthesise high fidelity 3D datasets using a single representative 2D image.
arXiv Detail & Related papers (2021-02-10T18:46:17Z)
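The core of SliceGAN-style dimensionality expansion is that a 3D generator can be trained against ordinary 2D micrographs: the generated volume is cut into 2D slices along all three axes, and a standard 2D discriminator compares those slices with real images. The snippet below is a simplified, hypothetical sketch of only that slicing step, not the published SliceGAN code.

```python
import numpy as np

def extract_training_slices(volume_3d):
    """Cut a generated 3D volume into 2D slices along the x, y, and z
    axes so a 2D discriminator can compare them with real micrographs
    (SliceGAN-style trick, greatly simplified)."""
    slices = []
    for axis in range(3):
        moved = np.moveaxis(volume_3d, axis, 0)  # slices perpendicular to `axis`
        slices.extend(moved)
    return np.stack(slices)

# A stand-in for a generator output; a cubic volume yields
# side * 3 square slices of shape (side, side).
fake_volume = np.random.default_rng(0).standard_normal((8, 8, 8))
batch = extract_training_slices(fake_volume)
print(batch.shape)  # (24, 8, 8)
```

Because the discriminator only ever sees 2D images, a single representative 2D micrograph is enough to supervise the 3D generator, which is what makes the 2D-to-3D expansion possible.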
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.