Three-dimensional Microstructural Image Synthesis from 2D Backscattered Electron Image of Cement Paste
- URL: http://arxiv.org/abs/2204.01645v2
- Date: Thu, 11 Jul 2024 05:49:21 GMT
- Title: Three-dimensional Microstructural Image Synthesis from 2D Backscattered Electron Image of Cement Paste
- Authors: Xin Zhao, Lin Wang, Qinfei Li, Heng Chen, Shuangrong Liu, Pengkun Hou, Xu Wu, Jianfeng Yuan, Haozhong Gao, Bo Yang
- Abstract summary: A framework (CEM3DMG) is designed to synthesize 3D images by learning microstructural information from a 2D backscattered electron (BSE) image.
Visual observation confirms that the generated 3D images exhibit microstructural features similar to those of the 2D image, including pore and particle morphology.
- Score: 10.632881687161762
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a deep learning-based method for generating 3D microstructures from a single two-dimensional (2D) image, capable of producing high-quality, realistic 3D images at low cost. In the method, a framework (CEM3DMG) is designed to synthesize 3D images by learning microstructural information from a 2D backscattered electron (BSE) image. Experimental results show that CEM3DMG can generate realistic 3D images of arbitrary size with a resolution of 0.47 $\mu m$ per pixel. Visual observation confirms that the generated 3D images exhibit microstructural features similar to those of the 2D image, including pore and particle morphology. Furthermore, quantitative analysis reveals that these 3D microstructures closely match the real 2D microstructure in terms of gray-level histogram, phase proportions, and pore size distribution.
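The quantitative checks named in the abstract (gray-level histogram, phase proportions, pore size distribution) are standard image statistics. Below is a minimal sketch of the first two in Python, assuming 8-bit images as numpy arrays; the gray-level thresholds separating pores and unreacted clinker are illustrative stand-ins, not the paper's actual segmentation values.

```python
import numpy as np

def gray_histogram(img: np.ndarray) -> np.ndarray:
    """Normalized 256-bin gray-level histogram of an 8-bit image or volume."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    return hist / hist.sum()

def phase_proportions(img: np.ndarray, pore_max=60, clinker_min=180) -> dict:
    """Per-phase pixel fractions from illustrative gray-level thresholds
    (dark = pores, bright = unreacted clinker, the rest = hydration products)."""
    pores = float(np.mean(img <= pore_max))
    clinker = float(np.mean(img >= clinker_min))
    return {"pores": pores,
            "hydration products": 1.0 - pores - clinker,
            "clinker": clinker}

# Stand-ins for the real 2D BSE image and a generated 3D volume.
real_2d = np.random.randint(0, 256, (1024, 1024), dtype=np.uint8)
fake_3d = np.random.randint(0, 256, (256, 256, 256), dtype=np.uint8)

# L1 distance between normalized histograms; 0 means identical distributions.
hist_gap = np.abs(gray_histogram(real_2d) - gray_histogram(fake_3d)).sum()
print(hist_gap, phase_proportions(real_2d), phase_proportions(fake_3d))
```

Comparing every generated slice against the real BSE image with statistics like these is one concrete form the paper's quantitative analysis could take.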
Related papers
- GeoLRM: Geometry-Aware Large Reconstruction Model for High-Quality 3D Gaussian Generation [65.33726478659304]
We introduce the Geometry-Aware Large Reconstruction Model (GeoLRM), an approach that predicts high-quality 3D assets represented by 512k Gaussians from 21 input images using only 11 GB of GPU memory.
Previous works neglect the inherent sparsity of 3D structure and do not exploit explicit geometric relationships between 3D points and 2D images.
GeoLRM tackles these issues by incorporating a novel 3D-aware transformer structure that directly processes 3D points and uses deformable cross-attention mechanisms.
arXiv Detail & Related papers (2024-06-21T17:49:31Z)
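The deformable cross-attention GeoLRM is said to use is only named above, not specified. Below is a minimal sketch of the generic operation, sampling 2D image features at a 3D token's projected location plus learned offsets, assuming PyTorch; the class, the offset scale, and the single-head design are illustrative assumptions, not GeoLRM's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableCrossAttention(nn.Module):
    """Toy single-head variant: each 3D token samples K locations around its
    2D projection and mixes the sampled features with learned weights."""
    def __init__(self, dim: int, n_points: int = 4):
        super().__init__()
        self.offsets = nn.Linear(dim, n_points * 2)  # per-token 2D offsets
        self.weights = nn.Linear(dim, n_points)      # per-sample mixing weights
        self.n_points = n_points

    def forward(self, tokens, feat_map, proj_xy):
        # tokens: (B, N, C) 3D-point features; feat_map: (B, C, H, W) image
        # features; proj_xy: (B, N, 2) projections in [-1, 1] coordinates.
        B, N, C = tokens.shape
        off = self.offsets(tokens).view(B, N, self.n_points, 2).tanh() * 0.1
        grid = (proj_xy.unsqueeze(2) + off).view(B, N * self.n_points, 1, 2)
        feats = F.grid_sample(feat_map, grid, align_corners=False)  # (B,C,N*K,1)
        feats = feats.squeeze(-1).permute(0, 2, 1).reshape(B, N, self.n_points, C)
        w = self.weights(tokens).softmax(dim=-1).unsqueeze(-1)      # (B,N,K,1)
        return tokens + (w * feats).sum(dim=2)                      # (B,N,C)

tokens = torch.randn(1, 128, 64)
out = DeformableCrossAttention(64)(tokens, torch.randn(1, 64, 32, 32),
                                   torch.rand(1, 128, 2) * 2 - 1)  # (1, 128, 64)
```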
- LAM3D: Large Image-Point-Cloud Alignment Model for 3D Reconstruction from Single Image [64.94932577552458]
Large Reconstruction Models have made significant strides in the realm of automated 3D content generation from single or multiple input images.
Despite their success, these models often produce 3D meshes with geometric inaccuracies, stemming from the inherent challenges of deducing 3D shapes solely from image data.
We introduce a novel framework, the Large Image and Point Cloud Alignment Model (LAM3D), which utilizes 3D point cloud data to enhance the fidelity of generated 3D meshes.
arXiv Detail & Related papers (2024-05-24T15:09:12Z)
- What You See is What You GAN: Rendering Every Pixel for High-Fidelity Geometry in 3D GANs [82.3936309001633]
3D-aware Generative Adversarial Networks (GANs) have shown remarkable progress in learning to generate multi-view-consistent images and 3D geometries.
Yet, the significant memory and computational costs of dense sampling in volume rendering have forced 3D GANs to adopt patch-based training or employ low-resolution rendering with post-processing 2D super resolution.
We propose techniques to scale neural volume rendering to the much higher resolution of native 2D images, thereby resolving fine-grained 3D geometry with unprecedented detail.
arXiv Detail & Related papers (2024-01-04T18:50:38Z)
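The cost referred to above comes from compositing many samples along every camera ray. Below is a minimal sketch of the standard volume-rendering quadrature that such 3D GANs (and the NeRF-based paper that follows) evaluate, assuming PyTorch tensors of per-ray densities, colors, and sample depths.

```python
import torch

def composite(sigma, rgb, t):
    """Alpha-composite per-ray samples into pixel colors.
    sigma: (R, S) densities; rgb: (R, S, 3) colors; t: (R, S) sample depths."""
    delta = torch.diff(t, dim=-1)                        # segment lengths
    delta = torch.cat([delta, torch.full_like(delta[:, :1], 1e10)], dim=-1)
    alpha = 1.0 - torch.exp(-sigma * delta)              # per-sample opacity
    # Transmittance: probability a ray reaches each sample unoccluded.
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1)[:, :-1]
    weights = alpha * trans                              # (R, S)
    return (weights.unsqueeze(-1) * rgb).sum(dim=1)      # (R, 3) pixel colors

rays, samples = 4096, 64
colors = composite(torch.rand(rays, samples),
                   torch.rand(rays, samples, 3),
                   torch.linspace(0.1, 4.0, samples).expand(rays, samples))
```

Evaluating this at full image resolution, rather than on patches or at low resolution, is exactly the scaling problem the paper above addresses.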
- Likelihood-Based Generative Radiance Field with Latent Space Energy-Based Model for 3D-Aware Disentangled Image Representation [43.41596483002523]
We propose a likelihood-based top-down 3D-aware 2D image generative model that incorporates 3D representation via Neural Radiance Fields (NeRF) and 2D imaging process via differentiable volume rendering.
Experiments on several benchmark datasets demonstrate that the NeRF-LEBM can infer 3D object structures from 2D images, generate 2D images with novel views and objects, learn from incomplete 2D images, and learn from 2D images with known or unknown camera poses.
arXiv Detail & Related papers (2023-04-16T23:44:41Z)
- CC3D: Layout-Conditioned Generation of Compositional 3D Scenes [49.281006972028194]
We introduce CC3D, a conditional generative model that synthesizes complex 3D scenes conditioned on 2D semantic scene layouts.
Our evaluations on synthetic 3D-FRONT and real-world KITTI-360 datasets demonstrate that our model generates scenes of improved visual and geometric quality.
arXiv Detail & Related papers (2023-03-21T17:59:02Z)
- MicroLib: A library of 3D microstructures generated from 2D micrographs using SliceGAN [0.0]
3D microstructural datasets are commonly used to define the geometrical domains used in finite element modelling.
A machine learning method, SliceGAN, was developed to statistically generate 3D microstructural datasets of arbitrary size.
We present the results from applying SliceGAN to 87 different microstructures, ranging from biological materials to high-strength steels.
arXiv Detail & Related papers (2022-10-12T19:13:28Z)
- XDGAN: Multi-Modal 3D Shape Generation in 2D Space [60.46777591995821]
We propose a novel method to convert 3D shapes into compact 1-channel geometry images and leverage StyleGAN3 and image-to-image translation networks to generate 3D objects in 2D space.
The generated geometry images are quick to convert to 3D meshes, enabling real-time 3D object synthesis, visualization and interactive editing.
We show both quantitatively and qualitatively that our method is highly effective at various tasks such as 3D shape generation, single view reconstruction and shape manipulation, while being significantly faster and more flexible compared to recent 3D generative models.
arXiv Detail & Related papers (2022-10-06T15:54:01Z)
- Clean Implicit 3D Structure from Noisy 2D STEM Images [19.04251929587417]
We show that a differentiable image formation model for STEM can jointly learn a model of 2D sensor noise and an implicit 3D model.
We show that the combination of these models can disentangle 3D signal from noise without supervision, while also outperforming several baselines on synthetic and real data.
arXiv Detail & Related papers (2022-03-29T11:00:28Z)
- Accelerate 3D Object Processing via Spectral Layout [1.52292571922932]
We propose to embed the essential information in a 3D object into 2D space via spectral layout.
The proposed method achieves high-quality 2D representations of 3D objects, which makes it possible to process 3D objects with 2D-based methods.
arXiv Detail & Related papers (2021-10-25T03:18:37Z)
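Spectral layout, in its classic form, places graph nodes at coordinates given by the low-frequency eigenvectors of the graph Laplacian. Below is a minimal sketch of a 2D spectral embedding in numpy; how the paper above maps a 3D object's essential information onto such a layout is not specified here, so the adjacency input is a generic stand-in.

```python
import numpy as np

def spectral_layout_2d(adj: np.ndarray) -> np.ndarray:
    """2D node coordinates from the two smallest nonzero-eigenvalue
    eigenvectors of the graph Laplacian L = D - A."""
    lap = np.diag(adj.sum(axis=1)) - adj
    _, vecs = np.linalg.eigh(lap)  # eigenvalues in ascending order
    return vecs[:, 1:3]            # skip the constant eigenvector

# Toy graph: a 4-cycle; in practice, e.g., a mesh's vertex adjacency.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
print(spectral_layout_2d(adj))  # the cycle lands on the corners of a square
```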
- Generating 3D structures from a 2D slice with GAN-based dimensionality expansion [0.0]
Generative adversarial networks (GANs) can be trained to generate 3D image data, which is useful for design optimisation.
We introduce a generative adversarial network architecture, SliceGAN, which is able to synthesise high fidelity 3D datasets using a single representative 2D image.
arXiv Detail & Related papers (2021-02-10T18:46:17Z)
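The core trick in SliceGAN, per the abstract above, is to train a 3D generator against 2D data by slicing the generated volume and scoring the slices with a 2D discriminator. Below is a minimal sketch of that slicing step, assuming PyTorch and a cubic generated volume; the function is an illustration, not SliceGAN's actual implementation.

```python
import torch

def all_slices(volume: torch.Tensor) -> torch.Tensor:
    """Flatten a generated volume (B, C, D, H, W) into a batch of 2D slices
    along all three axes, so a 2D discriminator can score them against
    real 2D training images. Assumes a cubic volume (D == H == W)."""
    B, C, D, H, W = volume.shape
    z = volume.permute(0, 2, 1, 3, 4).reshape(B * D, C, H, W)  # depth slices
    y = volume.permute(0, 3, 1, 2, 4).reshape(B * H, C, D, W)  # height slices
    x = volume.permute(0, 4, 1, 2, 3).reshape(B * W, C, D, H)  # width slices
    return torch.cat([z, y, x], dim=0)

fake = torch.randn(2, 1, 64, 64, 64)  # stand-in for generator output
print(all_slices(fake).shape)         # torch.Size([384, 1, 64, 64])
```

For isotropic microstructures a single discriminator can score slices from all three axes; anisotropic materials would need separate discriminators per orientation.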
- Towards Realistic 3D Embedding via View Alignment [53.89445873577063]
This paper presents an innovative View Alignment GAN (VA-GAN) that composes new images by embedding 3D models into 2D background images realistically and automatically.
VA-GAN consists of a texture generator and a differential discriminator that are inter-connected and end-to-end trainable.
arXiv Detail & Related papers (2020-07-14T14:45:00Z)