3D microstructural generation from 2D images of cement paste using generative adversarial networks
- URL: http://arxiv.org/abs/2204.01645v3
- Date: Mon, 18 Nov 2024 02:56:06 GMT
- Title: 3D microstructural generation from 2D images of cement paste using generative adversarial networks
- Authors: Xin Zhao, Lin Wang, Qinfei Li, Heng Chen, Shuangrong Liu, Pengkun Hou, Jiayuan Ye, Yan Pei, Xu Wu, Jianfeng Yuan, Haozhong Gao, Bo Yang
- Abstract summary: This paper proposes a generative adversarial network (GAN)-based method for generating 3D microstructures from a single two-dimensional (2D) image.
In the method, a framework is designed to synthesize 3D images by learning microstructural information from a 2D cross-sectional image.
Visual observation confirms that the generated 3D images exhibit similar microstructural features to the 2D images, including similar pore distribution and particle morphology.
- Score: 13.746290854403874
- License:
- Abstract: Establishing a realistic three-dimensional (3D) microstructure is a crucial step for studying the microstructure development of hardened cement pastes. However, acquiring 3D microstructural images for cement often involves high costs and quality compromises. This paper proposes a generative adversarial network (GAN)-based method for generating 3D microstructures from a single two-dimensional (2D) image, capable of producing high-quality, realistic 3D images at low cost. In the method, a framework (CEM3DMG) is designed to synthesize 3D images by learning microstructural information from a 2D cross-sectional image. Experimental results show that CEM3DMG can generate realistic 3D images of large size. Visual observation confirms that the generated 3D images exhibit microstructural features similar to those of the 2D images, including similar pore distribution and particle morphology. Furthermore, quantitative analysis reveals that the reconstructed 3D microstructures closely match the real 2D microstructure in terms of gray level histogram, phase proportions, and pore size distribution. The source code for CEM3DMG is available in the GitHub repository at: https://github.com/NBICLAB/CEM3DMG.
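The quantitative checks named in the abstract (gray level histogram, phase proportions, pore size distribution) are easy to prototype. Below is a minimal sketch, not the authors' released evaluation code: the segmentation thresholds, helper names, and stand-in arrays are illustrative assumptions, and real work would calibrate the thresholds against the micrograph.

```python
# Sketch of histogram- and phase-based comparison between a real 2D
# cross-section and a generated 3D volume. Stand-in data replaces real
# micrographs and CEM3DMG output; thresholds are assumptions, not values
# from the paper.
import numpy as np

def gray_level_histogram(img, bins=256):
    """Normalized gray level histogram of a 2D image or 3D volume."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255), density=True)
    return hist

def phase_proportions(img, pore_thresh=60, solid_thresh=160):
    """Split pixels/voxels into pore, hydration product, and unhydrated
    phases by gray level. Both thresholds are hypothetical."""
    pore = np.mean(img < pore_thresh)
    hydrates = np.mean((img >= pore_thresh) & (img < solid_thresh))
    unhydrated = np.mean(img >= solid_thresh)
    return pore, hydrates, unhydrated

real_2d = np.random.randint(0, 256, (512, 512))      # stand-in real image
gen_3d = np.random.randint(0, 256, (128, 128, 128))  # stand-in generated volume

# L1 distance between normalized histograms: 0 means identical statistics.
diff = np.abs(gray_level_histogram(real_2d) - gray_level_histogram(gen_3d)).sum()
print("histogram L1 distance:", diff)
print("real phases:", phase_proportions(real_2d))
print("generated phases:", phase_proportions(gen_3d))
```

A pore size distribution check would proceed similarly, e.g. by labeling connected pore voxels and histogramming their equivalent diameters.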
Related papers
- DuoLift-GAN: Reconstructing CT from Single-view and Biplanar X-Rays with Generative Adversarial Networks [1.3812010983144802]
We introduce DuoLift Generative Adversarial Networks (DuoLift-GAN), a novel architecture with dual branches that independently elevate 2D images and their features into 3D representations.
These 3D outputs are merged into a unified 3D feature map and decoded into a complete 3D chest volume, enabling richer 3D information capture.
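Read literally, the dual-branch design can be sketched as below. This is our paraphrase of the summary, not the DuoLift-GAN code; LiftBranch, the stand-in encoder, and all tensor shapes are hypothetical.

```python
# Rough sketch of a dual-branch 2D-to-3D lift: one branch lifts the image
# itself, the other lifts intermediate 2D features, and the two 3D feature
# maps are merged and decoded into a volume. All modules are illustrative.
import torch
import torch.nn as nn

class LiftBranch(nn.Module):
    def __init__(self, out_ch=8, depth=16):
        super().__init__()
        self.out_ch, self.depth = out_ch, depth
        self.conv = nn.Conv2d(1, out_ch * depth, 3, padding=1)

    def forward(self, x):                 # x: (B, 1, H, W)
        b, _, h, w = x.shape
        f = self.conv(x)                  # (B, C*D, H, W)
        return f.view(b, self.out_ch, self.depth, h, w)  # (B, C, D, H, W)

encoder2d = nn.Conv2d(1, 1, 3, padding=1)   # stand-in 2D feature extractor
image_branch, feature_branch = LiftBranch(), LiftBranch()
decoder = nn.Conv3d(8, 1, 3, padding=1)     # decodes the merged 3D map

xray = torch.rand(2, 1, 32, 32)             # stand-in 2D X-ray batch
merged = image_branch(xray) + feature_branch(encoder2d(xray))  # unified 3D map
volume = decoder(merged)                    # (2, 1, 16, 32, 32) volume
print(volume.shape)
```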
arXiv Detail & Related papers (2024-11-12T17:11:18Z)
- GeoLRM: Geometry-Aware Large Reconstruction Model for High-Quality 3D Gaussian Generation [65.33726478659304]
We introduce the Geometry-Aware Large Reconstruction Model (GeoLRM), an approach that can predict high-quality assets with 512k Gaussians from 21 input images using only 11 GB of GPU memory.
Previous works neglect the inherent sparsity of 3D structure and do not utilize explicit geometric relationships between 3D and 2D images.
GeoLRM tackles these issues by incorporating a novel 3D-aware transformer structure that directly processes 3D points and uses deformable cross-attention mechanisms.
arXiv Detail & Related papers (2024-06-21T17:49:31Z)
- LAM3D: Large Image-Point-Cloud Alignment Model for 3D Reconstruction from Single Image [64.94932577552458]
Large Reconstruction Models have made significant strides in the realm of automated 3D content generation from single or multiple input images.
Despite their success, these models often produce 3D meshes with geometric inaccuracies, stemming from the inherent challenges of deducing 3D shapes solely from image data.
We introduce a novel framework, the Large Image and Point Cloud Alignment Model (LAM3D), which utilizes 3D point cloud data to enhance the fidelity of generated 3D meshes.
arXiv Detail & Related papers (2024-05-24T15:09:12Z)
- MicroLib: A library of 3D microstructures generated from 2D micrographs using SliceGAN [0.0]
3D microstructural datasets are commonly used to define the geometrical domains for finite element modelling.
The machine learning method SliceGAN was developed to statistically generate 3D microstructural datasets of arbitrary size.
We present results from applying SliceGAN to 87 different microstructures, ranging from biological materials to high-strength steels.
arXiv Detail & Related papers (2022-10-12T19:13:28Z)
- XDGAN: Multi-Modal 3D Shape Generation in 2D Space [60.46777591995821]
We propose a novel method to convert 3D shapes into compact 1-channel geometry images and leverage StyleGAN3 and image-to-image translation networks to generate 3D objects in 2D space.
The generated geometry images are quick to convert to 3D meshes, enabling real-time 3D object synthesis, visualization and interactive editing.
We show both quantitatively and qualitatively that our method is highly effective at various tasks such as 3D shape generation, single view reconstruction and shape manipulation, while being significantly faster and more flexible compared to recent 3D generative models.
arXiv Detail & Related papers (2022-10-06T15:54:01Z)
- Weakly Supervised Volumetric Image Segmentation with Deformed Templates [80.04326168716493]
We propose an approach that is truly weakly supervised in the sense that we only need to provide a sparse set of 3D points on the surface of target objects.
We show that it outperforms a more traditional approach to weak supervision in 3D at a reduced supervision cost.
arXiv Detail & Related papers (2021-06-07T22:09:34Z)
- Generating 3D structures from a 2D slice with GAN-based dimensionality expansion [0.0]
Generative adversarial networks (GANs) can be trained to generate 3D image data, which is useful for design optimisation.
We introduce a generative adversarial network architecture, SliceGAN, which is able to synthesise high fidelity 3D datasets using a single representative 2D image.
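The dimensionality-expansion trick, as the summary describes it, is to let a 2D discriminator judge 2D slices cut from the generator's 3D output, with real samples drawn as crops of the single training image. A minimal sketch of the slicing step follows; it is assumed from the summary rather than taken from the released SliceGAN code, and slices_along_axes is a hypothetical helper.

```python
# Cut a (B, C, D, H, W) volume into batches of 2D slices along all three
# axes, so a 2D discriminator can score them. Assumes a cubic volume
# (D == H == W) so the slice batches can be concatenated.
import torch

def slices_along_axes(volume):
    b, c, d, h, w = volume.shape
    z = volume.permute(0, 2, 1, 3, 4).reshape(b * d, c, h, w)  # slices along D
    y = volume.permute(0, 3, 1, 2, 4).reshape(b * h, c, d, w)  # slices along H
    x = volume.permute(0, 4, 1, 2, 3).reshape(b * w, c, d, h)  # slices along W
    return torch.cat([z, y, x], dim=0)

fake_volume = torch.rand(1, 1, 64, 64, 64)    # stand-in generator output
fake_slices = slices_along_axes(fake_volume)  # (192, 1, 64, 64)
# Each slice would be scored by a 2D discriminator trained on random
# 64x64 crops of the single representative micrograph.
print(fake_slices.shape)
```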
arXiv Detail & Related papers (2021-02-10T18:46:17Z)
- Hard Example Generation by Texture Synthesis for Cross-domain Shape Similarity Learning [97.56893524594703]
Image-based 3D shape retrieval (IBSR) aims to find the corresponding 3D shape of a given 2D image from a large 3D shape database.
Metric learning with adaptation techniques seems to be a natural solution to shape similarity learning.
We develop a geometry-focused multi-view metric learning framework empowered by texture synthesis.
arXiv Detail & Related papers (2020-10-23T08:52:00Z)
- Improved Modeling of 3D Shapes with Multi-view Depth Maps [48.8309897766904]
We present a general-purpose framework for modeling 3D shapes using CNNs.
Using just a single depth image, we can output a dense multi-view depth map representation of the 3D object.
arXiv Detail & Related papers (2020-09-07T17:58:27Z)
- 3DMaterialGAN: Learning 3D Shape Representation from Latent Space for Materials Science Applications [7.449993399792031]
3DMaterialGAN is capable of recognizing and synthesizing individual grains whose morphology conforms to a given 3D polycrystalline material microstructure.
We show that this method performs comparably to or better than the state of the art on benchmark annotated 3D datasets.
This framework lays the foundation for the recognition and synthesis of polycrystalline material microstructures.
arXiv Detail & Related papers (2020-07-27T21:55:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.