MicroLib: A library of 3D microstructures generated from 2D micrographs using SliceGAN
- URL: http://arxiv.org/abs/2210.06541v1
- Date: Wed, 12 Oct 2022 19:13:28 GMT
- Title: MicroLib: A library of 3D microstructures generated from 2D micrographs using SliceGAN
- Authors: Steve Kench, Isaac Squires, Amir Dahari, Samuel J Cooper
- Abstract summary: 3D microstructural datasets are commonly used to define the geometrical domains used in finite element modelling.
A machine learning method, SliceGAN, was developed to statistically generate 3D microstructural datasets of arbitrary size.
We present the results from applying SliceGAN to 87 different microstructures, ranging from biological materials to high-strength steels.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: 3D microstructural datasets are commonly used to define the geometrical
domains used in finite element modelling. This has proven a useful tool for
understanding how complex material systems behave under applied stresses,
temperatures and chemical conditions. However, 3D imaging of materials is
challenging for a number of reasons, including limited field of view, low
resolution and difficult sample preparation. Recently, a machine learning
method, SliceGAN, was developed to statistically generate 3D microstructural
datasets of arbitrary size using a single 2D input slice as training data. In
this paper, we present the results from applying SliceGAN to 87 different
microstructures, ranging from biological materials to high-strength steels. To
demonstrate the accuracy of the synthetic volumes created by SliceGAN, we
compare three microstructural properties between the 2D training data and 3D
generations, which show good agreement. This new microstructure library both
provides valuable 3D microstructures that can be used in models, and also
demonstrates the broad applicability of the SliceGAN algorithm.
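To make the 2D-versus-3D comparison concrete, below is a minimal sketch of one such statistical check: computing the phase volume fraction of a segmented 2D training slice and of a synthetic 3D volume, then comparing the two. The function name, tolerance, and random placeholder data are illustrative assumptions, not the paper's actual metrics or pipeline.

```python
import numpy as np

def volume_fraction(img: np.ndarray, phase: int) -> float:
    """Fraction of pixels/voxels carrying a given phase label.

    Works for both 2D slices and 3D volumes, since np.mean
    averages over all axes.
    """
    return float(np.mean(img == phase))

# Illustrative placeholder data: a segmented 2D micrograph (training
# slice) and a generated 3D volume, both with ~30% of phase 1.
rng = np.random.default_rng(seed=0)
slice_2d = (rng.random((128, 128)) < 0.3).astype(np.uint8)
volume_3d = (rng.random((64, 64, 64)) < 0.3).astype(np.uint8)

vf_2d = volume_fraction(slice_2d, phase=1)
vf_3d = volume_fraction(volume_3d, phase=1)

# A synthetic volume is statistically faithful if metrics like this
# agree with the 2D training data to within sampling error.
print(f"2D training slice: {vf_2d:.3f}, 3D generation: {vf_3d:.3f}")
assert abs(vf_2d - vf_3d) < 0.05
```

The same pattern extends to other stationary descriptors (e.g. two-point correlation or specific surface area): compute the statistic on the training micrograph and on slices or subvolumes of the generation, and check agreement.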
Related papers
- GeoLRM: Geometry-Aware Large Reconstruction Model for High-Quality 3D Gaussian Generation [65.33726478659304]
We introduce the Geometry-Aware Large Reconstruction Model (GeoLRM), an approach that can predict high-quality assets with 512k Gaussians from 21 input images using only 11 GB of GPU memory.
Previous works neglect the inherent sparsity of 3D structure and do not utilize explicit geometric relationships between 3D and 2D images.
GeoLRM tackles these issues by incorporating a novel 3D-aware transformer structure that directly processes 3D points and uses deformable cross-attention mechanisms.
arXiv Detail & Related papers (2024-06-21T17:49:31Z)
- A Generative Machine Learning Model for Material Microstructure 3D Reconstruction and Performance Evaluation [4.169915659794567]
The dimensional extension from 2D to 3D is viewed as a highly challenging inverse problem from the current technological perspective.
A novel generative model that integrates the multiscale properties of U-net with the generative capabilities of GAN has been proposed.
The model's accuracy is further improved by combining the image regularization loss with the Wasserstein distance loss.
arXiv Detail & Related papers (2024-02-24T13:42:34Z)
- Multi-plane denoising diffusion-based dimensionality expansion for 2D-to-3D reconstruction of microstructures with harmonized sampling [0.0]
This study proposes a novel framework for 2D-to-3D reconstruction of microstructures called Micro3Diff.
Specifically, this approach solely requires pre-trained DGMs for the generation of 2D samples.
A harmonized sampling process is developed to address possible deviations from the reverse Markov chain of DGMs.
arXiv Detail & Related papers (2023-08-27T07:57:25Z)
- Automated 3D Pre-Training for Molecular Property Prediction [54.15788181794094]
We propose a novel 3D pre-training framework (dubbed 3D PGT).
It pre-trains a model on 3D molecular graphs, and then fine-tunes it on molecular graphs without 3D structures.
Extensive experiments on 2D molecular graphs are conducted to demonstrate the accuracy, efficiency and generalization ability of the proposed 3D PGT.
arXiv Detail & Related papers (2023-06-13T14:43:13Z)
- Three-dimensional microstructure generation using generative adversarial neural networks in the context of continuum micromechanics [77.34726150561087]
This work proposes a generative adversarial network tailored towards three-dimensional microstructure generation.
The lightweight algorithm is able to learn the underlying properties of the material from a single micro-CT scan without the need for explicit descriptors.
arXiv Detail & Related papers (2022-05-31T13:26:51Z)
- Three-dimensional Microstructural Image Synthesis from 2D Backscattered Electron Image of Cement Paste [10.632881687161762]
A framework (CEM3DMG) is designed to synthesize 3D images by learning microstructural information from a 2D backscattered electron (BSE) image.
Visual observation confirms that the generated 3D images exhibit microstructural features similar to those of the 2D images, including pore and particle morphology.
arXiv Detail & Related papers (2022-04-04T16:50:03Z)
- 3D Reconstruction of Curvilinear Structures with Stereo Matching Deep Convolutional Neural Networks [52.710012864395246]
We propose a fully automated pipeline for both detection and matching of curvilinear structures in stereo pairs.
We mainly focus on 3D reconstruction of dislocations from stereo pairs of TEM images.
arXiv Detail & Related papers (2021-10-14T23:05:47Z)
- Generating 3D structures from a 2D slice with GAN-based dimensionality expansion [0.0]
Generative adversarial networks (GANs) can be trained to generate 3D image data, which is useful for design optimisation.
We introduce a generative adversarial network architecture, SliceGAN, which is able to synthesise high fidelity 3D datasets using a single representative 2D image.
arXiv Detail & Related papers (2021-02-10T18:46:17Z)
- Generative VoxelNet: Learning Energy-Based Models for 3D Shape Synthesis and Analysis [143.22192229456306]
This paper proposes a deep 3D energy-based model to represent volumetric shapes.
The benefits of the proposed model are six-fold.
Experiments demonstrate that the proposed model can generate high-quality 3D shape patterns.
arXiv Detail & Related papers (2020-12-25T06:09:36Z)
- 3DMaterialGAN: Learning 3D Shape Representation from Latent Space for Materials Science Applications [7.449993399792031]
3DMaterialGAN is capable of recognizing and synthesizing individual grains whose morphology conforms to a given 3D polycrystalline material microstructure.
We show that this method performs comparably or better than state-of-the-art on benchmark annotated 3D datasets.
This framework lays the foundation for the recognition and synthesis of polycrystalline material microstructures.
arXiv Detail & Related papers (2020-07-27T21:55:16Z)
- Learning Local Neighboring Structure for Robust 3D Shape Representation [143.15904669246697]
Representation learning for 3D meshes is important in many computer vision and graphics applications.
We propose a local structure-aware anisotropic convolutional operation (LSA-Conv).
Our model produces significant improvement in 3D shape reconstruction compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-04-21T13:40:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.