Generating 3D structures from a 2D slice with GAN-based dimensionality expansion
- URL: http://arxiv.org/abs/2102.07708v1
- Date: Wed, 10 Feb 2021 18:46:17 GMT
- Title: Generating 3D structures from a 2D slice with GAN-based dimensionality expansion
- Authors: Steve Kench, Samuel J. Cooper
- Abstract summary: Generative adversarial networks (GANs) can be trained to generate 3D image data, which is useful for design optimisation.
We introduce a generative adversarial network architecture, SliceGAN, which is able to synthesise high fidelity 3D datasets using a single representative 2D image.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative adversarial networks (GANs) can be trained to generate 3D image
data, which is useful for design optimisation. However, this conventionally
requires 3D training data, which is challenging to obtain. 2D imaging
techniques tend to be faster, higher resolution, better at phase identification
and more widely available. Here, we introduce a generative adversarial network
architecture, SliceGAN, which is able to synthesise high fidelity 3D datasets
using a single representative 2D image. This is especially relevant for the
task of material microstructure generation, as a cross-sectional micrograph can
contain sufficient information to statistically reconstruct 3D samples. Our
architecture implements the concept of uniform information density, which ensures
both that generated volumes are equally high quality at all points in space and
that arbitrarily large volumes can be generated. SliceGAN has been
successfully trained on a diverse set of materials, demonstrating the
widespread applicability of this tool. The quality of generated micrographs is
shown through a statistical comparison of synthetic and real datasets of a
battery electrode in terms of key microstructural metrics. Finally, we find
that the generation time for a $10^8$ voxel volume is on the order of a few
seconds, yielding a path for future studies into high-throughput
microstructural optimisation.
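The core mechanism, as described in the abstract, is that the generator produces volumes while the discriminator only ever sees planes: each generated volume is sliced along x, y and z, and the resulting 2D slices compete with real micrograph crops. Below is a minimal PyTorch sketch of that loop, reconstructed from the abstract alone; all class names and hyperparameters are illustrative assumptions, and details such as the authors' exact layer configuration and Wasserstein gradient penalty are omitted.

```python
# Minimal sketch of the SliceGAN idea: a 3D generator trained against a
# purely 2D discriminator by slicing its output along all three axes.
# Names and hyperparameters are illustrative, not the authors' code.
import torch
import torch.nn as nn

class Generator3D(nn.Module):
    """Spatial latent -> volume, via 3D transposed convolutions.

    Uniform information density constrains each layer's (kernel k, stride s,
    padding p); k=4, s=2, p=2 is one setting consistent with the paper's
    requirements (s < k, k divisible by s, p >= k - s), so every voxel is
    produced by the same number of kernel applications.
    """
    def __init__(self, z_ch=64, ch=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose3d(z_ch, ch, 4, 2, 2), nn.BatchNorm3d(ch), nn.ReLU(),
            nn.ConvTranspose3d(ch, ch // 2, 4, 2, 2), nn.BatchNorm3d(ch // 2), nn.ReLU(),
            nn.ConvTranspose3d(ch // 2, ch // 4, 4, 2, 2), nn.BatchNorm3d(ch // 4), nn.ReLU(),
            nn.ConvTranspose3d(ch // 4, 1, 4, 2, 2), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

class Discriminator2D(nn.Module):
    """An ordinary 2D CNN critic that scores individual slices."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, 2 * ch, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(2 * ch, 1, 4, 1, 0),
        )

    def forward(self, x):
        return self.net(x)

def all_slices(vol, axis):
    """Flatten a (B, C, D, H, W) volume into a batch of 2D slices along one axis."""
    perm = {0: (0, 2, 1, 3, 4), 1: (0, 3, 1, 2, 4), 2: (0, 4, 1, 2, 3)}[axis]
    s = vol.permute(*perm).contiguous()
    return s.view(-1, vol.size(1), *s.shape[3:])

G, D = Generator3D(), Discriminator2D()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)

z = torch.randn(2, 64, 4, 4, 4)     # spatial latent; each (4, 2, 2) layer maps n -> 2n - 2
fake_vol = G(z)                      # -> (2, 1, 34, 34, 34) with these four layers
fake = torch.cat([all_slices(fake_vol, a) for a in range(3)])
real = torch.rand(64, 1, fake_vol.size(-1), fake_vol.size(-1)) * 2 - 1  # stand-in micrograph crops

# Wasserstein-style critic update (gradient penalty omitted for brevity).
d_loss = D(fake.detach()).mean() - D(real).mean()
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator update: a good volume must look real in every slice, along every axis.
g_loss = -D(fake).mean()
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Because the latent tensor has spatial extent, larger volumes come from simply enlarging z, and the uniform-information-density constraint on (k, s, p) is what keeps the enlarged output equally sharp everywhere; a $10^8$ voxel volume then costs a single generator forward pass.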
Related papers
- GeoLRM: Geometry-Aware Large Reconstruction Model for High-Quality 3D Gaussian Generation [65.33726478659304]
We introduce the Geometry-Aware Large Reconstruction Model (GeoLRM), an approach which can predict high-quality assets with 512k Gaussians and 21 input images in only 11 GB GPU memory.
Previous works neglect the inherent sparsity of 3D structure and do not utilize explicit geometric relationships between 3D and 2D images.
GeoLRM tackles these issues by incorporating a novel 3D-aware transformer structure that directly processes 3D points and uses deformable cross-attention mechanisms.
arXiv Detail & Related papers (2024-06-21T17:49:31Z)
- 3DiffTection: 3D Object Detection with Geometry-Aware Diffusion Features [70.50665869806188]
3DiffTection is a state-of-the-art method for 3D object detection from single images.
We fine-tune a diffusion model to perform novel view synthesis conditioned on a single image.
We further train the model on target data with detection supervision.
arXiv Detail & Related papers (2023-11-07T23:46:41Z)
- Using convolutional neural networks for stereological characterization of 3D hetero-aggregates based on synthetic STEM data [0.0]
A parametric 3D model is presented, from which a wide spectrum of virtual hetero-aggregates can be generated.
The virtual structures are passed to a physics-based simulation tool in order to generate virtual scanning transmission electron microscopy (STEM) images.
Convolutional neural networks are trained to predict the 3D structures of hetero-aggregates from 2D STEM images (a toy sketch of this slice-to-volume direction appears after this list).
arXiv Detail & Related papers (2023-10-27T22:49:08Z)
- Guide3D: Create 3D Avatars from Text and Image Guidance [55.71306021041785]
Guide3D is a text-and-image-guided generative model for 3D avatar generation based on diffusion models.
Our framework produces topologically and structurally correct geometry and high-resolution textures.
arXiv Detail & Related papers (2023-08-18T17:55:47Z)
- Neural Progressive Meshes [54.52990060976026]
We propose a method to transmit 3D meshes with a shared learned generative space.
We learn this space using a subdivision-based encoder-decoder architecture trained in advance on a large collection of surfaces.
We evaluate our method on a diverse set of complex 3D shapes and demonstrate that it outperforms baselines in terms of compression ratio and reconstruction quality.
arXiv Detail & Related papers (2023-08-10T17:58:02Z)
- GVP: Generative Volumetric Primitives [76.95231302205235]
We present Generative Volumetric Primitives (GVP), the first pure 3D generative model that can sample and render 512-resolution images in real-time.
GVP jointly models a number of primitives and their spatial information, both of which can be efficiently generated via a 2D convolutional network.
Experiments on several datasets demonstrate superior efficiency and 3D consistency of GVP over the state-of-the-art.
arXiv Detail & Related papers (2023-03-31T16:50:23Z)
- Deep Generative Models on 3D Representations: A Survey [81.73385191402419]
Generative models aim to learn the distribution of observed data by generating new instances.
Recently, researchers have started to shift their focus from 2D to 3D space.
However, representing 3D data poses significantly greater challenges.
arXiv Detail & Related papers (2022-10-27T17:59:50Z)
- MicroLib: A library of 3D microstructures generated from 2D micrographs using SliceGAN [0.0]
3D microstructural datasets are commonly used to define the geometrical domains used in finite element modelling.
The machine learning method SliceGAN was developed to statistically generate 3D microstructural datasets of arbitrary size.
We present the results from applying SliceGAN to 87 different microstructures, ranging from biological materials to high-strength steels.
arXiv Detail & Related papers (2022-10-12T19:13:28Z)
- Improved $\alpha$-GAN architecture for generating 3D connected volumes with an application to radiosurgery treatment planning [0.5156484100374059]
We propose an improved version of 3D $\alpha$-GAN for generating connected 3D volumes.
Our model can successfully generate high-quality 3D tumor volumes and associated treatment specifications.
The capability of the improved 3D $\alpha$-GAN makes it a valuable source of synthetic medical image data.
arXiv Detail & Related papers (2022-07-13T16:39:47Z)
- Super-resolution of multiphase materials by combining complementary 2D and 3D image data using generative adversarial networks [0.0]
We present a method for combining information from pairs of distinct but complementary imaging techniques.
Specifically, we use deep convolutional generative adversarial networks to implement super-resolution, style transfer and dimensionality expansion.
Having confidence in the accuracy of our method, we then demonstrate its power by applying it to a real data pair from a lithium-ion battery electrode.
arXiv Detail & Related papers (2021-10-21T17:07:57Z)
- 3DMaterialGAN: Learning 3D Shape Representation from Latent Space for Materials Science Applications [7.449993399792031]
3DMaterialGAN is capable of recognizing and synthesizing individual grains whose morphology conforms to a given 3D polycrystalline material microstructure.
We show that this method performs comparably or better than state-of-the-art on benchmark annotated 3D datasets.
This framework lays the foundation for the recognition and synthesis of polycrystalline material microstructures.
arXiv Detail & Related papers (2020-07-27T21:55:16Z)
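As referenced in the stereological-characterization entry above, the CNN approach there runs in the opposite direction to SliceGAN: from a 2D STEM image to a 3D structure. The following is a toy slice-to-volume sketch under assumed shapes, not the architecture from that paper: a 2D encoder compresses the image and a 3D decoder lifts the features into a voxel grid.

```python
# Toy 2D-image -> 3D-volume CNN (illustrative only, not the paper's model).
import torch
import torch.nn as nn

class Slice2Volume(nn.Module):
    def __init__(self, ch=32, depth=32):
        super().__init__()
        self.depth = depth
        self.encode = nn.Sequential(              # 2D: (1, 64, 64) -> (4*ch, 8, 8)
            nn.Conv2d(1, ch, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(ch, 2 * ch, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(2 * ch, 4 * ch, 4, 2, 1), nn.ReLU(),
        )
        self.decode = nn.Sequential(              # 3D: (4*ch, d/8, 8, 8) -> (1, d, 64, 64)
            nn.ConvTranspose3d(4 * ch, 2 * ch, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose3d(2 * ch, ch, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose3d(ch, 1, 4, 2, 1), nn.Sigmoid(),  # voxel occupancy
        )

    def forward(self, img):
        f = self.encode(img)                                    # (B, 4*ch, 8, 8)
        f = f.unsqueeze(2).repeat(1, 1, self.depth // 8, 1, 1)  # seed a shallow 3D grid
        return self.decode(f)                                   # (B, 1, depth, 64, 64)

print(Slice2Volume()(torch.rand(1, 1, 64, 64)).shape)  # torch.Size([1, 1, 32, 64, 64])
```

Training such a network would minimise a voxel-wise loss (e.g. binary cross-entropy) against the simulated ground-truth aggregates, with the physics-based STEM simulator supplying the paired data.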
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.