3DMaterialGAN: Learning 3D Shape Representation from Latent Space for
Materials Science Applications
- URL: http://arxiv.org/abs/2007.13887v1
- Date: Mon, 27 Jul 2020 21:55:16 GMT
- Title: 3DMaterialGAN: Learning 3D Shape Representation from Latent Space for
Materials Science Applications
- Authors: Devendra K. Jangid, Neal R. Brodnik, Amil Khan, McLean P. Echlin,
Tresa M. Pollock, Sam Daly, B. S. Manjunath
- Abstract summary: 3DMaterialGAN is capable of recognizing and synthesizing individual grains whose morphology conforms to a given 3D polycrystalline material microstructure.
We show that this method performs comparably to or better than the state of the art on benchmark annotated 3D datasets.
This framework lays the foundation for the recognition and synthesis of polycrystalline material microstructures.
- Score: 7.449993399792031
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the field of computer vision, unsupervised learning for 2D object
generation has advanced rapidly in the past few years. However, 3D object
generation has not garnered the same attention or success as its predecessor.
To facilitate novel progress at the intersection of computer vision and
materials science, we propose a 3DMaterialGAN network that is capable of
recognizing and synthesizing individual grains whose morphology conforms to a
given 3D polycrystalline material microstructure. This Generative Adversarial
Network (GAN) architecture yields complex 3D objects from probabilistic latent
space vectors with no additional information from 2D rendered images. We show
that this method performs comparably to or better than the state of the art on
benchmark annotated 3D datasets, while also being able to distinguish and
generate objects that are not easily annotated, such as grain morphologies. The
value of our algorithm is demonstrated with analysis on experimental real-world
data, namely generating 3D grain structures found in a commercially relevant
wrought titanium alloy, which were validated through statistical shape
comparison. This framework lays the foundation for the recognition and
synthesis of polycrystalline material microstructures, which are used in
additive manufacturing, aerospace, and structural design applications.
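The abstract describes mapping probabilistic latent space vectors directly to complex 3D objects with no 2D rendering supervision. As a hypothetical illustration of that general idea only (not the authors' architecture, whose details are not given here), a minimal PyTorch-style voxel generator might look like this:

```python
# Minimal sketch of a voxel-GAN generator: latent vector -> 3D volume.
# All layer counts and sizes are illustrative assumptions, not the paper's design.
import torch
import torch.nn as nn

class VoxelGenerator(nn.Module):
    def __init__(self, latent_dim: int = 200):
        super().__init__()
        self.net = nn.Sequential(
            # Treat the latent vector as a 1x1x1 volume with latent_dim channels.
            nn.ConvTranspose3d(latent_dim, 256, kernel_size=4, stride=1),      # -> 4^3
            nn.BatchNorm3d(256), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(256, 128, kernel_size=4, stride=2, padding=1),  # -> 8^3
            nn.BatchNorm3d(128), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(128, 64, kernel_size=4, stride=2, padding=1),   # -> 16^3
            nn.BatchNorm3d(64), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(64, 1, kernel_size=4, stride=2, padding=1),     # -> 32^3
            nn.Sigmoid(),  # per-voxel occupancy probability
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, latent_dim), sampled from e.g. a standard normal.
        return self.net(z.view(z.size(0), -1, 1, 1, 1))

z = torch.randn(8, 200)       # probabilistic latent space vectors
voxels = VoxelGenerator()(z)  # (8, 1, 32, 32, 32) candidate grain volumes
```

The sketch only captures the latent-vector-to-volume mapping the abstract describes; 3DMaterialGAN's actual generator, discriminator, and training procedure differ.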
Related papers
- Enhancing Generalizability of Representation Learning for Data-Efficient 3D Scene Understanding [50.448520056844885]
We propose a generative Bayesian network to produce diverse synthetic scenes with real-world patterns.
A series of experiments robustly demonstrates our method's consistent superiority over existing state-of-the-art pre-training approaches.
arXiv Detail & Related papers (2024-06-17T07:43:53Z)
- SUGAR: Pre-training 3D Visual Representations for Robotics [85.55534363501131]
We introduce a novel 3D pre-training framework for robotics named SUGAR.
SUGAR captures semantic, geometric and affordance properties of objects through 3D point clouds.
We show that SUGAR's 3D representation outperforms state-of-the-art 2D and 3D representations.
arXiv Detail & Related papers (2024-04-01T21:23:03Z)
- A Generative Machine Learning Model for Material Microstructure 3D Reconstruction and Performance Evaluation [4.169915659794567]
The dimensional extension from 2D to 3D is viewed as a highly challenging inverse problem from the current technological perspective.
A novel generative model that integrates the multiscale properties of U-Net with the generative capabilities of GANs has been proposed.
The model's accuracy is further improved by combining the image regularization loss with the Wasserstein distance loss.
arXiv Detail & Related papers (2024-02-24T13:42:34Z)
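The entry above improves accuracy by combining an image regularization loss with a Wasserstein distance loss. The summary does not specify either term, so the following is a generic WGAN-style sketch with an assumed L1 regularizer and an assumed weighting hyperparameter:

```python
# Generic sketch: Wasserstein (critic) objective plus an image regularization
# term. LAMBDA_REG and the L1 regularizer are assumptions, not the paper's choices.
import torch
import torch.nn.functional as F

LAMBDA_REG = 10.0  # assumed weighting between the two loss terms

def generator_loss(critic, fake_volumes, reference_volumes):
    # WGAN generator term: push the critic's score for fakes upward.
    wasserstein_term = -critic(fake_volumes).mean()
    # Image regularization term: here an L1 penalty toward reference data.
    reg_term = F.l1_loss(fake_volumes, reference_volumes)
    return wasserstein_term + LAMBDA_REG * reg_term

def critic_loss(critic, real_volumes, fake_volumes):
    # WGAN critic term: score reals high and fakes low
    # (Lipschitz constraint / gradient penalty omitted for brevity).
    return critic(fake_volumes).mean() - critic(real_volumes).mean()
```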
- MinD-3D: Reconstruct High-quality 3D objects in Human Brain [50.534007259536715]
Recon3DMind is an innovative task aimed at reconstructing 3D visuals from Functional Magnetic Resonance Imaging (fMRI) signals.
We present the fMRI-Shape dataset, which includes data from 14 participants and features 360-degree videos of 3D objects.
We propose MinD-3D, a novel and effective three-stage framework specifically designed to decode the brain's 3D visual information from fMRI signals.
arXiv Detail & Related papers (2023-12-12T18:21:36Z)
- Using convolutional neural networks for stereological characterization of 3D hetero-aggregates based on synthetic STEM data [0.0]
A parametric 3D model is presented, from which a wide spectrum of virtual hetero-aggregates can be generated.
The virtual structures are passed to a physics-based simulation tool in order to generate virtual scanning transmission electron microscopy (STEM) images.
Convolutional neural networks are trained to predict 3D structures of hetero-aggregates from 2D STEM images.
arXiv Detail & Related papers (2023-10-27T22:49:08Z)
- AutoDecoding Latent 3D Diffusion Models [95.7279510847827]
We present a novel approach to the generation of static and articulated 3D assets that has a 3D autodecoder at its core.
The 3D autodecoder framework embeds properties learned from the target dataset in the latent space.
We then identify the appropriate intermediate volumetric latent space, and introduce robust normalization and de-normalization operations.
arXiv Detail & Related papers (2023-07-07T17:59:14Z)
- MicroLib: A library of 3D microstructures generated from 2D micrographs using SliceGAN [0.0]
3D microstructural datasets are commonly used to define the geometrical domains used in finite element modelling.
The machine learning method SliceGAN was developed to statistically generate 3D microstructural datasets of arbitrary size.
We present the results from applying SliceGAN to 87 different microstructures, ranging from biological materials to high-strength steels.
arXiv Detail & Related papers (2022-10-12T19:13:28Z)
- Object Scene Representation Transformer [56.40544849442227]
We introduce Object Scene Representation Transformer (OSRT), a 3D-centric model in which individual object representations naturally emerge through novel view synthesis.
OSRT scales to significantly more complex scenes with larger diversity of objects and backgrounds than existing methods.
It is multiple orders of magnitude faster at compositional rendering thanks to its light field parametrization and the novel Slot Mixer decoder.
arXiv Detail & Related papers (2022-06-14T15:40:47Z)
- 3D microstructural generation from 2D images of cement paste using generative adversarial networks [13.746290854403874]
This paper proposes a generative adversarial networks-based method for generating 3D microstructures from a single two-dimensional (2D) image.
In the method, a framework is designed to synthesize 3D images by learning microstructural information from a 2D cross-sectional image.
Visual observation confirms that the generated 3D images exhibit similar microstructural features to the 2D images, including similar pore distribution and particle morphology.
arXiv Detail & Related papers (2022-04-04T16:50:03Z)
- Generating 3D structures from a 2D slice with GAN-based dimensionality expansion [0.0]
Generative adversarial networks (GANs) can be trained to generate 3D image data, which is useful for design optimisation.
We introduce a generative adversarial network architecture, SliceGAN, which is able to synthesise high fidelity 3D datasets using a single representative 2D image.
arXiv Detail & Related papers (2021-02-10T18:46:17Z)
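The SliceGAN entry above trains a 3D generator from a single representative 2D image. Its core mechanism, as described in the SliceGAN paper, is to resolve the 2D/3D incompatibility by showing a 2D discriminator axis-aligned slices of each generated volume. A minimal sketch of that slicing step follows (shapes are assumptions; a cubic volume is required so slice shapes match):

```python
# Re-batch all axis-aligned slices of a generated volume as 2D images,
# so a 2D discriminator can judge a 3D generator's output.
import torch

def volume_to_slices(volume: torch.Tensor) -> torch.Tensor:
    # volume: (batch, channels, D, H, W) with D == H == W (cubic).
    b, c, d, h, w = volume.shape
    slices_x = volume.permute(0, 2, 1, 3, 4).reshape(b * d, c, h, w)  # along D
    slices_y = volume.permute(0, 3, 1, 2, 4).reshape(b * h, c, d, w)  # along H
    slices_z = volume.permute(0, 4, 1, 2, 3).reshape(b * w, c, d, h)  # along W
    return torch.cat([slices_x, slices_y, slices_z], dim=0)

vol = torch.rand(2, 1, 64, 64, 64)  # stand-in for generator output
print(volume_to_slices(vol).shape)  # torch.Size([384, 1, 64, 64])
```

During training, these re-batched slices play the role of fake 2D images judged against patches of the real micrograph.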
- Learning Unsupervised Hierarchical Part Decomposition of 3D Objects from a Single RGB Image [102.44347847154867]
We propose a novel formulation that allows us to jointly recover the geometry of a 3D object as a set of primitives.
Our model recovers the higher level structural decomposition of various objects in the form of a binary tree of primitives.
Our experiments on the ShapeNet and D-FAUST datasets demonstrate that considering the organization of parts indeed facilitates reasoning about 3D geometry.
arXiv Detail & Related papers (2020-04-02T17:58:05Z)