Learning geometry-image representation for 3D point cloud generation
- URL: http://arxiv.org/abs/2011.14289v1
- Date: Sun, 29 Nov 2020 05:21:10 GMT
- Title: Learning geometry-image representation for 3D point cloud generation
- Authors: Lei Wang, Yuchun Huang, Pengjie Tao, Yaolin Hou, Yuxuan Liu
- Abstract summary: We propose a novel geometry image based generator (GIG) to convert the 3D point cloud generation problem to a 2D geometry image generation problem.
Experiments on both rigid and non-rigid 3D object datasets have demonstrated the promising performance of our method.
- Score: 5.3485743892868545
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the problem of generating point clouds of 3D objects. Instead of
discretizing the object into 3D voxels with huge computational cost and
resolution limitations, we propose a novel geometry image based generator (GIG)
to convert the 3D point cloud generation problem to a 2D geometry image
generation problem. Since the geometry image is a completely regular 2D array
that contains the surface points of the 3D object, it leverages both the
regularity of the 2D array and the geodesic neighborhood of the 3D surface.
Thus, one significant benefit of our GIG is that it allows us to directly
generate the 3D point clouds using efficient 2D image generation networks.
Experiments on both rigid and non-rigid 3D object datasets have demonstrated
the promising performance of our method, which not only creates plausible and
novel 3D objects but also learns a probabilistic latent space that supports
shape editing operations such as interpolation and arithmetic.
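The geometry-image idea in the abstract can be sketched in a few lines: a regular H x W x 3 array whose entry (i, j) stores the 3D coordinates of a surface point, so grid neighbors approximate geodesic neighbors on the surface and flattening the array yields a point cloud. The sphere parameterization below is a toy illustration of the data structure only, not the paper's actual construction:

```python
import numpy as np

def sphere_geometry_image(h=32, w=32):
    """Toy geometry image: an h x w x 3 grid whose (i, j) entry holds
    the 3D coordinates of a point on a unit sphere, parameterized by
    spherical angles. Adjacent grid cells map to nearby surface points."""
    theta = np.linspace(0.0, np.pi, h)          # polar angle, one per row
    phi = np.linspace(0.0, 2.0 * np.pi, w)      # azimuth, one per column
    t, p = np.meshgrid(theta, phi, indexing="ij")
    return np.stack([np.sin(t) * np.cos(p),
                     np.sin(t) * np.sin(p),
                     np.cos(t)], axis=-1)       # shape (h, w, 3)

gi = sphere_geometry_image()
point_cloud = gi.reshape(-1, 3)                 # regular 2D array -> N x 3 point cloud
print(gi.shape, point_cloud.shape)              # (32, 32, 3) (1024, 3)
```

Because the representation is a fixed-size 2D array, any 2D image generator that outputs such an array implicitly outputs a 3D point cloud, which is the efficiency argument the abstract makes.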
Related papers
- Deep Geometric Moments Promote Shape Consistency in Text-to-3D Generation [27.43973967994717]
MT3D is a text-to-3D generative model that leverages a high-fidelity 3D object to overcome viewpoint bias.
We employ depth maps derived from a high-quality 3D model as control signals to guarantee that the generated 2D images preserve the fundamental shape and structure.
By incorporating geometric details from a 3D asset, MT3D enables the creation of diverse and geometrically consistent objects.
arXiv Detail & Related papers (2024-08-12T06:25:44Z)
- ScalingGaussian: Enhancing 3D Content Creation with Generative Gaussian Splatting [30.99112626706754]
The creation of high-quality 3D assets is paramount for applications in digital heritage, entertainment, and robotics.
Traditionally, this process necessitates skilled professionals and specialized software for modeling.
We introduce a novel 3D content creation framework, which generates 3D textures efficiently.
arXiv Detail & Related papers (2024-07-26T18:26:01Z)
- What You See is What You GAN: Rendering Every Pixel for High-Fidelity Geometry in 3D GANs [82.3936309001633]
3D-aware Generative Adversarial Networks (GANs) have shown remarkable progress in learning to generate multi-view-consistent images and 3D geometries.
Yet, the significant memory and computational costs of dense sampling in volume rendering have forced 3D GANs to adopt patch-based training or employ low-resolution rendering with post-processing 2D super resolution.
We propose techniques to scale neural volume rendering to the much higher resolution of native 2D images, thereby resolving fine-grained 3D geometry with unprecedented detail.
arXiv Detail & Related papers (2024-01-04T18:50:38Z)
- Points-to-3D: Bridging the Gap between Sparse Points and Shape-Controllable Text-to-3D Generation [16.232803881159022]
We propose a flexible framework of Points-to-3D to bridge the gap between sparse yet freely available 3D points and realistic shape-controllable 3D generation.
The core idea of Points-to-3D is to introduce controllable sparse 3D points to guide the text-to-3D generation.
arXiv Detail & Related papers (2023-07-26T02:16:55Z)
- Self-Supervised Geometry-Aware Encoder for Style-Based 3D GAN Inversion [115.82306502822412]
StyleGAN has achieved great progress in 2D face reconstruction and semantic editing via image inversion and latent editing.
A corresponding generic 3D GAN inversion framework is still missing, limiting the applications of 3D face reconstruction and semantic editing.
We study the challenging problem of 3D GAN inversion where a latent code is predicted given a single face image to faithfully recover its 3D shapes and detailed textures.
arXiv Detail & Related papers (2022-12-14T18:49:50Z)
- XDGAN: Multi-Modal 3D Shape Generation in 2D Space [60.46777591995821]
We propose a novel method to convert 3D shapes into compact 1-channel geometry images and leverage StyleGAN3 and image-to-image translation networks to generate 3D objects in 2D space.
The generated geometry images are quick to convert to 3D meshes, enabling real-time 3D object synthesis, visualization and interactive editing.
We show both quantitatively and qualitatively that our method is highly effective at various tasks such as 3D shape generation, single view reconstruction and shape manipulation, while being significantly faster and more flexible compared to recent 3D generative models.
arXiv Detail & Related papers (2022-10-06T15:54:01Z)
- Improving 3D-aware Image Synthesis with A Geometry-aware Discriminator [68.0533826852601]
3D-aware image synthesis aims at learning a generative model that can render photo-realistic 2D images while capturing decent underlying 3D shapes.
Existing methods struggle to recover even moderately accurate underlying 3D shapes.
We propose a geometry-aware discriminator to improve 3D-aware GANs.
arXiv Detail & Related papers (2022-09-30T17:59:37Z)
- DM-NeRF: 3D Scene Geometry Decomposition and Manipulation from 2D Images [15.712721653893636]
DM-NeRF is among the first to simultaneously reconstruct, decompose, manipulate and render complex 3D scenes in a single pipeline.
Our method can accurately decompose all 3D objects from 2D views, allowing any interested object to be freely manipulated in 3D space.
arXiv Detail & Related papers (2022-08-15T14:32:10Z)
- Unsupervised Learning of Fine Structure Generation for 3D Point Clouds by 2D Projection Matching [66.98712589559028]
We propose an unsupervised approach for 3D point cloud generation with fine structures.
Our method can recover fine 3D structures from 2D silhouette images at different resolutions.
arXiv Detail & Related papers (2021-08-08T22:15:31Z)
- Do 2D GANs Know 3D Shape? Unsupervised 3D Shape Reconstruction from 2D Image GANs [156.1209884183522]
State-of-the-art 2D generative models like GANs show unprecedented quality in modeling the natural image manifold.
We present the first attempt to directly mine 3D geometric cues from an off-the-shelf 2D GAN that is trained on RGB images only.
arXiv Detail & Related papers (2020-11-02T09:38:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.