Unsupervised 3D Shape Completion through GAN Inversion
- URL: http://arxiv.org/abs/2104.13366v2
- Date: Thu, 29 Apr 2021 13:09:32 GMT
- Title: Unsupervised 3D Shape Completion through GAN Inversion
- Authors: Junzhe Zhang, Xinyi Chen, Zhongang Cai, Liang Pan, Haiyu Zhao, Shuai
Yi, Chai Kiat Yeo, Bo Dai, Chen Change Loy
- Abstract summary: We present ShapeInversion, which introduces Generative Adversarial Network (GAN) inversion to shape completion for the first time.
ShapeInversion uses a GAN pre-trained on complete shapes, searching its latent space for a code whose generated complete shape best fits the given partial input.
On the ShapeNet benchmark, the proposed ShapeInversion outperforms the SOTA unsupervised method, and is comparable with supervised methods that are learned using paired data.
- Score: 116.27680045885849
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most 3D shape completion approaches rely heavily on partial-complete shape
pairs and learn in a fully supervised manner. Despite their impressive
performances on in-domain data, when generalizing to partial shapes in other
forms or real-world partial scans, they often obtain unsatisfactory results due
to domain gaps. In contrast to previous fully supervised approaches, in this
paper we present ShapeInversion, which introduces Generative Adversarial
Network (GAN) inversion to shape completion for the first time. ShapeInversion
uses a GAN pre-trained on complete shapes by searching for a latent code that
gives a complete shape that best reconstructs the given partial input. In this
way, ShapeInversion no longer needs paired training data, and is capable of
incorporating the rich prior captured in a well-trained generative model. On
the ShapeNet benchmark, the proposed ShapeInversion outperforms the SOTA
unsupervised method, and is comparable with supervised methods that are learned
using paired data. It also demonstrates remarkable generalization ability,
giving robust results for real-world scans and partial inputs of various forms
and incompleteness levels. Importantly, ShapeInversion naturally enables a
series of additional abilities thanks to the involvement of a pre-trained GAN,
such as producing multiple valid complete shapes for an ambiguous partial
input, as well as shape manipulation and interpolation.
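The core procedure described in the abstract, searching a pre-trained GAN's latent space for a code whose generated shape best explains the partial scan, can be illustrated with a short optimization loop. The sketch below is not the paper's implementation: the generator interface, the latent dimension, and the use of a unidirectional Chamfer distance as the partial-matching loss are assumptions made for illustration, and the full method includes components not reproduced here.

```python
# Minimal sketch of GAN inversion for shape completion (illustrative only).
# Assumes `generator` is a pre-trained point-cloud GAN mapping a latent code
# of shape (1, latent_dim) to a complete point cloud of shape (1, N, 3).
import torch

def unidirectional_chamfer(partial, generated):
    """Mean distance from each observed point to its nearest generated point.

    Only the partial-to-generated direction is used, so the generator is not
    penalized for completing regions that are missing from the input.
    partial: (P, 3), generated: (N, 3).
    """
    d = torch.cdist(partial, generated)     # (P, N) pairwise distances
    return d.min(dim=1).values.mean()

def invert(generator, partial, latent_dim=128, steps=500, lr=1e-2):
    """Optimize a latent code so that generator(z) reconstructs the partial scan."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        complete = generator(z).squeeze(0)  # (N, 3) candidate complete shape
        loss = unidirectional_chamfer(partial, complete)
        loss.backward()
        opt.step()
    return generator(z).detach(), z.detach()
```

Because only the observed points contribute to the loss, the pre-trained generator's prior is what fills in the missing regions, which is why no paired partial-complete supervision is required.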
Related papers
- 3D Shape Completion on Unseen Categories: A Weakly-supervised Approach [61.76304400106871]
We introduce a novel weakly-supervised framework to reconstruct the complete shapes from unseen categories.
We first propose an end-to-end prior-assisted shape learning network that leverages data from the seen categories to infer a coarse shape.
In addition, we propose a self-supervised shape refinement model to further refine the coarse shape.
arXiv Detail & Related papers (2024-01-19T09:41:09Z)
- Diverse Shape Completion via Style Modulated Generative Adversarial Networks [0.0]
Shape completion aims to recover the full 3D geometry of an object from a partial observation.
This problem is inherently multi-modal since there can be many ways to plausibly complete the missing regions of a shape.
We propose a novel conditional generative adversarial network that can produce many diverse plausible completions of a partially observed point cloud.
arXiv Detail & Related papers (2023-11-18T23:40:20Z)
- DiffComplete: Diffusion-based Generative 3D Shape Completion [114.43353365917015]
We introduce a new diffusion-based approach for shape completion on 3D range scans.
We strike a balance between realism, multi-modality, and high fidelity.
DiffComplete sets a new SOTA performance on two large-scale 3D shape completion benchmarks.
arXiv Detail & Related papers (2023-06-28T16:07:36Z)
- PatchRD: Detail-Preserving Shape Completion by Learning Patch Retrieval and Deformation [59.70430570779819]
We introduce a data-driven shape completion approach that focuses on completing geometric details of missing regions of 3D shapes.
Our key insight is to copy and deform patches from the partial input to complete missing regions.
We leverage repeating patterns by retrieving patches from the partial input, and learn global structural priors by using a neural network to guide the retrieval and deformation steps.
arXiv Detail & Related papers (2022-07-24T18:59:09Z)
- Implicit Shape Completion via Adversarial Shape Priors [46.48590354256945]
We present a novel neural implicit shape method for partial point cloud completion.
We combine a conditional Deep-SDF architecture with learned, adversarial shape priors.
We train a PointNet++ discriminator that impels the generator to produce plausible, globally consistent reconstructions.
arXiv Detail & Related papers (2022-04-21T12:49:59Z)
- ShapeFormer: Transformer-based Shape Completion via Sparse Representation [41.33457875133559]
We present ShapeFormer, a network that produces a distribution of object completions conditioned on incomplete, and possibly noisy, point clouds.
The resultant distribution can then be sampled to generate likely completions, each exhibiting plausible shape details while being faithful to the input.
arXiv Detail & Related papers (2022-01-25T13:58:30Z)
- Shape Completion via IMLE [9.716911810130576]
Shape completion is the problem of completing partial input shapes such as partial scans.
We propose a novel multimodal shape completion technique that is effectively able to learn a one-to-many mapping.
We show that our method is superior to alternatives in terms of completeness and diversity of shapes.
arXiv Detail & Related papers (2021-06-30T17:45:10Z)
- Weakly-supervised 3D Shape Completion in the Wild [91.04095516680438]
We address the problem of learning 3D complete shape from unaligned and real-world partial point clouds.
We propose a weakly-supervised method to estimate both 3D canonical shape and 6-DoF pose for alignment, given multiple partial observations.
Experiments on both synthetic and real data show that it is feasible and promising to learn 3D shape completion through large-scale data without shape and pose supervision.
arXiv Detail & Related papers (2020-08-20T17:53:42Z)