3D Shape Completion with Test-Time Training
- URL: http://arxiv.org/abs/2410.18668v1
- Date: Thu, 24 Oct 2024 11:59:32 GMT
- Title: 3D Shape Completion with Test-Time Training
- Authors: Michael Schopf-Kuester, Zorah Lähner, Michael Moeller
- Abstract summary: We use a decoder network motivated by related work on the prediction of signed distance functions (DeepSDF).
We demonstrate that our overfitting to the fractured parts leads to significant improvements in the restoration of eight different shape categories of the ShapeNet data set in terms of their chamfer distances.
- Score: 6.764513343390546
- License:
- Abstract: This work addresses the problem of \textit{shape completion}, i.e., the task of restoring incomplete shapes by predicting their missing parts. While previous works have often predicted the fractured and restored shape in one step, we approach the task by separately predicting the fractured and newly restored parts, but ensuring these predictions are interconnected. We use a decoder network motivated by related work on the prediction of signed distance functions (DeepSDF). In particular, our representation allows us to consider test-time-training, i.e., finetuning network parameters to match the given incomplete shape more accurately during inference. While previous works often have difficulties with artifacts around the fracture boundary, we demonstrate that our overfitting to the fractured parts leads to significant improvements in the restoration of eight different shape categories of the ShapeNet data set in terms of their chamfer distances.
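The core idea of test-time training — fine-tuning a latent representation at inference so the decoder matches the observed partial shape — can be illustrated with a toy numpy sketch. A frozen random network and finite-difference gradients stand in for the paper's DeepSDF-style decoder and backpropagation; all names and shapes here are illustrative, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "DeepSDF-style" decoder with frozen random weights: maps a latent
# code z and query points to scalar signed-distance values.
A = rng.normal(size=(3, 32)) * 0.5   # point embedding
B = rng.normal(size=(8, 32)) * 0.5   # latent embedding
c = rng.normal(size=(32,)) * 0.5     # output head

def decoder(z, pts):
    return np.tanh(pts @ A + z @ B) @ c

# "Fractured" input: SDF of a unit sphere, observed only on one half
# (the x >= 0 hemisphere stands in for the incomplete scan).
pts = rng.normal(size=(256, 3))
pts = pts[pts[:, 0] >= 0]
target = np.linalg.norm(pts, axis=1) - 1.0

def loss(z):
    return np.mean((decoder(z, pts) - target) ** 2)

# Test-time training: gradient descent on the latent code alone, using
# finite differences (a real system would backpropagate through the
# network, and may also update the network weights themselves).
z = np.zeros(8)
eps, lr = 1e-4, 0.01
for _ in range(500):
    g = np.array([(loss(z + eps * e) - loss(z - eps * e)) / (2 * eps)
                  for e in np.eye(8)])
    z -= lr * g
```

After the loop, the latent code fits the observed partial SDF more closely than the initial code; in the paper's setting this tighter fit around the fracture is what reduces boundary artifacts in the restored part.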
Related papers
- 3D Shape Completion on Unseen Categories:A Weakly-supervised Approach [61.76304400106871]
We introduce a novel weakly-supervised framework to reconstruct the complete shapes from unseen categories.
We first propose an end-to-end prior-assisted shape learning network that leverages data from the seen categories to infer a coarse shape.
In addition, we propose a self-supervised shape refinement model to further refine the coarse shape.
arXiv Detail & Related papers (2024-01-19T09:41:09Z) - DeepJoin: Learning a Joint Occupancy, Signed Distance, and Normal Field Function for Shape Repair [0.684225774857327]
DeepJoin is an automated approach to generate high-resolution repairs for fractured shapes using deep neural networks.
We present a novel implicit shape representation for fractured shape repair that combines the occupancy function, signed distance function, and normal field.
arXiv Detail & Related papers (2022-11-22T16:44:57Z) - DeepMend: Learning Occupancy Functions to Represent Shape for Repair [0.6087960723103347]
DeepMend is a novel approach to reconstruct restorations to fractured shapes using learned occupancy functions.
We represent the occupancy of a fractured shape as the conjunction of the occupancy of an underlying complete shape and the fracture surface.
We show results with simulated fractures on synthetic and real-world scanned objects, and with scanned real fractured mugs.
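The conjunction of occupancies that DeepMend describes can be sketched in one dimension, with hand-built fields standing in for the learned occupancy functions (the axis, break position, and field definitions here are illustrative assumptions):

```python
import numpy as np

# Toy occupancy fields on a 1D axis: the underlying complete shape
# occupies [-1, 1]; the "break" field keeps the side with x <= 0.3.
xs = np.linspace(-2.0, 2.0, 401)
occ_complete = (np.abs(xs) <= 1.0).astype(float)
occ_break = (xs <= 0.3).astype(float)

# Conjunction: the fractured shape is occupied where the complete shape
# AND the break-side field are both occupied; the restoration is the
# remainder of the complete shape.
occ_fractured = occ_complete * occ_break
occ_restoration = occ_complete * (1.0 - occ_break)
```

By construction the fractured part and the restoration partition the complete shape, which is the property that lets DeepMend recover a restoration from an observed fractured input.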
arXiv Detail & Related papers (2022-10-11T18:42:20Z) - 3D Textured Shape Recovery with Learned Geometric Priors [58.27543892680264]
This technical report presents our approach to address limitations by incorporating learned geometric priors.
We generate a SMPL model from learned pose prediction and fuse it into the partial input to add prior knowledge of human bodies.
We also propose a novel completeness-aware bounding box adaptation for handling different levels of scales.
arXiv Detail & Related papers (2022-09-07T16:03:35Z) - PatchRD: Detail-Preserving Shape Completion by Learning Patch Retrieval and Deformation [59.70430570779819]
We introduce a data-driven shape completion approach that focuses on completing geometric details of missing regions of 3D shapes.
Our key insight is to copy and deform patches from the partial input to complete missing regions.
We leverage repeating patterns by retrieving patches from the partial input, and learn global structural priors by using a neural network to guide the retrieval and deformation steps.
arXiv Detail & Related papers (2022-07-24T18:59:09Z) - Implicit Shape Completion via Adversarial Shape Priors [46.48590354256945]
We present a novel neural implicit shape method for partial point cloud completion.
We combine a conditional Deep-SDF architecture with learned, adversarial shape priors.
We train a PointNet++ discriminator that impels the generator to produce plausible, globally consistent reconstructions.
arXiv Detail & Related papers (2022-04-21T12:49:59Z) - Point Scene Understanding via Disentangled Instance Mesh Reconstruction [21.92736190195887]
We propose a Disentangled Instance Mesh Reconstruction (DIMR) framework for effective point scene understanding.
A segmentation-based backbone is applied to reduce false positive object proposals.
We leverage a mesh-aware latent code space to disentangle the processes of shape completion and mesh generation.
arXiv Detail & Related papers (2022-03-31T06:36:07Z) - Unsupervised 3D Shape Completion through GAN Inversion [116.27680045885849]
We present ShapeInversion, which introduces Generative Adversarial Network (GAN) inversion to shape completion for the first time.
ShapeInversion uses a GAN pre-trained on complete shapes, searching its latent space for a code whose complete shape best fits the given partial input.
On the ShapeNet benchmark, the proposed ShapeInversion outperforms the SOTA unsupervised method, and is comparable with supervised methods that are learned using paired data.
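The inversion step — searching a frozen generator's latent space for the complete shape that best explains the partial input — can be sketched with a toy two-parameter generator and a coarse grid search; the paper instead optimizes the latent code of a real GAN by gradient descent, so everything below is an illustrative stand-in:

```python
import numpy as np

rng = np.random.default_rng(1)

# Frozen toy "generator": latent code z = (radius, z-stretch) deforms a
# fixed template sphere of points into a complete shape.
template = rng.normal(size=(512, 3))
template /= np.linalg.norm(template, axis=1, keepdims=True)

def generate(z):
    return template * z[0] * np.array([1.0, 1.0, z[1]])

# Partial input: only the x >= 0 half of the target shape
# (radius 1.3, stretch 0.7 along the z-axis).
target = generate(np.array([1.3, 0.7]))
partial = target[target[:, 0] >= 0]

def one_sided_chamfer(src, dst):
    # mean over src points of squared distance to the nearest dst point
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean()

# Inversion: pick the latent code whose complete shape best explains
# the partial observation.
grid = [(r, s) for r in np.linspace(0.5, 2.0, 16)
               for s in np.linspace(0.3, 1.5, 13)]
best = min(grid, key=lambda z: one_sided_chamfer(partial, generate(np.array(z))))
```

The one-sided chamfer distance is the key design choice: it only penalizes observed points that lie far from the generated shape, so the unobserved half of the shape is free to be completed by the generator's prior.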
arXiv Detail & Related papers (2021-04-27T17:53:46Z) - A-SDF: Learning Disentangled Signed Distance Functions for Articulated Shape Representation [62.517760545209065]
We introduce Articulated Signed Distance Functions (A-SDF) to represent articulated shapes with a disentangled latent space.
We demonstrate that our model generalizes well to out-of-distribution and unseen data, e.g., partial point clouds and real-world depth images.
arXiv Detail & Related papers (2021-04-15T17:53:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.