PatchComplete: Learning Multi-Resolution Patch Priors for 3D Shape
Completion on Unseen Categories
- URL: http://arxiv.org/abs/2206.04916v1
- Date: Fri, 10 Jun 2022 07:34:10 GMT
- Title: PatchComplete: Learning Multi-Resolution Patch Priors for 3D Shape
Completion on Unseen Categories
- Authors: Yuchen Rao, Yinyu Nie, Angela Dai
- Abstract summary: We propose PatchComplete, which learns effective shape priors based on multi-resolution local patches.
Such patch-based priors avoid overfitting to specific train categories and enable reconstruction on entirely unseen categories at test time.
We demonstrate the effectiveness of our approach on synthetic ShapeNet data as well as challenging real-scanned objects from ScanNet.
- Score: 24.724113526984
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While 3D shape representations enable powerful reasoning in many visual and
perception applications, learning 3D shape priors tends to be constrained to
the specific categories trained on, leading to an inefficient learning process,
particularly for general applications with unseen categories. Thus, we propose
PatchComplete, which learns effective shape priors based on multi-resolution
local patches, which are often more general than full shapes (e.g., chairs and
tables often both share legs) and thus enable geometric reasoning about unseen
class categories. To learn these shared substructures, we learn
multi-resolution patch priors across all train categories, which are then
associated with input partial shape observations by attention across the patch
priors, and finally decoded into a complete shape reconstruction. Such
patch-based priors avoid overfitting to specific train categories and enable
reconstruction on entirely unseen categories at test time. We demonstrate the
effectiveness of our approach on synthetic ShapeNet data as well as challenging
real-scanned objects from ScanNet, which include noise and clutter, improving
over the state of the art in novel-category shape completion by 19.3% in
Chamfer distance on ShapeNet and by 9.0% on ScanNet.
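The pipeline the abstract describes (learn patch priors per resolution, associate them with the partial input by attention, decode a complete shape) can be illustrated at its association step. Below is a minimal NumPy sketch of dot-product attention between patch features of a partial input and a bank of learned priors, repeated at several resolutions; all names, shapes, and sizes here are hypothetical illustrations, not taken from the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend_to_patch_priors(patch_feats, prior_keys, prior_vals):
    """Cross-attention from input patches to a learned prior bank.

    patch_feats: (P, d) features of P patches from the partial input.
    prior_keys, prior_vals: (K, d) embeddings of K learned patch priors.
    Returns (P, d) prior-informed features, one per input patch.
    """
    scores = patch_feats @ prior_keys.T / np.sqrt(patch_feats.shape[1])
    weights = softmax(scores, axis=-1)       # (P, K), rows sum to 1
    return weights @ prior_vals

# Multi-resolution: run the same association at several patch sizes,
# each resolution with its own prior bank (sizes here are arbitrary).
rng = np.random.default_rng(0)
d, n_priors = 16, 32
out = {}
for n_patches in (4, 8, 16):                 # coarse -> fine patch grids
    feats = rng.normal(size=(n_patches, d))
    keys = rng.normal(size=(n_priors, d))
    vals = rng.normal(size=(n_priors, d))
    out[n_patches] = attend_to_patch_priors(feats, keys, vals)
```

The design intuition matches the abstract: because the attention weights mix shared substructure priors (e.g., leg-like patches) rather than whole-shape templates, the same prior bank can inform reconstructions of categories never seen in training.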
Related papers
- Beyond Complete Shapes: A Quantitative Evaluation of 3D Shape Matching Algorithms [41.95394677818476]
Finding correspondences between 3D shapes is an important problem in computer vision, graphics and beyond.
We provide a generic and flexible framework for the procedural generation of challenging partial shape matching scenarios.
We manually create cross-dataset correspondences between seven existing (complete geometry) shape matching datasets, leading to a total of 2543 shapes.
arXiv Detail & Related papers (2024-11-05T21:08:19Z)
- 3D Shape Completion on Unseen Categories: A Weakly-supervised Approach [61.76304400106871]
We introduce a novel weakly-supervised framework to reconstruct the complete shapes from unseen categories.
We first propose an end-to-end prior-assisted shape learning network that leverages data from the seen categories to infer a coarse shape.
In addition, we propose a self-supervised shape refinement model to further refine the coarse shape.
arXiv Detail & Related papers (2024-01-19T09:41:09Z)
- 3D Textured Shape Recovery with Learned Geometric Priors [58.27543892680264]
This technical report presents our approach, which addresses the limitations of existing methods by incorporating learned geometric priors.
We generate a SMPL model from learned pose prediction and fuse it into the partial input to add prior knowledge of human bodies.
We also propose a novel completeness-aware bounding box adaptation for handling different levels of scales.
arXiv Detail & Related papers (2022-09-07T16:03:35Z)
- Single-view 3D Mesh Reconstruction for Seen and Unseen Categories [69.29406107513621]
Single-view 3D Mesh Reconstruction is a fundamental computer vision task that aims at recovering 3D shapes from single-view RGB images.
This paper tackles single-view 3D mesh reconstruction to study model generalization to unseen categories.
We propose an end-to-end two-stage network, GenMesh, to break the category boundaries in reconstruction.
arXiv Detail & Related papers (2022-08-04T14:13:35Z)
- PatchRD: Detail-Preserving Shape Completion by Learning Patch Retrieval and Deformation [59.70430570779819]
We introduce a data-driven shape completion approach that focuses on completing geometric details of missing regions of 3D shapes.
Our key insight is to copy and deform patches from the partial input to complete missing regions.
We leverage repeating patterns by retrieving patches from the partial input, and learn global structural priors by using a neural network to guide the retrieval and deformation steps.
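The copy-and-deform idea described above rests on a retrieval step: find, for each missing region, the most similar patch among those visible in the partial input. A hypothetical NumPy sketch of such a nearest-neighbour retrieval in feature space (PatchRD itself learns the retrieval and deformation with neural networks; names here are illustrative):

```python
import numpy as np

def retrieve_patches(missing_feats, source_patches):
    """For each missing-region descriptor, retrieve the most similar
    patch from the partial input (nearest neighbour by squared L2)."""
    # pairwise squared distances between queries and source patches: (M, S)
    d2 = ((missing_feats[:, None, :] - source_patches[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)                  # index of best source patch
    return source_patches[idx], idx

# Toy usage: the query descriptor is closest to the second source patch.
src = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
completed, idx = retrieve_patches(np.array([[0.9, 1.1]]), src)
```

In the full method a deformation step would then warp each retrieved patch to fit its target region; the retrieval above only picks which patch to copy.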
arXiv Detail & Related papers (2022-07-24T18:59:09Z)
- Denoise and Contrast for Category Agnostic Shape Completion [48.66519783934386]
We present a deep learning model that exploits the power of self-supervision to perform 3D point cloud completion.
A denoising pretext task provides the network with the needed local cues, decoupled from the high-level semantics.
Contrastive learning maximizes the agreement between variants of the same shape with different missing portions.
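The contrastive objective summarized here is commonly instantiated as an InfoNCE-style loss, which pulls together embeddings of two views of the same shape and pushes apart views of different shapes. A hedged NumPy sketch of that generic loss, not the paper's actual implementation:

```python
import numpy as np

def info_nce(z1, z2, temp=0.1):
    """InfoNCE loss over a batch: z1[i] and z2[i] are two views
    (e.g., different missing portions) of the same shape."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temp                # (N, N) cosine similarities
    labels = np.arange(len(z1))              # positives on the diagonal
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[labels, labels].mean()

# Perfectly aligned views give a near-zero loss; mismatched views do not.
loss_matched = info_nce(np.eye(4), np.eye(4))
loss_shuffled = info_nce(np.eye(4), np.roll(np.eye(4), 1, axis=0))
```

The temperature `temp` and the batch construction are hypothetical choices for illustration; the key property is that the loss decreases as same-shape variants agree.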
arXiv Detail & Related papers (2021-03-30T20:33:24Z)
- PatchNets: Patch-Based Generalizable Deep Implicit 3D Shape Representations [75.42959184226702]
We present a new mid-level patch-based surface representation for object-agnostic training.
We show several applications of our new representation, including shape and partial point cloud completion.
arXiv Detail & Related papers (2020-08-04T15:34:46Z)
- Fine-Grained 3D Shape Classification with Hierarchical Part-View Attentions [70.0171362989609]
We propose a novel fine-grained 3D shape classification method named FG3D-Net to capture the fine-grained local details of 3D shapes from multiple rendered views.
Our results on the fine-grained 3D shape dataset show that our method outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2020-05-26T06:53:19Z)