SurFit: Learning to Fit Surfaces Improves Few Shot Learning on Point
Clouds
- URL: http://arxiv.org/abs/2112.13942v1
- Date: Mon, 27 Dec 2021 23:55:36 GMT
- Title: SurFit: Learning to Fit Surfaces Improves Few Shot Learning on Point
Clouds
- Authors: Gopal Sharma and Bidya Dash and Matheus Gadelha and Aruni RoyChowdhury
and Marios Loizou and Evangelos Kalogerakis and Liangliang Cao and Erik
Learned-Miller and Rui Wang and Subhransu Maji
- Abstract summary: SurFit is a simple approach for label efficient learning of 3D shape segmentation networks.
It is based on a self-supervised task of decomposing the surface of a 3D shape into geometric primitives.
- Score: 48.61222927399794
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present SurFit, a simple approach for label efficient learning of 3D shape
segmentation networks. SurFit is based on a self-supervised task of decomposing
the surface of a 3D shape into geometric primitives. It can be readily applied
to existing network architectures for 3D shape segmentation and improves their
performance in the few-shot setting, as we demonstrate in the widely used
ShapeNet and PartNet benchmarks. SurFit outperforms the prior state-of-the-art
in this setting, suggesting that decomposability into primitives is a useful
prior for learning representations predictive of semantic parts. We present a
number of experiments varying the choice of geometric primitives and downstream
tasks to demonstrate the effectiveness of the method.
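As a minimal illustration of the primitive-fitting idea behind SurFit (a sketch only, not the authors' implementation, which supports richer primitives than planes), one can fit a plane to a local point patch and use the fitting residual as a self-supervised signal:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit via SVD; returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value is the normal.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def plane_residual(points):
    """Mean absolute point-to-plane distance: a simple primitive-fitting loss."""
    centroid, normal = fit_plane(points)
    return np.abs((points - centroid) @ normal).mean()

# A patch sampled from the plane z = 0 fits almost perfectly.
rng = np.random.default_rng(0)
patch = np.c_[rng.uniform(-1, 1, (100, 2)), np.zeros(100)]
print(round(plane_residual(patch), 6))  # → 0.0
```

In the self-supervised setting, a network predicts the decomposition and the residual drives learning; no semantic labels are needed.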
Related papers
- Attention-based Part Assembly for 3D Volumetric Shape Modeling [0.0]
We propose a VoxAttention network architecture for attention-based part assembly.
Experimental results show that our method outperforms most state-of-the-art methods for the part relation-aware 3D shape modeling task.
arXiv Detail & Related papers (2023-04-17T16:53:27Z) - ALSO: Automotive Lidar Self-supervision by Occupancy estimation [70.70557577874155]
We propose a new self-supervised method for pre-training the backbone of deep perception models operating on point clouds.
The core idea is to train the model on a pretext task which is the reconstruction of the surface on which the 3D points are sampled.
The intuition is that if the network is able to reconstruct the scene surface, given only sparse input points, then it probably also captures some fragments of semantic information.
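The occupancy pretext task described above can be illustrated with a small sketch (an assumption about the general recipe, not the ALSO authors' code): each lidar ray yields free-space samples between the sensor and the returned point, plus a surface sample at the hit itself.

```python
import numpy as np

def ray_occupancy_labels(sensor, hit, n_samples=4):
    """Self-supervised occupancy labels along one lidar ray:
    samples strictly between sensor and hit are free space (0),
    the hit point itself lies on the surface (1)."""
    ts = np.linspace(0.0, 1.0, n_samples + 2)[1:-1]  # interior parameters
    free = sensor + ts[:, None] * (hit - sensor)
    samples = np.vstack([free, hit])
    labels = np.concatenate([np.zeros(len(free)), np.ones(1)])
    return samples, labels

sensor = np.zeros(3)
hit = np.array([10.0, 0.0, 0.0])
samples, labels = ray_occupancy_labels(sensor, hit)
print(labels.tolist())  # → [0.0, 0.0, 0.0, 0.0, 1.0]
```

A backbone trained to predict these labels from the sparse point cloud must implicitly reason about where surfaces are, which is the source of the transferable features.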
arXiv Detail & Related papers (2022-12-12T13:10:19Z) - Cut-and-Approximate: 3D Shape Reconstruction from Planar Cross-sections
with Deep Reinforcement Learning [0.0]
We present, to the best of our knowledge, the first 3D shape reconstruction network to solve this task.
Our method is based on applying a Reinforcement Learning algorithm to learn how to effectively parse the shape.
arXiv Detail & Related papers (2022-10-22T17:48:12Z) - Pixel2Mesh++: 3D Mesh Generation and Refinement from Multi-View Images [82.32776379815712]
We study the problem of shape generation in 3D mesh representation from a small number of color images with or without camera poses.
We further improve the shape quality by leveraging cross-view information with a graph convolution network.
Our model is robust to the quality of the initial mesh and the error of camera pose, and can be combined with a differentiable function for test-time optimization.
arXiv Detail & Related papers (2022-04-21T03:42:31Z) - iSeg3D: An Interactive 3D Shape Segmentation Tool [48.784624011210475]
We propose an effective annotation tool, named iSeg, for 3D shapes.
We observe that most objects can be considered as compositions of a finite set of primitive shapes.
We train the iSeg3D model on our primitive-composed shape data to learn geometric prior knowledge in a self-supervised manner.
arXiv Detail & Related papers (2021-12-24T08:15:52Z) - Learning Compositional Shape Priors for Few-Shot 3D Reconstruction [36.40776735291117]
We show that complex encoder-decoder architectures exploit large amounts of per-category data.
We propose three ways to learn a class-specific global shape prior, directly from data.
Experiments on the popular ShapeNet dataset show that our method outperforms a zero-shot baseline by over 40%.
arXiv Detail & Related papers (2021-06-11T14:55:49Z) - Deep Implicit Moving Least-Squares Functions for 3D Reconstruction [23.8586965588835]
In this work, we turn the discrete point sets into smooth surfaces by introducing the well-known implicit moving least-squares (IMLS) surface formulation.
We incorporate IMLS surface generation into deep neural networks for inheriting both the flexibility of point sets and the high quality of implicit surfaces.
Our experiments on 3D object reconstruction demonstrate that IMLSNets outperform state-of-the-art learning-based methods in terms of reconstruction quality and computational efficiency.
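The well-known IMLS formulation referenced above has a simple closed form: the implicit value at a query point is a weighted average of signed distances to the tangent planes of nearby oriented points, and its zero level set approximates the surface. A minimal evaluation sketch (Gaussian weights; `sigma` is an illustrative bandwidth, not a value from the paper):

```python
import numpy as np

def imls(query, points, normals, sigma=0.5):
    """Evaluate the implicit moving least-squares (IMLS) function at `query`."""
    diff = query - points                      # (N, 3) offsets to each sample
    w = np.exp(-np.sum(diff**2, axis=1) / sigma**2)  # Gaussian weights
    signed = np.sum(diff * normals, axis=1)    # signed point-to-plane distances
    return np.sum(w * signed) / np.sum(w)

# Oriented samples on the plane z = 0, all normals pointing up:
rng = np.random.default_rng(1)
pts = np.c_[rng.uniform(-1, 1, (200, 2)), np.zeros(200)]
nrm = np.tile([0.0, 0.0, 1.0], (200, 1))
print(round(imls(np.array([0.0, 0.0, 0.3]), pts, nrm), 6))  # → 0.3
```

For this planar example the signed distance to every tangent plane is exactly the query height, so the weighted average recovers it; IMLSNet learns the point positions and normals that feed this formulation.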
arXiv Detail & Related papers (2021-03-23T02:26:07Z) - Primal-Dual Mesh Convolutional Neural Networks [62.165239866312334]
We propose a primal-dual framework, drawn from the graph-neural-network literature, for triangle meshes.
Our method takes features for both edges and faces of a 3D mesh as input and dynamically aggregates them.
We provide theoretical insights into our approach using tools from the mesh-simplification literature.
arXiv Detail & Related papers (2020-10-23T14:49:02Z) - Pix2Surf: Learning Parametric 3D Surface Models of Objects from Images [64.53227129573293]
We investigate the problem of learning to generate 3D parametric surface representations for novel object instances, as seen from one or more views.
We design neural networks capable of generating high-quality parametric 3D surfaces which are consistent between views.
Our method is supervised and trained on a public dataset of shapes from common object categories.
arXiv Detail & Related papers (2020-08-18T06:33:40Z) - Unsupervised 3D Learning for Shape Analysis via Multiresolution Instance
Discrimination [27.976848222058187]
We propose an unsupervised method for learning a generic and efficient shape encoding network for different shape analysis tasks.
We adapt HR-Net to octree-based convolutional neural networks for jointly encoding shape and point features.
Our method achieves competitive performance to supervised methods, especially in tasks with a small labeled dataset.
arXiv Detail & Related papers (2020-08-03T17:58:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.