Generalizable Local Feature Pre-training for Deformable Shape Analysis
- URL: http://arxiv.org/abs/2303.15104v1
- Date: Mon, 27 Mar 2023 11:13:46 GMT
- Title: Generalizable Local Feature Pre-training for Deformable Shape Analysis
- Authors: Souhaib Attaiki and Lei Li and Maks Ovsjanikov
- Abstract summary: Transfer learning is fundamental for addressing problems in settings with little training data.
We analyze the link between feature locality and transferability in tasks involving deformable 3D objects.
We propose a differentiable method for optimizing the receptive field within 3D transfer learning.
- Score: 36.44119664239748
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Transfer learning is fundamental for addressing problems in settings with
little training data. While several transfer learning approaches have been
proposed in 3D, unfortunately, these solutions typically operate on entire
3D objects or even at the scene level and thus, as we show, fail to generalize to new
classes, such as deformable organic shapes. In addition, there is currently a
lack of understanding of what makes pre-trained features transferable across
significantly different 3D shape categories. In this paper, we take a step
toward addressing these challenges. First, we analyze the link between feature
locality and transferability in tasks involving deformable 3D objects, while
also comparing different backbones and losses for local feature pre-training.
We observe that with proper training, learned features can be useful in such
tasks, but, crucially, only with an appropriate choice of the receptive field
size. We then propose a differentiable method for optimizing the receptive
field within 3D transfer learning. Jointly, this leads to the first learnable
features that can successfully generalize to unseen classes of 3D shapes such
as humans and animals. Our extensive experiments show that this approach leads
to state-of-the-art results on several downstream tasks such as segmentation,
shape correspondence, and classification. Our code is available at
https://github.com/pvnieo/vader.
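The abstract's key claim is that transferability of local 3D features hinges on the receptive-field size, and that this size can itself be optimized differentiably. Below is a minimal, hypothetical PyTorch sketch of that general idea only (not the authors' actual implementation): a learnable radius controls a soft Gaussian neighbourhood over which per-point features are aggregated, so gradients from a downstream loss can shrink or grow the effective receptive field. The class and variable names (SoftReceptiveField, sigma, etc.) are illustrative assumptions.

```python
# Minimal sketch, assuming a per-point feature backbone already exists.
# A learnable radius `sigma` defines a soft Gaussian neighbourhood around each
# query point; gradients from the downstream loss can adjust how local the
# aggregated features are.
import torch
import torch.nn as nn


class SoftReceptiveField(nn.Module):
    def __init__(self, init_radius: float = 0.1):
        super().__init__()
        # Parameterize the radius in log-space so it stays positive.
        self.log_sigma = nn.Parameter(torch.log(torch.tensor(init_radius)))

    def forward(self, points: torch.Tensor, features: torch.Tensor) -> torch.Tensor:
        """
        points:   (N, 3) point coordinates of one shape
        features: (N, C) per-point features from some local backbone
        returns:  (N, C) features aggregated over a soft, learnable neighbourhood
        """
        sigma = self.log_sigma.exp()
        # Pairwise squared distances between all points: (N, N).
        d2 = torch.cdist(points, points).pow(2)
        # Gaussian weights: nearby points contribute more; `sigma` sets the
        # effective receptive-field size and receives gradients through here.
        w = torch.softmax(-d2 / (2.0 * sigma**2), dim=-1)
        # Weighted average of neighbour features.
        return w @ features


if __name__ == "__main__":
    pts = torch.rand(1024, 3)
    feats = torch.rand(1024, 32)
    layer = SoftReceptiveField(init_radius=0.05)
    out = layer(pts, feats)
    out.sum().backward()
    print(out.shape, layer.log_sigma.grad)  # the radius receives a gradient
```

In this sketch the receptive field is a single global radius; the paper's setting may differ, but the example illustrates how a neighbourhood size can be made an end-to-end trainable quantity rather than a fixed hyperparameter.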
Related papers
- ShapeShifter: 3D Variations Using Multiscale and Sparse Point-Voxel Diffusion [19.30740914413954]
This paper proposes ShapeShifter, a new 3D generative model that learns to synthesize shape variations based on a single reference model.
We show that our resulting variations better capture the fine details of their original input and can handle more general types of surfaces than previous SDF-based methods.
arXiv Detail & Related papers (2025-02-04T10:02:40Z)
- Enhancing Generalizability of Representation Learning for Data-Efficient 3D Scene Understanding [50.448520056844885]
We propose a generative Bayesian network to produce diverse synthetic scenes with real-world patterns.
A series of experiments consistently demonstrates our method's superiority over existing state-of-the-art pre-training approaches.
arXiv Detail & Related papers (2024-06-17T07:43:53Z)
- PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm [114.47216525866435]
We introduce a novel universal 3D pre-training framework designed to facilitate the learning of efficient 3D representations.
For the first time, PonderV2 achieves state-of-the-art performance on 11 indoor and outdoor benchmarks, demonstrating its effectiveness.
arXiv Detail & Related papers (2023-10-12T17:59:57Z)
- 3D Adversarial Augmentations for Robust Out-of-Domain Predictions [115.74319739738571]
We focus on improving the generalization to out-of-domain data.
We learn a set of vectors that deform the objects in an adversarial fashion.
We perform adversarial augmentation by applying the learned sample-independent vectors to the available objects when training a model.
arXiv Detail & Related papers (2023-08-29T17:58:55Z)
- Meta-Learning 3D Shape Segmentation Functions [16.119694625781992]
We introduce an auxiliary deep neural network as a meta-learner which takes as input a 3D shape and predicts the prior over the respective 3D segmentation function space.
We show in experiments that our meta-learning approach, denoted as Meta-3DSeg, leads to improvements on unsupervised 3D shape segmentation.
arXiv Detail & Related papers (2021-10-08T01:50:54Z)
- Point Discriminative Learning for Unsupervised Representation Learning on 3D Point Clouds [54.31515001741987]
We propose a point discriminative learning method for unsupervised representation learning on 3D point clouds.
We achieve this by imposing a novel point discrimination loss on the middle level and global level point features.
Our method learns powerful representations and achieves new state-of-the-art performance.
arXiv Detail & Related papers (2021-08-04T15:11:48Z)
- Learning Compositional Shape Priors for Few-Shot 3D Reconstruction [36.40776735291117]
We show that complex encoder-decoder architectures exploit large amounts of per-category data.
We propose three ways to learn a class-specific global shape prior, directly from data.
Experiments on the popular ShapeNet dataset show that our method outperforms a zero-shot baseline by over 40%.
arXiv Detail & Related papers (2021-06-11T14:55:49Z)
- Adjoint Rigid Transform Network: Task-conditioned Alignment of 3D Shapes [86.2129580231191]
Adjoint Rigid Transform (ART) Network is a neural module which can be integrated with a variety of 3D networks.
ART learns to rotate input shapes to a learned canonical orientation, which is crucial for many tasks.
We will release our code and pre-trained models for further research.
arXiv Detail & Related papers (2021-02-01T20:58:45Z)
- Implicit Functions in Feature Space for 3D Shape Reconstruction and Completion [53.885984328273686]
Implicit Feature Networks (IF-Nets) deliver continuous outputs, can handle multiple topologies, and complete shapes for missing or sparse input data.
IF-Nets clearly outperform prior work in 3D object reconstruction in ShapeNet, and obtain significantly more accurate 3D human reconstructions.
arXiv Detail & Related papers (2020-03-03T11:14:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.