CloudWalker: Random walks for 3D point cloud shape analysis
- URL: http://arxiv.org/abs/2112.01050v4
- Date: Wed, 17 May 2023 07:02:03 GMT
- Title: CloudWalker: Random walks for 3D point cloud shape analysis
- Authors: Adi Mesika, Yizhak Ben-Shabat and Ayellet Tal
- Abstract summary: We propose CloudWalker, a novel method for learning 3D shapes using random walks.
Our approach achieves state-of-the-art results for two 3D shape analysis tasks: classification and retrieval.
- Score: 20.11028799145883
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Point clouds are gaining prominence as a method for representing 3D shapes,
but their irregular structure poses a challenge for deep learning methods. In
this paper, we propose CloudWalker, a novel method for learning 3D shapes using
random walks. Previous works attempt to adapt Convolutional Neural Networks
(CNNs) or impose a grid or mesh structure on 3D point clouds. This work
presents a different approach for representing and learning the shape from a
given point set. The key idea is to impose structure on the point set by
multiple random walks through the cloud for exploring different regions of the
3D object. Then we learn a per-point and per-walk representation and aggregate
multiple walk predictions at inference. Our approach achieves state-of-the-art
results for two 3D shape analysis tasks: classification and retrieval.
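To make the key idea above concrete, the following minimal sketch generates random walks over a point cloud by hopping between k-nearest neighbours, which is the kind of structure the paper imposes on an unordered point set. It is only an illustration under stated assumptions (the neighbour count, walk length, and helper names are ours), not the authors' released implementation; in a full pipeline, each walk's ordered coordinates would feed a per-point and per-walk encoder, and predictions from several walks would be aggregated at inference.

```python
import numpy as np
from scipy.spatial import cKDTree  # k-NN queries over the point set

def random_walks(points, num_walks=4, walk_len=32, k=8, seed=0):
    """Generate random walks over a point cloud by hopping between
    k-nearest neighbours, preferring unvisited points (illustrative only)."""
    rng = np.random.default_rng(seed)
    tree = cKDTree(points)
    # neighbour indices for every point (first column is the point itself)
    _, nbrs = tree.query(points, k=k + 1)
    walks = np.empty((num_walks, walk_len), dtype=np.int64)
    for w in range(num_walks):
        cur = rng.integers(len(points))              # random starting point
        visited = {cur}
        for t in range(walk_len):
            walks[w, t] = cur
            candidates = [j for j in nbrs[cur, 1:] if j not in visited]
            if not candidates:                       # dead end: allow revisits
                candidates = list(nbrs[cur, 1:])
            cur = int(rng.choice(candidates))
            visited.add(cur)
    return walks  # indices into `points`, one row per walk

# Usage: turn each walk into an ordered sequence of 3D coordinates that a
# sequence model (e.g. an RNN) can consume; at test time, per-walk class
# predictions would be averaged over several such walks.
pts = np.random.rand(1024, 3).astype(np.float32)     # stand-in point cloud
walk_xyz = pts[random_walks(pts)]                     # (num_walks, walk_len, 3)
```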
Related papers
- PointInverter: Point Cloud Reconstruction and Editing via a Generative Model with Shape Priors [25.569519066857705]
We propose a new method for mapping a 3D point cloud to the latent space of a 3D generative adversarial network.
Our method outperforms previous GAN inversion methods for 3D point clouds.
arXiv Detail & Related papers (2022-11-16T06:29:29Z)
- SNAKE: Shape-aware Neural 3D Keypoint Field [62.91169625183118]
Detecting 3D keypoints from point clouds is important for shape reconstruction.
This work investigates the dual question: can shape reconstruction benefit 3D keypoint detection?
We propose a novel unsupervised paradigm named SNAKE, which is short for shape-aware neural 3D keypoint field.
arXiv Detail & Related papers (2022-06-03T17:58:43Z)
- MetaSets: Meta-Learning on Point Sets for Generalizable Representations [100.5981809166658]
We study a new problem of 3D Domain Generalization (3DDG), with the goal of generalizing the model to unseen domains of point clouds without access to them during training.
We propose to tackle this problem via MetaSets, which meta-learns point cloud representations from a group of classification tasks on carefully-designed transformed point sets.
We design two benchmarks for Sim-to-Real transfer of 3D point clouds. Experimental results show that MetaSets outperforms existing 3D deep learning methods by large margins.
arXiv Detail & Related papers (2022-04-15T03:24:39Z)
- CrossPoint: Self-Supervised Cross-Modal Contrastive Learning for 3D Point Cloud Understanding [2.8661021832561757]
CrossPoint is a simple cross-modal contrastive learning approach to learn transferable 3D point cloud representations.
Our approach outperforms previous unsupervised learning methods on a diverse range of downstream tasks, including 3D object classification and segmentation (a minimal cross-modal contrastive-loss sketch appears after this list).
arXiv Detail & Related papers (2022-03-01T18:59:01Z)
- Voint Cloud: Multi-View Point Cloud Representation for 3D Understanding [80.04281842702294]
We introduce the concept of the multi-view point cloud (Voint cloud), representing each 3D point as a set of features extracted from several viewpoints.
This novel 3D Voint cloud representation combines the compactness of 3D point cloud representation with the natural view-awareness of multi-view representation.
We deploy a Voint neural network (VointNet) with a theoretically established functional form to learn representations in the Voint space.
arXiv Detail & Related papers (2021-11-30T13:08:19Z)
- Unsupervised Learning of Fine Structure Generation for 3D Point Clouds by 2D Projection Matching [66.98712589559028]
We propose an unsupervised approach for 3D point cloud generation with fine structures.
Our method can recover fine 3D structures from 2D silhouette images at different resolutions.
arXiv Detail & Related papers (2021-08-08T22:15:31Z)
- DeformerNet: A Deep Learning Approach to 3D Deformable Object Manipulation [5.733365759103406]
We propose a novel approach to 3D deformable object manipulation leveraging a deep neural network called DeformerNet.
We explicitly use 3D point clouds as the state representation and apply a Convolutional Neural Network to the point clouds to learn 3D features.
Once trained in an end-to-end fashion, DeformerNet directly maps the current point cloud of a deformable object, as well as a target point cloud shape, to the desired displacement in robot gripper position.
arXiv Detail & Related papers (2021-07-16T18:20:58Z)
- ParaNet: Deep Regular Representation for 3D Point Clouds [62.81379889095186]
ParaNet is a novel end-to-end deep learning framework for representing 3D point clouds.
It converts an irregular 3D point cloud into a regular 2D color image, named a point geometry image (PGI).
In contrast to conventional regular representation modalities based on multi-view projection and voxelization, the proposed representation is differentiable and reversible.
arXiv Detail & Related papers (2020-12-05T13:19:55Z)
- Weakly-supervised 3D Shape Completion in the Wild [91.04095516680438]
We address the problem of learning complete 3D shapes from unaligned, real-world partial point clouds.
We propose a weakly-supervised method to estimate both 3D canonical shape and 6-DoF pose for alignment, given multiple partial observations.
Experiments on both synthetic and real data show that it is feasible and promising to learn 3D shape completion through large-scale data without shape and pose supervision.
arXiv Detail & Related papers (2020-08-20T17:53:42Z)
- MeshWalker: Deep Mesh Understanding by Random Walks [19.594977587417247]
We look at the most popular representation of 3D shapes in computer graphics - a triangular mesh - and ask how it can be utilized within deep learning.
This paper proposes a very different approach, termed MeshWalker, to learn the shape directly from a given mesh.
We show that our approach achieves state-of-the-art results for two fundamental shape analysis tasks.
arXiv Detail & Related papers (2020-06-09T15:35:41Z)
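For the CrossPoint entry above, which relies on cross-modal contrastive learning, here is a minimal, hypothetical sketch of a symmetric InfoNCE-style objective between point-cloud and image embeddings of the same objects. The batch size, temperature, and function name are assumptions for illustration; it shows the general technique only, not the CrossPoint implementation.

```python
import numpy as np

def info_nce(z_points, z_images, temperature=0.07):
    """Symmetric InfoNCE loss between L2-normalised point-cloud and image
    embeddings of one batch (row i of each matrix is a positive pair)."""
    zp = z_points / np.linalg.norm(z_points, axis=1, keepdims=True)
    zi = z_images / np.linalg.norm(z_images, axis=1, keepdims=True)
    logits = zp @ zi.T / temperature          # (B, B) scaled cosine similarities
    labels = np.arange(len(zp))               # positives lie on the diagonal

    def xent(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # cross-entropy in both directions (points -> images and images -> points)
    return 0.5 * (xent(logits) + xent(logits.T))

# Usage with random stand-in embeddings (in practice these come from a
# point-cloud encoder and an image encoder projected to a shared space)
loss = info_nce(np.random.randn(16, 128), np.random.randn(16, 128))
```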
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.