Learning to Orient Surfaces by Self-supervised Spherical CNNs
- URL: http://arxiv.org/abs/2011.03298v2
- Date: Fri, 13 Nov 2020 09:25:28 GMT
- Title: Learning to Orient Surfaces by Self-supervised Spherical CNNs
- Authors: Riccardo Spezialetti, Federico Stella, Marlon Marcon, Luciano Silva,
Samuele Salti, Luigi Di Stefano
- Abstract summary: Defining and reliably finding a canonical orientation for 3D surfaces is key to many Computer Vision and Robotics applications.
We show the feasibility of learning a robust canonical orientation for surfaces represented as point clouds.
Our method learns such feature maps from raw data by a self-supervised training procedure and robustly selects a rotation to transform the input point cloud into a learned canonical orientation.
- Score: 15.554429755106332
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Defining and reliably finding a canonical orientation for 3D surfaces is key
to many Computer Vision and Robotics applications. This task is commonly
addressed by handcrafted algorithms exploiting geometric cues deemed as
distinctive and robust by the designer. Yet, one might conjecture that humans
learn the notion of the inherent orientation of 3D objects from experience and
that machines may do so alike. In this work, we show the feasibility of
learning a robust canonical orientation for surfaces represented as point
clouds. Based on the observation that the quintessential property of a
canonical orientation is equivariance to 3D rotations, we propose to employ
Spherical CNNs, a recently introduced machinery that can learn equivariant
representations defined on the Special Orthogonal group SO(3). Specifically,
spherical correlations compute feature maps whose elements define 3D rotations.
Our method learns such feature maps from raw data by a self-supervised training
procedure and robustly selects a rotation to transform the input point cloud
into a learned canonical orientation. Thereby, we realize the first end-to-end
learning approach to define and extract the canonical orientation of 3D shapes,
which we aptly dub Compass. Experiments on several public datasets prove its
effectiveness at orienting local surface patches as well as whole objects.
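The abstract contrasts learned canonicalization with handcrafted geometric cues and notes that the defining property of a canonical orientation is consistency under 3D rotations of the input. As a minimal illustration of that property (a handcrafted PCA baseline, not the Compass method itself; all names are illustrative), the sketch below aligns a point cloud's principal axes with the coordinate axes and checks that the canonicalized cloud is unchanged when the input is rotated:

```python
import numpy as np

def canonicalize(points):
    """Handcrafted baseline: align a point cloud's principal axes
    with the coordinate axes (illustrative only, not Compass)."""
    centered = points - points.mean(axis=0)
    _, vecs = np.linalg.eigh(centered.T @ centered)   # ascending eigenvalues
    axes = vecs[:, ::-1]                              # descending variance
    # resolve the per-axis sign ambiguity via the skew of the projections
    for k in range(3):
        if np.sum((centered @ axes[:, k]) ** 3) < 0:
            axes[:, k] *= -1
    if np.linalg.det(axes) < 0:                       # keep a proper rotation
        axes[:, 2] *= -1
    return centered @ axes

# invariance check: rotating the input leaves the canonical pose unchanged
rng = np.random.default_rng(0)
cloud = rng.standard_normal((200, 3)) * np.array([3.0, 2.0, 1.0])
q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R = q if np.linalg.det(q) > 0 else -q                 # random rotation
assert np.allclose(canonicalize(cloud), canonicalize(cloud @ R.T), atol=1e-6)
```

Such handcrafted cues break down under noise, clutter, and near-symmetric shapes, which is the gap the learned, self-supervised approach of the paper aims to close.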
Related papers
- Learning SO(3)-Invariant Semantic Correspondence via Local Shape Transform [62.27337227010514]
We introduce a novel self-supervised Rotation-Invariant 3D correspondence learner with Local Shape Transform, dubbed RIST.
RIST learns to establish dense correspondences between shapes even under challenging intra-class variations and arbitrary orientations.
RIST demonstrates state-of-the-art performance on 3D part label transfer and semantic keypoint transfer given arbitrarily rotated point cloud pairs.
arXiv Detail & Related papers (2024-04-17T08:09:25Z)
- ParaPoint: Learning Global Free-Boundary Surface Parameterization of 3D Point Clouds [52.03819676074455]
ParaPoint is an unsupervised neural learning pipeline for achieving global free-boundary surface parameterization.
This work makes the first attempt to investigate neural point cloud parameterization that pursues both global mappings and free boundaries.
arXiv Detail & Related papers (2024-03-15T14:35:05Z)
- Self-supervised Learning of Rotation-invariant 3D Point Set Features using Transformer and its Self-distillation [3.1652399282742536]
This paper proposes a novel self-supervised learning framework for acquiring accurate and rotation-invariant 3D point set features at object-level.
We employ a self-attention mechanism to refine the tokens and aggregate them into an expressive rotation-invariant feature per 3D point set.
Our proposed algorithm learns rotation-invariant 3D point set features that are more accurate than those learned by existing algorithms.
arXiv Detail & Related papers (2023-08-09T06:03:07Z)
- Rotation-Invariant Random Features Provide a Strong Baseline for Machine Learning on 3D Point Clouds [10.166033101890227]
We propose a simple and general-purpose method for learning rotation-invariant functions of three-dimensional point cloud data.
We show through experiments that our method matches or outperforms the performance of general-purpose rotation-invariant neural networks.
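The claim above rests on descriptors that are rotation-invariant by construction. The sketch below is not the paper's random-feature construction; it is a minimal illustration (all names are ours) of the same invariance idea using a normalized histogram of pairwise point distances, which depends only on the shape's internal geometry:

```python
import numpy as np

def pairwise_distance_features(points, bins=16, r_max=8.0):
    """A simple rotation- and translation-invariant descriptor:
    a normalized histogram of pairwise point distances."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d = d[np.triu_indices(len(points), k=1)]          # unique pairs only
    hist, _ = np.histogram(d, bins=bins, range=(0.0, r_max))
    return hist / hist.sum()

rng = np.random.default_rng(1)
cloud = rng.standard_normal((100, 3))
q, _ = np.linalg.qr(rng.standard_normal((3, 3)))      # random orthogonal map
f1 = pairwise_distance_features(cloud)
f2 = pairwise_distance_features(cloud @ q.T)
assert np.allclose(f1, f2)                            # identical descriptors
```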
arXiv Detail & Related papers (2023-07-27T20:18:11Z)
- SNAKE: Shape-aware Neural 3D Keypoint Field [62.91169625183118]
Detecting 3D keypoints from point clouds is important for shape reconstruction.
This work investigates the dual question: can shape reconstruction benefit 3D keypoint detection?
We propose a novel unsupervised paradigm named SNAKE, which is short for shape-aware neural 3D keypoint field.
arXiv Detail & Related papers (2022-06-03T17:58:43Z)
- ConDor: Self-Supervised Canonicalization of 3D Pose for Partial Shapes [55.689763519293464]
ConDor is a self-supervised method that learns to canonicalize the 3D orientation and position for full and partial 3D point clouds.
During inference, our method takes an unseen full or partial 3D point cloud at an arbitrary pose and outputs an equivariant canonical pose.
arXiv Detail & Related papers (2022-01-19T18:57:21Z)
- Deep regression on manifolds: a 3D rotation case study [0.0]
We study the properties that a differentiable function mapping arbitrary inputs of a Euclidean space onto this manifold should satisfy to allow proper training.
We compare various differentiable mappings on the 3D rotation space, and conjecture about the importance of the local linearity of the mapping.
We notably show that a mapping based on Procrustes orthonormalization of a 3x3 matrix generally performs best among the ones considered.
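The Procrustes orthonormalization mentioned above has a closed form via the SVD: the nearest rotation to an arbitrary 3x3 matrix in the Frobenius norm is obtained by replacing its singular values with ones and correcting the sign of the determinant. A minimal sketch (names are ours):

```python
import numpy as np

def procrustes_to_so3(m):
    """Project an arbitrary 3x3 matrix onto the nearest rotation matrix
    (Frobenius norm): the special orthogonal Procrustes solution."""
    u, _, vt = np.linalg.svd(m)
    d = np.sign(np.linalg.det(u @ vt))    # flip last axis to enforce det = +1
    return u @ np.diag([1.0, 1.0, d]) @ vt

m = np.random.default_rng(2).standard_normal((3, 3))  # e.g. a raw 9-D output
R = procrustes_to_so3(m)
assert np.allclose(R @ R.T, np.eye(3), atol=1e-8)     # orthogonal
assert np.isclose(np.linalg.det(R), 1.0)              # proper rotation
```

This mapping from a 9-dimensional Euclidean output onto SO(3) is smooth almost everywhere, which is consistent with the paper's observation about the importance of local linearity of the mapping.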
arXiv Detail & Related papers (2021-03-30T13:07:36Z)
- Concentric Spherical GNN for 3D Representation Learning [53.45704095146161]
We propose a novel multi-resolution convolutional architecture for learning over concentric spherical feature maps.
Our hierarchical architecture is based on alternatively learning to incorporate both intra-sphere and inter-sphere information.
We demonstrate the effectiveness of our approach in improving state-of-the-art performance on 3D classification tasks with rotated data.
arXiv Detail & Related papers (2021-03-18T19:05:04Z)
- Adjoint Rigid Transform Network: Task-conditioned Alignment of 3D Shapes [86.2129580231191]
Adjoint Rigid Transform (ART) Network is a neural module which can be integrated with a variety of 3D networks.
ART learns to rotate input shapes to a learned canonical orientation, which is crucial for many downstream tasks.
We will release our code and pre-trained models for further research.
arXiv Detail & Related papers (2021-02-01T20:58:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.