Stable and Consistent Prediction of 3D Characteristic Orientation via
Invariant Residual Learning
- URL: http://arxiv.org/abs/2306.11406v1
- Date: Tue, 20 Jun 2023 09:29:03 GMT
- Title: Stable and Consistent Prediction of 3D Characteristic Orientation via
Invariant Residual Learning
- Authors: Seungwook Kim, Chunghyun Park, Yoonwoo Jeong, Jaesik Park, Minsu Cho
- Abstract summary: We introduce a novel method to decouple the shape geometry and semantics of the input point cloud to achieve both stability and consistency.
In experiments, the proposed method not only demonstrates superior stability and consistency but also exhibits state-of-the-art performance.
- Score: 42.44798841872727
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning to predict reliable characteristic orientations of 3D point clouds
is an important yet challenging problem, as different point clouds of the same
class may have largely varying appearances. In this work, we introduce a novel
method to decouple the shape geometry and semantics of the input point cloud to
achieve both stability and consistency. The proposed method integrates
shape-geometry-based SO(3)-equivariant learning and shape-semantics-based
SO(3)-invariant residual learning, where a final characteristic orientation is
obtained by calibrating an SO(3)-equivariant orientation hypothesis using an
SO(3)-invariant residual rotation. In experiments, the proposed method not only
demonstrates superior stability and consistency but also exhibits
state-of-the-art performances when applied to point cloud part segmentation,
given randomly rotated inputs.
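To make the calibration step in the abstract concrete, the following is a minimal sketch assuming NumPy. The two branch functions are toy stand-ins, a sign-disambiguated PCA frame for the SO(3)-equivariant orientation hypothesis and a pairwise-distance statistic for the SO(3)-invariant residual, rather than the learned networks proposed in the paper; only the final composition, applying the invariant residual rotation in the hypothesis frame, follows the structure described in the abstract.

```python
import numpy as np


def equivariant_hypothesis(points):
    """Toy SO(3)-equivariant orientation hypothesis: a sign-disambiguated PCA frame.

    Stand-in (assumption) for the paper's shape-geometry-based equivariant branch.
    """
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axes = vt.T                                        # columns are principal axes
    # Fix axis signs with third moments so the frame rotates with the input.
    moments = ((centered @ axes) ** 3).sum(axis=0)
    axes = axes * np.where(moments < 0.0, -1.0, 1.0)
    if np.linalg.det(axes) < 0:                        # keep a proper rotation (det = +1)
        axes[:, -1] *= -1
    return axes


def invariant_residual(points):
    """Toy SO(3)-invariant residual rotation derived only from pairwise distances.

    Stand-in (assumption) for the paper's shape-semantics-based invariant branch:
    its output does not change when the input cloud is rotated.
    """
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    angle = (d.mean() % 1.0) * 0.1                     # any rotation-invariant scalar works
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])


def characteristic_orientation(points):
    # Final orientation = equivariant hypothesis calibrated by the invariant residual.
    return equivariant_hypothesis(points) @ invariant_residual(points)


# Sanity check: rotating the input should rotate the predicted orientation accordingly.
rng = np.random.default_rng(0)
pts = rng.normal(size=(256, 3)) * np.array([3.0, 2.0, 1.0])   # anisotropic toy cloud
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
print(np.allclose(Rz @ characteristic_orientation(pts),
                  characteristic_orientation(pts @ Rz.T), atol=1e-6))  # expected: True
```

The closing check verifies that the composed orientation rotates with the input, which is the equivariance property the method relies on; the stability and consistency gains reported in the paper come from the learned branches and are not reproduced by these stand-ins.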
Related papers
- A Unified Theory of Stochastic Proximal Point Methods without Smoothness [52.30944052987393]
Proximal point methods have attracted considerable interest owing to their numerical stability and robustness against imperfect tuning.
This paper presents a comprehensive analysis of a broad range of variations of the stochastic proximal point method (SPPM).
arXiv Detail & Related papers (2024-05-24T21:09:19Z)
- Learning SO(3)-Invariant Semantic Correspondence via Local Shape Transform [62.27337227010514]
We introduce a novel self-supervised Rotation-Invariant 3D correspondence learner with Local Shape Transform, dubbed RIST.
RIST learns to establish dense correspondences between shapes even under challenging intra-class variations and arbitrary orientations.
RIST demonstrates state-of-the-art performance on 3D part label transfer and semantic keypoint transfer given arbitrarily rotated point cloud pairs.
arXiv Detail & Related papers (2024-04-17T08:09:25Z)
- Unsupervised diffeomorphic cardiac image registration using parameterization of the deformation field [6.343400988017304]
This study proposes an end-to-end unsupervised diffeomorphic deformable registration framework based on moving mesh parameterization.
The effectiveness of the algorithm is investigated by evaluating the proposed method on three different data sets including 2D and 3D cardiac MRI scans.
arXiv Detail & Related papers (2022-08-28T19:34:10Z)
- E2PN: Efficient SE(3)-Equivariant Point Network [12.520265159777255]
This paper proposes a convolution structure for learning SE(3)-equivariant features from 3D point clouds.
It can be viewed as an equivariant version of kernel point convolutions (KPConv), a widely used convolution form to process point cloud data.
arXiv Detail & Related papers (2022-06-11T02:15:46Z)
- Shape-Pose Disentanglement using SE(3)-equivariant Vector Neurons [59.83721247071963]
We introduce an unsupervised technique for encoding point clouds into a canonical shape representation, by disentangling shape and pose.
Our encoder is stable and consistent, meaning that the shape encoding is purely pose-invariant.
The extracted rotation and translation are able to semantically align different input shapes of the same class to a common canonical pose.
arXiv Detail & Related papers (2022-04-03T21:00:44Z)
- A Model for Multi-View Residual Covariances based on Perspective Deformation [88.21738020902411]
We derive a model for the covariance of the visual residuals in multi-view SfM, odometry and SLAM setups.
We validate our model with synthetic and real data and integrate it into photometric and feature-based Bundle Adjustment.
arXiv Detail & Related papers (2022-02-01T21:21:56Z)
- Fully Steerable 3D Spherical Neurons [14.86655504533083]
We propose a steerable feed-forward learning-based approach that consists of spherical decision surfaces and operates on point clouds.
Due to the inherent geometric 3D structure of our theory, we derive a 3D steerability constraint for its atomic parts.
We show how the model parameters are fully steerable at inference time.
arXiv Detail & Related papers (2021-06-02T16:30:02Z)
- SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural Implicit Shapes [117.76767853430243]
We introduce SNARF, which combines the advantages of linear blend skinning for polygonal meshes with neural implicit surfaces.
We propose a forward skinning model that finds all canonical correspondences of any deformed point using iterative root finding.
Compared to state-of-the-art neural implicit representations, our approach generalizes better to unseen poses while preserving accuracy.
arXiv Detail & Related papers (2021-04-08T17:54:59Z)
- Equivariant Point Network for 3D Point Cloud Analysis [17.689949017410836]
We propose an effective and practical SE(3) (3D translation and rotation) equivariant network for point cloud analysis.
First, we present SE(3) separable point convolution, a novel framework that breaks down the 6D convolution into two separable convolutional operators.
Second, we introduce an attention layer to effectively harness the expressiveness of the equivariant features.
arXiv Detail & Related papers (2021-03-25T21:57:10Z)
- Rotation-Invariant Point Convolution With Multiple Equivariant Alignments [1.0152838128195467]
We show that using rotation-equivariant alignments, it is possible to make any convolutional layer rotation-invariant.
With this core layer, we design rotation-invariant architectures which improve state-of-the-art results in both object classification and semantic segmentation; a minimal sketch of this alignment idea appears after this list.
arXiv Detail & Related papers (2020-12-07T20:47:46Z)
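Below is a minimal sketch of the alignment idea referenced in the last entry above: aligning a point cloud with a rotation-equivariant frame makes any subsequent layer rotation-invariant, because the rotation cancels during alignment. The frame construction (farthest-point and distance-weighted directions with Gram-Schmidt) and the downstream layer (a fixed random projection with tanh) are illustrative assumptions, not the learned multiple alignments and convolutions of the cited paper.

```python
import numpy as np


def equivariant_alignment(points):
    """Toy rotation-equivariant frame built from two direction cues of the cloud.

    Assumes the cues are non-degenerate (not parallel); the cited paper instead
    learns multiple equivariant alignments, which this sketch does not reproduce.
    """
    centered = points - points.mean(axis=0)
    dists = np.linalg.norm(centered, axis=1)
    v1 = centered[np.argmax(dists)]                    # farthest-point direction
    v2 = (centered * dists[:, None]).sum(axis=0)       # distance-weighted mean direction
    e1 = v1 / np.linalg.norm(v1)
    e2 = v2 - (v2 @ e1) * e1                           # Gram-Schmidt orthogonalization
    e2 = e2 / np.linalg.norm(e2)
    e3 = np.cross(e1, e2)
    return np.stack([e1, e2, e3], axis=1)              # columns form a rotation matrix


def invariant_features(points, weights):
    """Any function applied to the aligned coordinates is rotation-invariant."""
    aligned = (points - points.mean(axis=0)) @ equivariant_alignment(points)
    return np.tanh(aligned @ weights).sum(axis=0)      # stand-in for a conv/MLP layer


# Sanity check: the features do not change when the input cloud is rotated.
rng = np.random.default_rng(1)
pts = rng.normal(size=(128, 3))
W = rng.normal(size=(3, 8))
theta = 1.2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
print(np.allclose(invariant_features(pts, W),
                  invariant_features(pts @ Rz.T, W), atol=1e-6))  # expected: True
```

The same cancellation argument underlies the equivariant hypothesis used in the sketch above; here it is pushed one step further so that the output features, not just the frame, become invariant.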
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.