Evaluating 3D Shape Analysis Methods for Robustness to Rotation Invariance
- URL: http://arxiv.org/abs/2305.18557v1
- Date: Mon, 29 May 2023 18:39:31 GMT
- Title: Evaluating 3D Shape Analysis Methods for Robustness to Rotation Invariance
- Authors: Supriya Gadi Patil, Angel X. Chang, Manolis Savva
- Abstract summary: This paper analyzes the robustness of recent 3D shape descriptors to SO(3) rotations.
We consider a database of 3D indoor scenes, where objects occur in different orientations.
- Score: 22.306775502181818
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper analyzes the robustness of recent 3D shape descriptors to SO(3)
rotations, something that is fundamental to shape modeling. Specifically, we
formulate the task of rotated 3D object instance detection. To do so, we
consider a database of 3D indoor scenes, where objects occur in different
orientations. We benchmark different methods for feature extraction and
classification in the context of this task. We systematically contrast
different design choices in a variety of experimental settings, investigating
how performance is affected by different rotation distributions, different
degrees of partial observation of the object, and different levels of
difficulty of negative pairs. Our study, on a synthetic dataset of 3D scenes
where object instances occur in different orientations, reveals that deep
learning-based rotation-invariant methods are effective in relatively easy
settings with easy-to-distinguish pairs. However, their performance drops
significantly when the difference in rotation between the objects in an input
pair is large, when the degree of observation of the input objects is reduced,
or when the difficulty level of the input pair is increased. Finally, we
connect the feature encodings used by rotation-invariant methods to the 3D
geometric properties that enable them to achieve rotation invariance.
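Below is a minimal sketch of the evaluation setting described above: apply a random SO(3) rotation to one object of a pair, extract a descriptor from each, and decide whether the pair shows the same instance by thresholding the descriptor distance. The histogram-of-pairwise-distances descriptor and the threshold are illustrative stand-ins for the learned descriptors and matching criteria benchmarked in the paper.

```python
# Illustrative sketch of rotated 3D object instance detection (not the paper's code).
# The descriptor here (a histogram of pairwise point distances, which is
# rotation-invariant by construction) stands in for the learned descriptors
# benchmarked in the paper.
import numpy as np
from scipy.spatial.transform import Rotation
from scipy.spatial.distance import pdist

def descriptor(points, bins=32, max_dist=2.0):
    """Rotation-invariant stand-in descriptor: normalized histogram of pairwise distances."""
    hist, _ = np.histogram(pdist(points), bins=bins, range=(0.0, max_dist), density=True)
    return hist

def same_instance(points_a, points_b, threshold=0.5):
    """Decide whether two (possibly rotated) point clouds show the same object instance."""
    diff = np.linalg.norm(descriptor(points_a) - descriptor(points_b))
    return diff < threshold  # the threshold is a hypothetical choice

rng = np.random.default_rng(0)
obj = rng.uniform(-0.5, 0.5, size=(1024, 3))                    # stand-in object point cloud
rotated = obj @ Rotation.random(random_state=0).as_matrix().T   # random SO(3) rotation
print(same_instance(obj, rotated))                              # True: descriptor is rotation-invariant
```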
Related papers
- PaRot: Patch-Wise Rotation-Invariant Network via Feature Disentanglement and Pose Restoration [16.75367717130046]
State-of-the-art models are not robust to rotations, which often remain unknown prior to real applications.
We introduce a novel Patch-wise Rotation-invariant network (PaRot).
Our disentanglement module extracts high-quality rotation-robust features, and the proposed lightweight model achieves competitive results.
arXiv Detail & Related papers (2023-02-06T02:13:51Z)
- Category-Level 6D Object Pose Estimation with Flexible Vector-Based Rotation Representation [51.67545893892129]
We propose a novel 3D graph convolution-based pipeline for category-level 6D pose and size estimation from monocular RGB-D images.
We first design an orientation-aware autoencoder with 3D graph convolution for latent feature learning.
Then, to efficiently decode the rotation information from the latent feature, we design a novel flexible vector-based decomposable rotation representation.
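As a rough illustration of the general idea of decoding a rotation from predicted vectors, the sketch below converts two 3D vectors into a valid rotation matrix using the widely used continuous 6D (two-vector) parameterization with Gram-Schmidt orthogonalization; this is a common vector-based representation, not necessarily the exact decomposition proposed in the paper.

```python
# Sketch of a common vector-based rotation representation: the continuous "6D"
# two-vector parameterization turned into a rotation matrix via Gram-Schmidt.
# Illustrative only; not necessarily the decomposition proposed in the paper.
import numpy as np

def vectors_to_rotation(a, b):
    """Map two predicted 3D vectors to a proper rotation matrix (Gram-Schmidt)."""
    x = a / np.linalg.norm(a)
    y = b - np.dot(x, b) * x          # remove the component of b along x
    y = y / np.linalg.norm(y)
    z = np.cross(x, y)                # right-handed third axis
    return np.stack([x, y, z], axis=1)

R = vectors_to_rotation(np.array([1.0, 0.2, 0.0]), np.array([0.0, 1.0, 0.3]))
print(np.allclose(R.T @ R, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))  # True True
```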
arXiv Detail & Related papers (2022-12-09T02:13:43Z)
- Equivariant Point Network for 3D Point Cloud Analysis [17.689949017410836]
We propose an effective and practical SE(3) (3D translation and rotation) equivariant network for point cloud analysis.
First, we present SE(3) separable point convolution, a novel framework that breaks down the 6D convolution into two separable convolutional operators.
Second, we introduce an attention layer to effectively harness the expressiveness of the equivariant features.
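The key property such networks preserve is equivariance: rotating the input permutes, rather than scrambles, the feature's rotation channels. The sketch below checks this numerically with a tiny discretized rotation group (the four 90-degree rotations about the z-axis); it illustrates the equivariance principle only, not the paper's SE(3) separable point convolution or attention layer.

```python
# Numerical check of rotation-group equivariance with a tiny discretized group
# (the four 90-degree rotations about the z-axis). Illustrates the equivariance
# property that SE(3)-equivariant point networks rely on; not the paper's method.
import numpy as np

def rot_z(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

GROUP = [rot_z(k * np.pi / 2) for k in range(4)]   # discretized rotation group about z

def group_feature(points):
    """One feature channel per group element: F[k] = h(g_k^{-1} applied to the points)."""
    def h(p):                                      # an arbitrary, non-invariant summary
        return float(np.sum(p[:, 0] * np.exp(p[:, 1])))
    return np.array([h(points @ g) for g in GROUP])  # points @ g applies g^{-1} to row vectors

rng = np.random.default_rng(1)
points = rng.normal(size=(256, 3))
m = 1                                              # rotate the input by GROUP[m]
rotated = points @ GROUP[m].T
# Equivariance: rotating the input cyclically shifts the group channels.
print(np.allclose(group_feature(rotated), np.roll(group_feature(points), m)))  # True
```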
arXiv Detail & Related papers (2021-03-25T21:57:10Z)
- FS-Net: Fast Shape-based Network for Category-Level 6D Object Pose Estimation with Decoupled Rotation Mechanism [49.89268018642999]
We propose a fast shape-based network (FS-Net) with efficient category-level feature extraction for 6D pose estimation.
The proposed method achieves state-of-the-art performance in both category- and instance-level 6D object pose estimation.
arXiv Detail & Related papers (2021-03-12T03:07:24Z)
- Adjoint Rigid Transform Network: Task-conditioned Alignment of 3D Shapes [86.2129580231191]
Adjoint Rigid Transform (ART) Network is a neural module that can be integrated with a variety of 3D networks.
ART learns to rotate input shapes to a learned canonical orientation, which is crucial for many tasks.
We will release our code and pre-trained models for further research.
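To make the canonicalization idea concrete, the sketch below rotates a point cloud into the frame of its principal axes, a classical non-learned stand-in for the canonical orientation that ART learns end-to-end.

```python
# Sketch of the canonicalization idea: rotate a shape into a canonical orientation so
# that downstream networks see a pose-normalized input. Classical PCA alignment is used
# here as a stand-in; ART instead learns the canonicalizing rotation end-to-end.
import numpy as np

def pca_canonicalize(points):
    """Rotate a point cloud into the frame of its principal axes (proper rotation, det=+1)."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    if np.linalg.det(vt) < 0:          # keep a proper rotation (determinant +1)
        vt[-1] *= -1
    return centered @ vt.T

rng = np.random.default_rng(2)
shape = rng.normal(size=(500, 3)) * np.array([3.0, 2.0, 1.0])   # anisotropic toy shape
canon = pca_canonicalize(shape)
# Note: PCA axes have sign/ordering ambiguities, one reason a learned canonicalization
# such as ART can be preferable.
print(np.round(np.cov(canon.T), 2))    # diagonal covariance (up to numerical error)
```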
arXiv Detail & Related papers (2021-02-01T20:58:45Z)
- PLUME: Efficient 3D Object Detection from Stereo Images [95.31278688164646]
Existing methods tackle the problem in two steps: first, depth estimation is performed and a pseudo-LiDAR point cloud representation is computed from the depth estimates; then, object detection is performed in 3D space.
We propose a model that unifies these two tasks in the same metric space.
Our approach achieves state-of-the-art performance on the challenging KITTI benchmark, with significantly reduced inference time compared with existing methods.
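The sketch below illustrates the pseudo-LiDAR conversion step mentioned above, back-projecting a depth map into a 3D point cloud using camera intrinsics; the intrinsics values are made up, and PLUME's point is to avoid treating this as a separate stage by working in a unified metric space.

```python
# Sketch of the pseudo-LiDAR conversion step: back-project a depth map into a 3D
# point cloud using the camera intrinsics. The intrinsics below are made-up values.
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth map (in meters) to an (H*W, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

depth = np.full((4, 6), 10.0)                          # toy 4x6 depth map, 10 m everywhere
points = depth_to_pseudo_lidar(depth, fx=720.0, fy=720.0, cx=3.0, cy=2.0)
print(points.shape)                                    # (24, 3)
```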
arXiv Detail & Related papers (2021-01-17T05:11:38Z)
- Rotation-Invariant Point Convolution With Multiple Equivariant Alignments [1.0152838128195467]
We show that using rotation-equivariant alignments, it is possible to make any convolutional layer rotation-invariant.
With this core layer, we design rotation-invariant architectures which improve state-of-the-art results in both object classification and semantic segmentation.
arXiv Detail & Related papers (2020-12-07T20:47:46Z)
- Rotation-Invariant Local-to-Global Representation Learning for 3D Point Cloud [42.86112554931754]
We propose a local-to-global representation learning algorithm for 3D point cloud data.
Our model takes advantage of multi-level abstraction based on graph convolutional neural networks.
The proposed algorithm achieves state-of-the-art performance on rotation-augmented 3D object recognition and segmentation benchmarks.
arXiv Detail & Related papers (2020-10-07T10:30:20Z)
- Reinforced Axial Refinement Network for Monocular 3D Object Detection [160.34246529816085]
Monocular 3D object detection aims to extract the 3D position and properties of objects from a 2D input image.
Conventional approaches sample 3D bounding boxes from the space and infer the relationship between the target object and each of them; however, the probability of effective samples is relatively small in 3D space.
We propose to start with an initial prediction and refine it gradually towards the ground truth, with only one 3D parameter changed in each step.
This requires designing a policy which gets a reward after several steps, and thus we adopt reinforcement learning to optimize it.
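The sketch below illustrates the one-parameter-per-step refinement idea with a greedy search over a toy score; the paper instead trains a reinforcement learning policy, and its reward is based on the detection objective rather than the hypothetical distance-based score used here.

```python
# Sketch of one-parameter-per-step refinement: starting from an initial 3D estimate,
# repeatedly adjust a single parameter and keep the change if it improves a score.
# The toy score (negative distance to a hypothetical ground-truth center) stands in
# for the paper's learned reward; the greedy search stands in for its RL policy.
import numpy as np

def score(box_center, gt_center):
    return -np.linalg.norm(box_center - gt_center)      # toy reward; a real one would use 3D IoU

def refine(initial, gt_center, step=0.1, iters=50):
    box = initial.copy()
    for _ in range(iters):
        best, best_score = box, score(box, gt_center)
        for axis in range(3):                            # change only one 3D parameter per step
            for delta in (-step, step):
                cand = box.copy()
                cand[axis] += delta
                if score(cand, gt_center) > best_score:
                    best, best_score = cand, score(cand, gt_center)
        box = best
    return box

initial = np.array([1.0, -0.5, 12.0])                    # hypothetical initial box center
gt = np.array([1.4, 0.1, 13.2])                          # hypothetical ground-truth center
print(np.round(refine(initial, gt), 2))                  # converges close to the ground truth
```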
arXiv Detail & Related papers (2020-08-31T17:10:48Z)
- An Analysis of SVD for Deep Rotation Estimation [63.97835949897361]
We present a theoretical analysis that shows SVD is the natural choice for projecting onto the rotation group.
Our analysis shows that simply replacing existing representations with the SVD orthogonalization procedure obtains state-of-the-art performance in many deep learning applications.
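A short sketch of the SVD orthogonalization referred to above: project an arbitrary 3x3 matrix, such as a raw network output, onto the rotation group SO(3), fixing the sign of the last singular direction so the result is a proper rotation.

```python
# Projection of an arbitrary 3x3 matrix onto SO(3) via SVD (the special orthogonal
# Procrustes solution): a raw 3x3 prediction M is mapped to the closest rotation
# in the Frobenius sense.
import numpy as np

def project_to_so3(m):
    u, _, vt = np.linalg.svd(m)
    d = np.sign(np.linalg.det(u @ vt))     # flip the last singular direction if needed
    return u @ np.diag([1.0, 1.0, d]) @ vt

rng = np.random.default_rng(3)
m = rng.normal(size=(3, 3))                # stand-in for a raw 9D network prediction
r = project_to_so3(m)
print(np.allclose(r.T @ r, np.eye(3)), np.isclose(np.linalg.det(r), 1.0))  # True True
```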
arXiv Detail & Related papers (2020-06-25T17:58:28Z)