Robust 3D Tracking with Quality-Aware Shape Completion
- URL: http://arxiv.org/abs/2312.10608v1
- Date: Sun, 17 Dec 2023 04:50:24 GMT
- Title: Robust 3D Tracking with Quality-Aware Shape Completion
- Authors: Jingwen Zhang, Zikun Zhou, Guangming Lu, Jiandong Tian, Wenjie Pei
- Abstract summary: We propose a synthetic target representation, composed of dense and complete point clouds that depict the target shape precisely via shape completion, for robust 3D tracking.
Specifically, we design a voxelized 3D tracking framework with shape completion, in which we propose a quality-aware shape completion mechanism to alleviate the adverse effect of noisy historical predictions.
- Score: 67.9748164949519
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D single object tracking remains a challenging problem due to the sparsity
and incompleteness of the point clouds. Existing algorithms attempt to address
these challenges via two strategies. The first is to learn dense geometric
features from the captured sparse point cloud. This is a formidable task,
however, since the learned dense geometric features carry high uncertainty in
depicting the shape of the target object. The other
strategy is to aggregate the sparse geometric features of multiple templates to
enrich the shape information, a routine solution in 2D tracking. However,
aggregating coarse shape representations can hardly yield a precise one. Unlike
2D pixels, 3D points from different frames can be fused directly via a
coordinate transform, i.e., shape completion.
Motivated by this, we propose to construct a synthetic target representation,
composed of dense and complete point clouds, that depicts the target shape
precisely through shape completion for robust 3D tracking. Specifically, we design a
voxelized 3D tracking framework with shape completion, in which we propose a
quality-aware shape completion mechanism to alleviate the adverse effect of
noisy historical predictions. It enables us to effectively construct and
leverage the synthetic target representation. In addition, we develop a
voxelized relation modeling module and a box refinement module to further improve
tracking performance. Favorable performance against state-of-the-art algorithms
on three benchmarks demonstrates the effectiveness and generalization ability
of our method.
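To make the fusion idea concrete, here is a minimal Python/NumPy sketch of constructing a synthetic target representation: points cropped by each historical predicted box are mapped into a shared canonical frame and aggregated, with low-quality predictions gated out. The input layout, the box-to-canonical convention, and the hard quality threshold are illustrative assumptions; the paper's quality-aware mechanism is learned rather than a fixed cutoff.

```python
import numpy as np

def box_to_canonical(points, center, heading):
    """Map world-frame points into a box-centric canonical frame:
    translate to the box center, then rotate by -heading about z."""
    c, s = np.cos(-heading), np.sin(-heading)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return (points - center) @ rot.T

def build_synthetic_target(frames, quality_threshold=0.5):
    """Fuse per-frame target points into one dense canonical point cloud.

    frames: list of dicts with (hypothetical) keys
        'points'  -- (N, 3) points inside the predicted box, world frame
        'center'  -- (3,) predicted box center
        'heading' -- predicted box yaw
        'quality' -- scalar confidence of that historical prediction
    """
    fused = []
    for f in frames:
        if f['quality'] < quality_threshold:
            continue  # suppress noisy historical predictions
        fused.append(box_to_canonical(f['points'], f['center'], f['heading']))
    return np.concatenate(fused, axis=0) if fused else np.empty((0, 3))
```

Because the points live in a shared metric space, a rigid transform is all that is needed to accumulate them into a denser, more complete shape; the quality gate is what keeps drifted predictions from corrupting it.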
Related papers
- Object-Centric Domain Randomization for 3D Shape Reconstruction in the Wild [22.82439286651921]
One of the biggest challenges in single-view 3D shape reconstruction in the wild is the scarcity of ⟨3D shape, 2D image⟩-paired data from real-world environments.
Inspired by the remarkable achievements of domain randomization, we propose ObjectDR, which synthesizes such paired data via random simulation of visual variations in object appearances and backgrounds.
arXiv Detail & Related papers (2024-03-21T16:40:10Z) - Dynamic 3D Point Cloud Sequences as 2D Videos [81.46246338686478]
3D point cloud sequences serve as one of the most common and practical representation modalities of real-world environments.
We propose a novel generic representation called Structured Point Cloud Videos (SPCVs).
SPCVs re-organize a point cloud sequence as a 2D video with spatial smoothness and temporal consistency, where the pixel values correspond to the 3D coordinates of points (a toy illustration of this layout appears after this list).
arXiv Detail & Related papers (2024-03-02T08:18:57Z) - Uncertainty-aware 3D Object-Level Mapping with Deep Shape Priors [15.34487368683311]
We propose a framework that can reconstruct high-quality object-level maps for unknown objects.
Our approach takes multiple RGB-D images as input and outputs dense 3D shapes and 9-DoF poses for detected objects.
We derive a probabilistic formulation that propagates shape and pose uncertainty through two novel loss functions.
arXiv Detail & Related papers (2023-09-17T00:48:19Z) - MS23D: A 3D Object Detection Method Using Multi-Scale Semantic Feature Points to Construct 3D Feature Layer [4.644319899528183]
LiDAR point clouds can effectively depict the motion and posture of objects in three-dimensional space.
In autonomous driving scenarios, the sparsity and hollowness of point clouds create some difficulties for voxel-based methods.
We propose a two-stage 3D object detection framework, called MS23D.
arXiv Detail & Related papers (2023-08-31T08:03:25Z) - LIST: Learning Implicitly from Spatial Transformers for Single-View 3D Reconstruction [5.107705550575662]
LIST is a novel neural architecture that leverages local and global image features to reconstruct the geometric and topological structure of a 3D object from a single image.
We demonstrate the superiority of our model over the state of the art in reconstructing 3D objects from both synthetic and real-world images.
arXiv Detail & Related papers (2023-07-23T01:01:27Z) - Beyond 3DMM: Learning to Capture High-fidelity 3D Face Shape [77.95154911528365]
3D Morphable Model (3DMM) fitting has widely benefited face analysis due to its strong 3D prior.
Previously reconstructed 3D faces suffer from degraded visual verisimilitude due to the loss of fine-grained geometry.
This paper proposes a complete solution to capture the personalized shape so that the reconstructed shape looks identical to the corresponding person.
arXiv Detail & Related papers (2022-04-09T03:46:18Z) - Cylindrical and Asymmetrical 3D Convolution Networks for LiDAR-based Perception [122.53774221136193]
State-of-the-art methods for driving-scene LiDAR-based perception often project the point clouds to 2D space and then process them via 2D convolution.
A natural remedy is to utilize the 3D voxelization and 3D convolution network.
We propose a new framework for outdoor LiDAR segmentation, in which cylindrical partition and asymmetrical 3D convolution networks are designed to explore the 3D geometric pattern.
arXiv Detail & Related papers (2021-09-12T06:25:11Z) - AutoShape: Real-Time Shape-Aware Monocular 3D Object Detection [15.244852122106634]
We propose an approach for incorporating shape-aware 2D/3D constraints into the 3D detection framework.
Specifically, we employ a deep neural network to learn distinctive 2D keypoints in the 2D image domain.
To generate the ground truth for the 2D/3D keypoints, an automatic model-fitting approach is proposed.
arXiv Detail & Related papers (2021-08-25T08:50:06Z) - Hard Example Generation by Texture Synthesis for Cross-domain Shape Similarity Learning [97.56893524594703]
Image-based 3D shape retrieval (IBSR) aims to find the corresponding 3D shape of a given 2D image from a large 3D shape database.
Metric learning with some adaptation techniques seems a natural solution to shape similarity learning.
We develop a geometry-focused multi-view metric learning framework empowered by texture synthesis.
arXiv Detail & Related papers (2020-10-23T08:52:00Z) - SSN: Shape Signature Networks for Multi-class Object Detection from Point Clouds [96.51884187479585]
We propose a novel 3D shape signature to explore the shape information from point clouds.
By incorporating symmetry, convex hull, and Chebyshev fitting operations, the proposed shape signature is not only compact and effective but also robust to noise.
Experiments show that the proposed method performs remarkably better than existing methods on two large-scale datasets.
arXiv Detail & Related papers (2020-04-06T16:01:41Z)
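As a companion to the SPCV entry above, the toy sketch below shows one way a point cloud frame can be laid out as a 2D image whose pixel values are 3D coordinates, here via a simple spherical projection. The function name, grid size, and projection are assumptions for illustration only; the actual SPCV construction learns a re-organization with spatial smoothness and temporal consistency rather than using a fixed projection.

```python
import numpy as np

def points_to_frame(points, height=64, width=512):
    """Scatter (N, 3) points into an (H, W, 3) coordinate image
    (a hypothetical stand-in for the learned SPCV layout)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    yaw = np.arctan2(y, x)                              # azimuth in [-pi, pi]
    r = np.linalg.norm(points, axis=1) + 1e-9
    pitch = np.arcsin(z / r)                            # elevation angle
    col = ((yaw + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    span = pitch.max() - pitch.min() + 1e-9
    row = ((pitch - pitch.min()) / span * (height - 1)).astype(int)
    frame = np.zeros((height, width, 3), dtype=points.dtype)
    frame[row, col] = points                            # pixel value = 3D coordinate
    return frame
```

Once in this form, a sequence of such frames can be processed with standard 2D video machinery, which is the appeal of the representation.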