Simultaneous Multiple Object Detection and Pose Estimation using 3D
Model Infusion with Monocular Vision
- URL: http://arxiv.org/abs/2211.11188v2
- Date: Tue, 22 Nov 2022 02:38:10 GMT
- Title: Simultaneous Multiple Object Detection and Pose Estimation using 3D
Model Infusion with Monocular Vision
- Authors: Congliang Li, Shijie Sun, Xiangyu Song, Huansheng Song, Naveed Akhtar
and Ajmal Saeed Mian
- Abstract summary: Multiple object detection and pose estimation are vital computer vision tasks.
We propose simultaneous neural modeling of both using monocular vision and 3D model infusion.
Our Simultaneous Multiple Object detection and Pose Estimation network (SMOPE-Net) is an end-to-end trainable multitasking network.
- Score: 21.710141497071373
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Multiple object detection and pose estimation are vital computer vision
tasks. The latter relates to the former as a downstream problem in applications
such as robotics and autonomous driving. However, due to the high complexity of
both tasks, existing methods generally treat them independently, which is
sub-optimal. We propose simultaneous neural modeling of both using monocular
vision and 3D model infusion. Our Simultaneous Multiple Object detection and
Pose Estimation network (SMOPE-Net) is an end-to-end trainable multitasking
network with a composite loss that also provides the advantages of anchor-free
detections for efficient downstream pose estimation. To enable the annotation
of training data for our learning objective, we develop a Twin-Space object
labeling method and demonstrate its correctness analytically and empirically.
Using the labeling method, we provide the KITTI-6DoF dataset with $\sim7.5$K
annotated frames. Extensive experiments on KITTI-6DoF and the popular LineMod
datasets show a consistent performance gain with SMOPE-Net over existing pose
estimation methods. Here are links to our proposed SMOPE-Net, KITTI-6DoF
dataset, and LabelImg3D labeling tool.
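The composite loss described above can be sketched as a weighted sum of a detection term and a pose term. The function below is only an illustrative stand-in: the names, the L1/L2 split, and the weights are assumptions, not the paper's exact formulation.

```python
import numpy as np

def composite_loss(det_pred, det_gt, pose_pred, pose_gt, w_det=1.0, w_pose=1.0):
    """Weighted sum of a detection term and a 6-DoF pose term.

    det_*  : (N, 4) box regression parameters
    pose_* : (N, 6) translation + rotation parameters
    The L1/L2 choice here is illustrative, not SMOPE-Net's actual loss.
    """
    det_term = np.abs(det_pred - det_gt).mean()       # L1 on box regression
    pose_term = ((pose_pred - pose_gt) ** 2).mean()   # L2 on pose parameters
    return w_det * det_term + w_pose * pose_term
```

Training both heads against one scalar objective is what makes the network end-to-end trainable; the anchor-free detections feed the pose branch directly.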
Related papers
- Towards Unified 3D Object Detection via Algorithm and Data Unification [70.27631528933482]
We build the first unified multi-modal 3D object detection benchmark MM-Omni3D and extend the aforementioned monocular detector to its multi-modal version.
We name the designed monocular and multi-modal detectors as UniMODE and MM-UniMODE, respectively.
arXiv Detail & Related papers (2024-02-28T18:59:31Z)
- Leveraging Large-Scale Pretrained Vision Foundation Models for Label-Efficient 3D Point Cloud Segmentation [67.07112533415116]
We present a novel framework that adapts various foundational models for the 3D point cloud segmentation task.
Our approach involves making initial predictions of 2D semantic masks using different large vision models.
To generate robust 3D semantic pseudo labels, we introduce a semantic label fusion strategy that effectively combines all the results via voting.
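The voting-based fusion can be sketched as a per-point majority vote over the labels produced by the different vision models. The helper below is a minimal illustration; `fuse_labels` and its plain-list inputs are hypothetical, and the paper's actual strategy may weight or filter votes differently.

```python
from collections import Counter

def fuse_labels(predictions):
    """Fuse per-point semantic labels from several models by majority vote.

    predictions: list of label lists, one list per model, aligned per point.
    Returns one fused label per point.
    """
    fused = []
    for votes in zip(*predictions):  # one tuple of candidate labels per point
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused
```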
arXiv Detail & Related papers (2023-11-03T15:41:15Z)
- 3DMODT: Attention-Guided Affinities for Joint Detection & Tracking in 3D Point Clouds [95.54285993019843]
We propose a method for joint detection and tracking of multiple objects in 3D point clouds.
Our model exploits temporal information, employing multiple frames to detect objects and track them in a single network.
arXiv Detail & Related papers (2022-11-01T20:59:38Z)
- The Devil is in the Task: Exploiting Reciprocal Appearance-Localization Features for Monocular 3D Object Detection [62.1185839286255]
Low-cost monocular 3D object detection plays a fundamental role in autonomous driving.
We introduce a Dynamic Feature Reflecting Network, named DFR-Net.
We rank 1st among all the monocular 3D object detectors in the KITTI test set.
arXiv Detail & Related papers (2021-12-28T07:31:18Z)
- Learnable Online Graph Representations for 3D Multi-Object Tracking [156.58876381318402]
We propose a unified, learning-based approach to the 3D MOT problem.
We employ a Neural Message Passing network for data association that is fully trainable.
We show the merit of the proposed approach on the publicly available nuScenes dataset by achieving state-of-the-art performance of 65.6% AMOTA and 58% fewer ID-switches.
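The data-association step can be illustrated with a greedy matching over a track-by-detection affinity matrix. In the paper the affinities come from the trainable message-passing network, which is not reproduced here; all names below are illustrative.

```python
import numpy as np

def greedy_associate(affinity, threshold=0.5):
    """Greedily match tracks to detections from an affinity matrix.

    affinity: (num_tracks, num_detections) scores, e.g. produced by a
    learned association network. Returns (track, detection) index pairs.
    """
    affinity = affinity.copy()
    matches = []
    while True:
        t, d = np.unravel_index(np.argmax(affinity), affinity.shape)
        if affinity[t, d] < threshold:
            break                    # no confident pairs remain
        matches.append((int(t), int(d)))
        affinity[t, :] = -np.inf     # consume the matched track
        affinity[:, d] = -np.inf     # consume the matched detection
    return matches
```

A fully trainable pipeline would replace this greedy step with a differentiable assignment, but the matrix-consumption logic is the same idea.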
arXiv Detail & Related papers (2021-04-23T17:59:28Z)
- Self-supervised Learning of 3D Object Understanding by Data Association and Landmark Estimation for Image Sequence [15.815583594196488]
3D object understanding from a 2D image is a challenging task that infers an additional dimension from reduced-dimensional information.
It is challenging to obtain a large amount of 3D data, since 3D annotation is expensive and time-consuming.
We propose a strategy that exploits multiple observations of the object in the image sequence in order to surpass single-observation performance.
arXiv Detail & Related papers (2021-04-14T18:59:08Z)
- SA-Det3D: Self-Attention Based Context-Aware 3D Object Detection [9.924083358178239]
We propose two variants of self-attention for contextual modeling in 3D object detection.
We first incorporate the pairwise self-attention mechanism into the current state-of-the-art BEV, voxel and point-based detectors.
Next, we propose a self-attention variant that samples a subset of the most representative features by learning deformations over randomly sampled locations.
arXiv Detail & Related papers (2021-01-07T18:30:32Z)
- End-to-End 3D Multi-Object Tracking and Trajectory Forecasting [34.68114553744956]
We propose a unified solution for 3D MOT and trajectory forecasting.
We employ a feature interaction technique by introducing Graph Neural Networks.
We also use a diversity sampling function to improve the quality and diversity of our forecasted trajectories.
arXiv Detail & Related papers (2020-08-25T16:54:46Z)
- SESS: Self-Ensembling Semi-Supervised 3D Object Detection [138.80825169240302]
We propose SESS, a self-ensembling semi-supervised 3D object detection framework. Specifically, we design a thorough perturbation scheme to enhance generalization of the network on unlabeled and new unseen data.
Our SESS achieves competitive performance compared to the state-of-the-art fully-supervised method by using only 50% labeled data.
arXiv Detail & Related papers (2019-12-26T08:48:04Z)
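Self-ensembling of this kind typically maintains a teacher model as an exponential moving average (EMA) of the student's weights, with a consistency loss between their predictions under perturbation. A minimal sketch of the EMA update, assuming plain parameter dictionaries rather than SESS's actual tensors:

```python
def ema_update(teacher, student, decay=0.99):
    """Blend student parameters into the teacher with decay `decay`.

    teacher, student: dicts mapping parameter names to float values.
    Illustrative only; real frameworks update model tensors in place.
    """
    return {k: decay * teacher[k] + (1 - decay) * student[k] for k in teacher}
```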
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.