SSC3OD: Sparsely Supervised Collaborative 3D Object Detection from LiDAR
Point Clouds
- URL: http://arxiv.org/abs/2307.00717v1
- Date: Mon, 3 Jul 2023 02:42:14 GMT
- Title: SSC3OD: Sparsely Supervised Collaborative 3D Object Detection from LiDAR
Point Clouds
- Authors: Yushan Han, Hui Zhang, Honglei Zhang and Yidong Li
- Abstract summary: We propose a sparsely supervised collaborative 3D object detection framework SSC3OD.
It only requires each agent to randomly label one object in the scene.
It can effectively improve the performance of sparsely supervised collaborative 3D object detectors.
- Score: 16.612824810651897
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Collaborative 3D object detection, with its improved interaction advantage
among multiple agents, has been widely explored in autonomous driving. However,
existing collaborative 3D object detectors in a fully supervised paradigm
heavily rely on large-scale annotated 3D bounding boxes, which are
labor-intensive and time-consuming to obtain. To tackle this issue, we propose a sparsely
supervised collaborative 3D object detection framework SSC3OD, which only
requires each agent to randomly label one object in the scene. Specifically,
this model consists of two novel components, i.e., the pillar-based masked
autoencoder (Pillar-MAE) and the instance mining module. The Pillar-MAE module
aims to reason over high-level semantics in a self-supervised manner, and the
instance mining module generates high-quality pseudo labels for collaborative
detectors online. By introducing these simple yet effective mechanisms, the
proposed SSC3OD can alleviate the adverse impacts of incomplete annotations. We
generate sparse labels based on collaborative perception datasets to evaluate
our method. Extensive experiments on three large-scale datasets reveal that our
proposed SSC3OD can effectively improve the performance of sparsely supervised
collaborative 3D object detectors.
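To make the two components concrete, here is a minimal sketch (PyTorch-style, not the authors' released code) of how Pillar-MAE-style pillar masking and online instance mining might look. All names, the 0.7 mask ratio, the 0.3 score threshold, and the center-distance de-duplication are illustrative assumptions rather than details from the paper.

```python
# Hypothetical sketch of SSC3OD's two components; nothing below is taken
# from the paper or its code.
import torch


def mask_pillars(pillar_features: torch.Tensor, mask_ratio: float = 0.7):
    """Randomly hide a fraction of non-empty pillars for MAE-style pre-training.

    pillar_features: (P, C) features of the P non-empty pillars.
    Returns the visible features and the indices of the masked pillars,
    whose content a decoder would be trained to reconstruct.
    """
    num_pillars = pillar_features.shape[0]
    num_masked = int(num_pillars * mask_ratio)
    perm = torch.randperm(num_pillars)
    masked_idx, visible_idx = perm[:num_masked], perm[num_masked:]
    return pillar_features[visible_idx], masked_idx


def mine_instances(pred_boxes: torch.Tensor, pred_scores: torch.Tensor,
                   sparse_gt_boxes: torch.Tensor,
                   score_thresh: float = 0.3) -> torch.Tensor:
    """Turn confident detections into online pseudo labels.

    pred_boxes: (N, 7) boxes (x, y, z, l, w, h, yaw) from the current model.
    sparse_gt_boxes: (M, 7) the few human-annotated boxes (one per agent).
    Returns the sparse labels merged with the mined pseudo boxes.
    """
    candidates = pred_boxes[pred_scores > score_thresh]
    if sparse_gt_boxes.numel() > 0 and candidates.numel() > 0:
        # Drop candidates that duplicate an annotated box; a simple
        # center-distance test stands in for the IoU matching a real
        # implementation would likely use.
        dists = torch.cdist(candidates[:, :3], sparse_gt_boxes[:, :3])
        candidates = candidates[dists.min(dim=1).values > 2.0]
    return torch.cat([sparse_gt_boxes, candidates], dim=0)
```

Under this reading, pre-training would optimize a reconstruction loss over the pillars returned by mask_pillars, and fine-tuning would supervise the collaborative detector with the boxes returned by mine_instances in place of full annotations.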
Related papers
- Diff3DETR: Agent-based Diffusion Model for Semi-supervised 3D Object Detection [33.58208166717537]
3D object detection is essential for understanding 3D scenes.
Recent developments in semi-supervised methods seek to mitigate the annotation burden by employing a teacher-student framework to generate pseudo-labels for unlabeled point clouds.
We introduce an Agent-based Diffusion Model for Semi-supervised 3D Object Detection (Diff3DETR).
arXiv Detail & Related papers (2024-08-01T05:04:22Z)
- Cross-Cluster Shifting for Efficient and Effective 3D Object Detection in Autonomous Driving [69.20604395205248]
We present a new 3D point-based detector model, named Shift-SSD, for precise 3D object detection in autonomous driving.
We introduce an intriguing Cross-Cluster Shifting operation to unleash the representation capacity of the point-based detector.
We conduct extensive experiments on the KITTI and nuScenes datasets, and the results demonstrate the state-of-the-art performance of Shift-SSD in both detection accuracy and runtime efficiency.
arXiv Detail & Related papers (2024-03-10T10:36:32Z)
- PatchContrast: Self-Supervised Pre-training for 3D Object Detection [14.603858163158625]
We introduce PatchContrast, a novel self-supervised point cloud pre-training framework for 3D object detection.
We show that our method outperforms existing state-of-the-art models on three commonly-used 3D detection datasets.
arXiv Detail & Related papers (2023-08-14T07:45:54Z)
- AOP-Net: All-in-One Perception Network for Joint LiDAR-based 3D Object Detection and Panoptic Segmentation [9.513467995188634]
AOP-Net is a LiDAR-based multi-task framework that combines 3D object detection and panoptic segmentation.
AOP-Net achieves state-of-the-art performance among published works on the nuScenes benchmark for both 3D object detection and panoptic segmentation.
arXiv Detail & Related papers (2023-02-02T05:31:53Z)
- CMR3D: Contextualized Multi-Stage Refinement for 3D Object Detection [57.44434974289945]
We propose the Contextualized Multi-Stage Refinement for 3D Object Detection (CMR3D) framework.
Our framework takes a 3D scene as input and strives to explicitly integrate useful contextual information of the scene.
In addition to 3D object detection, we investigate the effectiveness of our framework for the problem of 3D object counting.
arXiv Detail & Related papers (2022-09-13T05:26:09Z)
- Ret3D: Rethinking Object Relations for Efficient 3D Object Detection in Driving Scenes [82.4186966781934]
We introduce a simple, efficient, and effective two-stage detector, termed Ret3D.
At the core of Ret3D is the utilization of novel intra-frame and inter-frame relation modules.
With negligible extra overhead, Ret3D achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-08-18T03:48:58Z)
- The Devil is in the Task: Exploiting Reciprocal Appearance-Localization Features for Monocular 3D Object Detection [62.1185839286255]
Low-cost monocular 3D object detection plays a fundamental role in autonomous driving.
We introduce a Dynamic Feature Reflecting Network, named DFR-Net.
We rank 1st among all monocular 3D object detectors on the KITTI test set.
arXiv Detail & Related papers (2021-12-28T07:31:18Z)
- siaNMS: Non-Maximum Suppression with Siamese Networks for Multi-Camera 3D Object Detection [65.03384167873564]
A Siamese network is integrated into the pipeline of a well-known 3D object detector. The resulting associations are exploited to enhance the 3D box regression of each object.
The experimental evaluation on the nuScenes dataset shows that the proposed method outperforms traditional NMS approaches.
arXiv Detail & Related papers (2020-02-19T15:32:38Z)
- SESS: Self-Ensembling Semi-Supervised 3D Object Detection [138.80825169240302]
We propose SESS, a self-ensembling semi-supervised 3D object detection framework. Specifically, we design a thorough perturbation scheme to enhance the generalization of the network on unlabeled and unseen data.
Our SESS achieves performance competitive with the state-of-the-art fully supervised method using only 50% of the labeled data.
arXiv Detail & Related papers (2019-12-26T08:48:04Z)
This list is automatically generated from the titles and abstracts of the papers listed on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.