YOLOStereo3D: A Step Back to 2D for Efficient Stereo 3D Detection
- URL: http://arxiv.org/abs/2103.09422v1
- Date: Wed, 17 Mar 2021 03:43:54 GMT
- Title: YOLOStereo3D: A Step Back to 2D for Efficient Stereo 3D Detection
- Authors: Yuxuan Liu, Lujia Wang, Ming Liu
- Abstract summary: YOLOStereo3D is trained on a single GPU and runs at more than ten fps.
It demonstrates performance comparable to state-of-the-art stereo 3D detection frameworks without using LiDAR data.
- Score: 6.5702792909006735
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Object detection in 3D with stereo cameras is an important problem in
computer vision, and is particularly crucial in low-cost autonomous mobile
robots without LiDARs.
Nowadays, most of the best-performing frameworks for stereo 3D object
detection are based on dense depth reconstruction from disparity estimation,
making them extremely computationally expensive.
To enable real-world deployments of vision detection with binocular images,
we take a step back to gain insights from 2D image-based detection frameworks
and enhance them with stereo features.
We incorporate knowledge and the inference structure from a real-time one-stage
2D/3D object detector and introduce a light-weight stereo matching module.
Our proposed framework, YOLOStereo3D, is trained on a single GPU and runs
at more than ten fps. It demonstrates performance comparable to
state-of-the-art stereo 3D detection frameworks without using LiDAR data.
The code will be published at https://github.com/Owen-Liuyuxuan/visualDet3D.
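The abstract names the light-weight stereo matching module but does not describe its internals. As a purely illustrative aid, the following is a minimal PyTorch-style sketch of a correlation-based stereo matching block of the kind commonly fused into one-stage 2D detection backbones; the class name `LightweightStereoMatching`, the channel widths, and `max_disparity` are assumptions for illustration, not the configuration used in YOLOStereo3D.

```python
# Minimal sketch of a correlation-based stereo matching module (PyTorch).
# Class name, channel widths, and max_disparity are illustrative assumptions,
# not taken from the YOLOStereo3D paper.
import torch
import torch.nn as nn


class LightweightStereoMatching(nn.Module):
    """Builds a correlation cost volume between left/right feature maps and
    squeezes it into a compact stereo feature for a 2D detection head."""

    def __init__(self, max_disparity: int = 96, out_channels: int = 64):
        super().__init__()
        self.max_disparity = max_disparity
        # 1x1 conv compresses the per-disparity correlation scores into a
        # stereo feature that can be concatenated with 2D detector features.
        self.squeeze = nn.Sequential(
            nn.Conv2d(max_disparity, out_channels, kernel_size=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, left: torch.Tensor, right: torch.Tensor) -> torch.Tensor:
        # left, right: (B, C, H, W) feature maps from a shared 2D backbone.
        b, c, h, w = left.shape
        cost = left.new_zeros(b, self.max_disparity, h, w)
        for d in range(self.max_disparity):
            if d == 0:
                cost[:, d] = (left * right).mean(dim=1)
            else:
                # Correlate left pixels with right pixels shifted by disparity d.
                cost[:, d, :, d:] = (left[:, :, :, d:] * right[:, :, :, :-d]).mean(dim=1)
        return self.squeeze(cost)


if __name__ == "__main__":
    # Feature-map sizes below are arbitrary examples.
    module = LightweightStereoMatching()
    left_feat = torch.randn(1, 256, 72, 320)
    right_feat = torch.randn(1, 256, 72, 320)
    stereo_feat = module(left_feat, right_feat)
    print(stereo_feat.shape)  # torch.Size([1, 64, 72, 320])
```

In a design of this kind, the resulting stereo feature would be concatenated with the 2D detector's feature maps before the detection head, matching the general pattern the abstract describes: a 2D image-based detection framework enhanced with stereo features rather than a dense depth reconstruction pipeline.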
Related papers
- Weakly Supervised Monocular 3D Detection with a Single-View Image [58.57978772009438]
Monocular 3D detection aims for precise 3D object localization from a single-view image.
We propose SKD-WM3D, a weakly supervised monocular 3D detection framework.
We show that SKD-WM3D clearly surpasses the state-of-the-art and is even on par with many fully supervised methods.
arXiv Detail & Related papers (2024-02-29T13:26:47Z)
- DSGN++: Exploiting Visual-Spatial Relation for Stereo-based 3D Detectors [60.88824519770208]
Camera-based 3D object detectors are attractive because of their wider deployability and lower cost compared with LiDAR sensors.
We revisit the prior stereo model DSGN and its stereo volume construction for representing both 3D geometry and semantics.
We propose our approach, DSGN++, aiming to improve information flow throughout the 2D-to-3D pipeline.
arXiv Detail & Related papers (2022-04-06T18:43:54Z)
- Pseudo-Stereo for Monocular 3D Object Detection in Autonomous Driving [14.582107328849473]
The gap in image-to-image generation for stereo views is much smaller than that in image-to-LiDAR generation.
Motivated by this, we propose a Pseudo-Stereo 3D detection framework with three novel virtual view generation methods.
Our framework ranks 1st for cars, pedestrians, and cyclists among published monocular 3D detectors on the KITTI-3D benchmark.
arXiv Detail & Related papers (2022-03-04T03:00:34Z)
- Voxel-based 3D Detection and Reconstruction of Multiple Objects from a Single Image [22.037472446683765]
We learn a regular grid of 3D voxel features from the input image, aligned with the 3D scene space via a 3D feature lifting operator.
Based on the 3D voxel features, our novel CenterNet-3D detection head formulates the 3D detection as keypoint detection in the 3D space.
We devise an efficient coarse-to-fine reconstruction module, including coarse-level voxelization and a novel local PCA-SDF shape representation.
arXiv Detail & Related papers (2021-11-04T18:30:37Z)
- LIGA-Stereo: Learning LiDAR Geometry Aware Representations for Stereo-based 3D Detector [80.7563981951707]
We propose LIGA-Stereo to learn stereo-based 3D detectors under the guidance of high-level geometry-aware representations of LiDAR-based detection models.
Compared with the state-of-the-art stereo detector, our method improves the 3D detection performance for cars, pedestrians, and cyclists by 10.44%, 5.69%, and 5.97% mAP, respectively.
arXiv Detail & Related papers (2021-08-18T17:24:40Z)
- FCOS3D: Fully Convolutional One-Stage Monocular 3D Object Detection [78.00922683083776]
It is non-trivial to adapt a general 2D detector to work in this 3D task.
In this technical report, we study this problem with a solution built on a fully convolutional single-stage detector.
Our solution achieves 1st place among all vision-only methods in the NeurIPS 2020 nuScenes 3D detection challenge.
arXiv Detail & Related papers (2021-04-22T09:35:35Z)
- 3D-to-2D Distillation for Indoor Scene Parsing [78.36781565047656]
We present a new approach that leverages 3D features extracted from a large-scale 3D data repository to enhance 2D features extracted from RGB images.
First, we distill 3D knowledge from a pretrained 3D network to supervise a 2D network that learns simulated 3D features from 2D features during training.
Second, we design a two-stage dimension normalization scheme to calibrate the 2D and 3D features for better integration.
Third, we design a semantic-aware adversarial training model to extend our framework for training with unpaired 3D data.
arXiv Detail & Related papers (2021-04-06T02:22:24Z)
- DSGN: Deep Stereo Geometry Network for 3D Object Detection [79.16397166985706]
There is a large performance gap between image-based and LiDAR-based 3D object detectors.
Our method, called Deep Stereo Geometry Network (DSGN), significantly reduces this gap.
For the first time, we provide a simple and effective one-stage stereo-based 3D detection pipeline.
arXiv Detail & Related papers (2020-01-10T11:44:37Z)