Virtually increasing the measurement frequency of LIDAR sensor utilizing a single RGB camera
- URL: http://arxiv.org/abs/2302.05192v1
- Date: Fri, 10 Feb 2023 11:43:35 GMT
- Title: Virtually increasing the measurement frequency of LIDAR sensor utilizing a single RGB camera
- Authors: Zoltan Rozsa and Tamas Sziranyi
- Abstract summary: This research suggests using a mono camera to virtually enhance the frame rate of LIDARs.
We achieve state-of-the-art performance on large public datasets in terms of accuracy and similarity to real measurements.
- Score: 1.3706331473063877
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The frame rates of most 3D LIDAR sensors used in intelligent
vehicles are substantially lower than those of the cameras installed in the
same vehicle. This research suggests using a mono camera to virtually enhance
the frame rate of LIDARs, allowing more frequent monitoring of fast-moving
dynamic objects in the surroundings. As a first step, dynamic object
candidates are identified and tracked in the camera frames. Following that,
the LIDAR measurement points of these objects are found by clustering in the
frustums of their 2D bounding boxes. Projecting these points to the camera
and tracking them to the next camera frame yields 3D-2D correspondences
between different timesteps. The correspondences between the last LIDAR frame
and the current camera frame are used to solve the PnP (Perspective-n-Point)
problem. Finally, the estimated transformations are applied to the previously
measured points to generate virtual measurements. With the proposed
estimation, if the ego-motion is known, not only the positions of static
objects but also those of dynamic objects can be determined at timesteps
where only camera measurements are available. We achieve state-of-the-art
performance on large public datasets in terms of accuracy and similarity to
real measurements.
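The pipeline described above reduces to standard geometric-vision building blocks: frustum-based point selection followed by a PnP solve on tracked 3D-2D correspondences. The sketch below illustrates those two steps with OpenCV and NumPy; the function names, array shapes, and the omission of the in-frustum clustering step are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np
import cv2


def points_in_frustum(lidar_xyz, bbox_2d, K, T_cam_lidar):
    """Select LIDAR points whose camera projection falls inside a 2D box.

    lidar_xyz   : (N, 3) points in the LIDAR frame
    bbox_2d     : (u_min, v_min, u_max, v_max) in pixels
    K           : (3, 3) camera intrinsic matrix
    T_cam_lidar : (4, 4) extrinsic transform from LIDAR to camera frame
    """
    pts_h = np.hstack([lidar_xyz, np.ones((len(lidar_xyz), 1))])
    cam = (T_cam_lidar @ pts_h.T).T[:, :3]     # points in the camera frame
    cam = cam[cam[:, 2] > 0.0]                 # discard points behind the camera
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                # perspective divide
    u_min, v_min, u_max, v_max = bbox_2d
    inside = ((uv[:, 0] >= u_min) & (uv[:, 0] <= u_max)
              & (uv[:, 1] >= v_min) & (uv[:, 1] <= v_max))
    # The paper additionally clusters inside the frustum to separate the
    # object from background points; that step is omitted in this sketch.
    return cam[inside], uv[inside]


def virtual_measurement(obj_pts_cam, tracked_uv, K):
    """Estimate one object's rigid motion between the last LIDAR frame and
    the current camera frame from 3D-2D correspondences, then apply it to
    the old points to create a virtual measurement.

    obj_pts_cam : (M, 3) object points from the last LIDAR frame (camera coords)
    tracked_uv  : (M, 2) the same points tracked into the current camera frame
    """
    if len(obj_pts_cam) < 6:                   # too few points for a stable solve
        return None
    ok, rvec, tvec = cv2.solvePnP(
        obj_pts_cam.astype(np.float64),
        tracked_uv.astype(np.float64),
        K.astype(np.float64),
        None,                                  # no lens distortion assumed
        flags=cv2.SOLVEPNP_ITERATIVE,
    )
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)                 # rotation vector -> matrix
    return (R @ obj_pts_cam.T + tvec).T        # virtual points at camera time
```

In a full pipeline, tracked_uv would come from tracking the projected points (e.g. by optical flow) into the current camera frame, and the known ego-motion would be compensated beforehand so the solved transform reflects object motion only.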
Related papers
- Line-based 6-DoF Object Pose Estimation and Tracking With an Event Camera [19.204896246140155]
Event cameras possess remarkable attributes such as high dynamic range, low latency, and resilience against motion blur.
We propose a line-based robust pose estimation and tracking method for planar or non-planar objects using an event camera.
arXiv Detail & Related papers (2024-08-06T14:36:43Z)
- Joint 3D Shape and Motion Estimation from Rolling Shutter Light-Field Images [2.0277446818410994]
We propose an approach to address the problem of 3D reconstruction of scenes from a single image captured by a light-field camera equipped with a rolling shutter sensor.
Our method leverages the 3D information cues present in the light-field and the motion information provided by the rolling shutter effect.
We present a generic model for the imaging process of this sensor and a two-stage algorithm that minimizes the re-projection error.
arXiv Detail & Related papers (2023-11-02T15:08:18Z)
- Delving into Motion-Aware Matching for Monocular 3D Object Tracking [81.68608983602581]
We find that the motion cue of objects along different time frames is critical in 3D multi-object tracking.
We propose MoMA-M3T, a framework that mainly consists of three motion-aware components.
We conduct extensive experiments on the nuScenes and KITTI datasets to demonstrate our MoMA-M3T achieves competitive performance against state-of-the-art methods.
arXiv Detail & Related papers (2023-08-22T17:53:58Z)
- DORT: Modeling Dynamic Objects in Recurrent for Multi-Camera 3D Object Detection and Tracking [67.34803048690428]
We propose to model Dynamic Objects in RecurrenT (DORT) to tackle this problem.
DORT extracts object-wise local volumes for motion estimation that also alleviates the heavy computational burden.
It is flexible and practical, and can be plugged into most camera-based 3D object detectors.
arXiv Detail & Related papers (2023-03-29T12:33:55Z)
- Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z)
- DeepFusionMOT: A 3D Multi-Object Tracking Framework Based on Camera-LiDAR Fusion with Deep Association [8.34219107351442]
This paper proposes a robust camera-LiDAR fusion-based MOT method that achieves a good trade-off between accuracy and speed.
Our proposed method presents obvious advantages over the state-of-the-art MOT methods in terms of both tracking accuracy and processing speed.
arXiv Detail & Related papers (2022-02-24T13:36:29Z)
- Monocular Quasi-Dense 3D Object Tracking [99.51683944057191]
A reliable and accurate 3D tracking framework is essential for predicting future locations of surrounding objects and planning the observer's actions in numerous applications such as autonomous driving.
We propose a framework that can effectively associate moving objects over time and estimate their full 3D bounding box information from a sequence of 2D images captured on a moving platform.
arXiv Detail & Related papers (2021-03-12T15:30:02Z)
- 3D-CVF: Generating Joint Camera and LiDAR Features Using Cross-View Spatial Feature Fusion for 3D Object Detection [10.507404260449333]
We propose a new architecture for fusing camera and LiDAR sensors for 3D object detection.
The proposed 3D-CVF achieves state-of-the-art performance in the KITTI benchmark.
arXiv Detail & Related papers (2020-04-27T08:34:46Z)
- Lightweight Multi-View 3D Pose Estimation through Camera-Disentangled Representation [57.11299763566534]
We present a solution to recover 3D pose from multi-view images captured with spatially calibrated cameras.
We exploit 3D geometry to fuse input images into a unified latent representation of pose, which is disentangled from camera view-points.
Our architecture then conditions the learned representation on camera projection operators to produce accurate per-view 2d detections.
arXiv Detail & Related papers (2020-04-05T12:52:29Z)
- LIBRE: The Multiple 3D LiDAR Dataset [54.25307983677663]
We present LIBRE: LiDAR Benchmarking and Reference, a first-of-its-kind dataset featuring 10 different LiDAR sensors.
LIBRE will provide the research community with a means for a fair comparison of currently available LiDARs.
It will also facilitate the improvement of existing self-driving vehicles and robotics-related software.
arXiv Detail & Related papers (2020-03-13T06:17:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.