Cirrus: A Long-range Bi-pattern LiDAR Dataset
- URL: http://arxiv.org/abs/2012.02938v1
- Date: Sat, 5 Dec 2020 03:18:31 GMT
- Title: Cirrus: A Long-range Bi-pattern LiDAR Dataset
- Authors: Ze Wang, Sihao Ding, Ying Li, Jonas Fenn, Sohini Roychowdhury, Andreas
Wallin, Lane Martin, Scott Ryvola, Guillermo Sapiro, and Qiang Qiu
- Abstract summary: We introduce Cirrus, a new long-range bi-pattern LiDAR public dataset for autonomous driving tasks.
Our platform is equipped with a high-resolution video camera and a pair of LiDAR sensors with a 250-meter effective range.
In Cirrus, eight categories of objects are exhaustively annotated in the LiDAR point clouds for the entire effective range.
- Score: 35.87501129332217
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we introduce Cirrus, a new long-range bi-pattern LiDAR public
dataset for autonomous driving tasks such as 3D object detection, critical to
highway driving and timely decision making. Our platform is equipped with a
high-resolution video camera and a pair of LiDAR sensors with a 250-meter
effective range, which is significantly longer than that of existing public datasets.
We record paired point clouds simultaneously using both Gaussian and uniform
scanning patterns. Point density varies significantly across such a long range,
and different scanning patterns further diversify object representation in
LiDAR. In Cirrus, eight categories of objects are exhaustively annotated in the
LiDAR point clouds for the entire effective range. To illustrate the kind of
studies supported by this new dataset, we introduce LiDAR model adaptation
across different ranges, scanning patterns, and sensor devices. Promising
results show the great potential of this new dataset for the robotics and
computer vision communities.
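To make the cross-range adaptation setting concrete, the sketch below partitions a LiDAR frame into distance bins so a detector can be trained on one range band and evaluated on another. The (N, 4) point layout, the bin edges, and the loader name are illustrative assumptions, not the dataset's published format.
```python
import numpy as np

def split_by_range(points: np.ndarray, bins=(0.0, 100.0, 175.0, 250.0)):
    """Partition a LiDAR point cloud into distance bins.

    points: (N, 4) array of x, y, z, intensity -- an assumed layout;
    consult the actual Cirrus release for its field order.
    Returns one array per bin, covering [bins[i], bins[i+1]).
    """
    dist = np.linalg.norm(points[:, :3], axis=1)  # Euclidean range from the sensor
    return [points[(dist >= lo) & (dist < hi)]
            for lo, hi in zip(bins[:-1], bins[1:])]

# Hypothetical usage: train on the near band, evaluate on the far band.
# near, mid, far = split_by_range(load_cirrus_frame(path))  # load_cirrus_frame is illustrative
```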
Related papers
- Sparse-to-Dense LiDAR Point Generation by LiDAR-Camera Fusion for 3D Object Detection [9.076003184833557]
We propose the LiDAR-Camera Augmentation Network (LCANet), a novel framework that reconstructs LiDAR point cloud data by fusing 2D image features.
LCANet fuses data from LiDAR sensors by projecting image features into the 3D space, integrating semantic information into the point cloud data.
This fusion effectively compensates for LiDAR's weakness in detecting objects at long distances, which are often represented by sparse points.
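Fusion pipelines of this kind generally hinge on projecting LiDAR points into the image plane to look up 2D features. The following is a generic sketch of that projection step under a pinhole camera model, not LCANet's actual implementation; all names and layouts are illustrative.
```python
import numpy as np

def sample_image_features(points, feat_map, K, T_cam_lidar):
    """Attach 2D image features to LiDAR points via pinhole projection.

    points:      (N, 3) xyz in the LiDAR frame
    feat_map:    (H, W, C) feature map from a 2D image backbone
    K:           (3, 3) camera intrinsics
    T_cam_lidar: (4, 4) extrinsics mapping LiDAR frame -> camera frame
    Returns (M, 3 + C): surviving points with their sampled features.
    """
    homog = np.hstack([points, np.ones((points.shape[0], 1))])  # (N, 4)
    cam = (T_cam_lidar @ homog.T).T[:, :3]                      # camera-frame xyz
    in_front = cam[:, 2] > 0.1                                  # drop points behind the camera
    uvw = (K @ cam[in_front].T).T
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)                 # pixel coordinates
    H, W, _ = feat_map.shape
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    feats = feat_map[uv[ok, 1], uv[ok, 0]]                      # nearest-pixel lookup
    return np.hstack([points[in_front][ok], feats])
```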
arXiv Detail & Related papers (2024-09-23T13:03:31Z)
- Approaching Outside: Scaling Unsupervised 3D Object Detection from 2D Scene [22.297964850282177]
We propose LiDAR-2D Self-paced Learning (LiSe) for unsupervised 3D detection.
RGB images serve as a valuable complement to LiDAR data, offering precise 2D localization cues.
Our framework devises a self-paced learning pipeline that incorporates adaptive sampling and weak model aggregation strategies.
arXiv Detail & Related papers (2024-07-11T14:58:49Z)
- Improving LiDAR 3D Object Detection via Range-based Point Cloud Density Optimization [13.727464375608765]
Existing 3D object detectors tend to perform well on point cloud regions close to the LiDAR sensor but worse on regions farther away.
We observe that there is a learning bias in detection models towards the dense objects near the sensor and show that the detection performance can be improved by simply manipulating the input point cloud density at different distance ranges.
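As a rough illustration of the idea (the paper's actual optimization procedure may differ), one can rebalance density by randomly dropping a fraction of the dense near-range points before training:
```python
import numpy as np

def rebalance_density(points, near_radius=30.0, keep_ratio=0.5, seed=0):
    """Randomly thin out near-range points so training is less dominated
    by the dense region close to the sensor. Illustrative values only."""
    rng = np.random.default_rng(seed)
    dist = np.linalg.norm(points[:, :3], axis=1)   # range of each point
    near = dist < near_radius
    keep = ~near | (rng.random(points.shape[0]) < keep_ratio)
    return points[keep]
```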
arXiv Detail & Related papers (2023-06-09T04:11:43Z)
- LiDAR-CS Dataset: LiDAR Point Cloud Dataset with Cross-Sensors for 3D Object Detection [36.77084564823707]
Deep learning methods heavily rely on annotated data and often face domain generalization issues.
The LiDAR-CS dataset is the first to address sensor-related gaps for 3D object detection in real traffic.
arXiv Detail & Related papers (2023-01-29T19:10:35Z)
- Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z)
- LiDAR Distillation: Bridging the Beam-Induced Domain Gap for 3D Object Detection [96.63947479020631]
In many real-world applications, the LiDAR points used by mass-produced robots and vehicles usually have fewer beams than those in large-scale public datasets.
We propose the LiDAR Distillation to bridge the domain gap induced by different LiDAR beams for 3D object detection.
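A common way to emulate the beam-induced gap, in the spirit of (though not necessarily identical to) the paper's setup, is to synthesize a pseudo low-beam cloud by binning points on elevation angle and keeping every k-th beam:
```python
import numpy as np

def downsample_beams(points, num_beams=64, keep_every=2):
    """Approximate a lower-beam sensor, e.g. 64 -> 32 beams.

    Beams are recovered by binning the vertical (elevation) angle;
    datasets that store an explicit ring index make this step exact.
    """
    elev = np.arctan2(points[:, 2], np.linalg.norm(points[:, :2], axis=1))
    edges = np.linspace(elev.min(), elev.max() + 1e-6, num_beams + 1)
    ring = np.digitize(elev, edges) - 1        # crude per-point beam index
    return points[ring % keep_every == 0]
```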
arXiv Detail & Related papers (2022-03-28T17:59:02Z)
- LiDARCap: Long-range Marker-less 3D Human Motion Capture with LiDAR Point Clouds [58.402752909624716]
Existing motion capture datasets are largely short-range and cannot yet meet the needs of long-range applications.
We propose LiDARHuman26M, a new human motion capture dataset captured by LiDAR at a much longer range to overcome this limitation.
Our dataset also includes the ground truth human motions acquired by the IMU system and the synchronous RGB images.
arXiv Detail & Related papers (2022-03-28T12:52:45Z)
- PC-DAN: Point Cloud based Deep Affinity Network for 3D Multi-Object Tracking (Accepted as an extended abstract in JRDB-ACT Workshop at CVPR21) [68.12101204123422]
A point cloud is a dense compilation of spatial data in 3D coordinates.
We propose a PointNet-based approach for 3D Multi-Object Tracking (MOT).
arXiv Detail & Related papers (2021-06-03T05:36:39Z)
- It's All Around You: Range-Guided Cylindrical Network for 3D Object Detection [4.518012967046983]
This work presents a novel approach for analyzing 3D data produced by 360-degree depth scanners.
We introduce a novel notion of range-guided convolutions, adapting the receptive field by distance from the ego vehicle and the object's scale.
Our network demonstrates powerful results on the nuScenes challenge, comparable to current state-of-the-art architectures.
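The intuition can be caricatured as choosing a convolution's dilation (and hence receptive field) from the range bin a region falls into; the paper builds this adaptation into the network itself, so the hard-coded bins below are purely illustrative:
```python
def dilation_for_range(dist_m: float) -> int:
    """Toy range-to-dilation schedule: nearby objects span many cells of a
    cylindrical grid and benefit from wider context, while distant objects
    are angularly small and get a tighter receptive field."""
    if dist_m < 20.0:
        return 3   # near field: large context window
    if dist_m < 50.0:
        return 2   # mid range
    return 1       # far field: small, focused window
```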
arXiv Detail & Related papers (2020-12-05T21:02:18Z)
- Reconfigurable Voxels: A New Representation for LiDAR-Based Point Clouds [76.52448276587707]
We propose Reconfigurable Voxels, a new approach to constructing representations from 3D point clouds.
Specifically, we devise a biased random walk scheme, which adaptively covers each neighborhood with a fixed number of voxels.
We find that this approach effectively improves the stability of voxel features, especially for sparse regions.
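A loose sketch of the covering idea follows, assuming occupied voxels are stored as a set of integer grid coordinates; the density bias that drives the paper's walk is omitted here:
```python
import numpy as np

def walk_neighborhood(occupied, start, num_voxels=8, seed=0):
    """Grow a fixed-size neighborhood by randomly walking over occupied
    26-connected neighbor voxels; sparse regions may revisit voxels,
    which still yields a constant-size feature support."""
    rng = np.random.default_rng(seed)
    offsets = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
               for dz in (-1, 0, 1) if (dx, dy, dz) != (0, 0, 0)]
    walk, cur = [start], start
    while len(walk) < num_voxels:
        nbrs = [(cur[0] + dx, cur[1] + dy, cur[2] + dz)
                for dx, dy, dz in offsets
                if (cur[0] + dx, cur[1] + dy, cur[2] + dz) in occupied]
        if not nbrs:            # isolated voxel: pad with itself
            walk.append(cur)
            continue
        cur = nbrs[rng.integers(len(nbrs))]
        walk.append(cur)
    return walk
```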
arXiv Detail & Related papers (2020-04-06T15:07:16Z)
- LIBRE: The Multiple 3D LiDAR Dataset [54.25307983677663]
We present LIBRE: LiDAR Benchmarking and Reference, a first-of-its-kind dataset featuring 10 different LiDAR sensors.
LIBRE will contribute to the research community to provide a means for a fair comparison of currently available LiDARs.
It will also facilitate the improvement of existing self-driving vehicles and robotics-related software.
arXiv Detail & Related papers (2020-03-13T06:17:39Z)