3D Object Detection and High-Resolution Traffic Parameters Extraction
Using Low-Resolution LiDAR Data
- URL: http://arxiv.org/abs/2401.06946v1
- Date: Sat, 13 Jan 2024 01:22:20 GMT
- Title: 3D Object Detection and High-Resolution Traffic Parameters Extraction
Using Low-Resolution LiDAR Data
- Authors: Linlin Zhang, Xiang Yu, Armstrong Aboah, Yaw Adu-Gyamfi
- Abstract summary: This study proposes an innovative framework that eliminates the need for multiple LiDAR systems and simplifies the laborious 3D annotation process.
Using 2D bounding box detections and the extracted height information, the study generates 3D bounding boxes automatically, without human intervention.
- Score: 14.142956899468922
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traffic volume data collection is a crucial aspect of transportation
engineering and urban planning, as it provides vital insights into traffic
patterns, congestion, and infrastructure efficiency. Traditional manual methods
of traffic data collection are both time-consuming and costly. However, the
emergence of modern technologies, particularly Light Detection and Ranging
(LiDAR), has revolutionized the process by enabling efficient and accurate data
collection. Despite the benefits of using LiDAR for traffic data collection,
previous studies have identified two major limitations that have impeded its
widespread adoption. These are the need for multiple LiDAR systems to obtain
complete point cloud information of objects of interest, as well as the
labor-intensive process of annotating 3D bounding boxes for object detection
tasks. In response to these challenges, the current study proposes an
innovative framework that eliminates the need for multiple LiDAR systems and
simplifies the laborious 3D annotation process. To achieve this goal, the study
employed a single LiDAR system, which reduces the data acquisition cost, and
addressed the accompanying limitation of missing point cloud information by
developing a Point Cloud Completion (PCC) framework that fills in the missing
points using point density. Furthermore, we used zero-shot learning techniques
to detect vehicles and pedestrians, and proposed a unique framework for
extracting low- to high-level features of the objects of interest, such as
height, acceleration, and speed. Using the 2D bounding box detections and the
extracted height information, this study generates 3D bounding boxes
automatically, without human intervention.
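The core of this pipeline, estimating an object's height from its LiDAR returns, lifting a 2D detection into a 3D box, and deriving speed and acceleration from the resulting track, can be sketched as follows. The paper does not publish code, so this is a minimal illustration under stated assumptions: points are assumed to be already associated with each 2D detection (e.g., by projecting the point cloud into the image), boxes are axis-aligned (the paper does not specify how yaw is recovered), and the sensor runs at a fixed frame rate. All function names are hypothetical.

```python
import numpy as np

def estimate_height(points: np.ndarray) -> float:
    """Object height from its LiDAR returns (N x 3 array: x, y, z).

    Hypothetical choice: robust percentiles rather than raw min/max,
    to suppress stray returns; the paper does not specify its rule.
    """
    z = points[:, 2]
    return float(np.percentile(z, 98) - np.percentile(z, 2))

def lift_to_3d_box(points: np.ndarray) -> dict:
    """Lift the points matched to one 2D detection into a 3D box.

    The footprint comes from the ground-plane extents, the height from z.
    Axis-aligned for simplicity; a real system would also estimate yaw,
    e.g., from the track heading.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    z_base = float(np.percentile(z, 2))
    h = estimate_height(points)
    return {
        "center": (float(x.mean()), float(y.mean()), z_base + h / 2.0),
        "size": (float(np.ptp(x)), float(np.ptp(y)), h),  # length, width, height
    }

def speeds_and_accels(centroids: np.ndarray, fps: float):
    """Finite-difference speed (m/s) and acceleration (m/s^2) from a
    T x 2 track of ground-plane centroids sampled at `fps` frames/s."""
    dt = 1.0 / fps
    v = np.linalg.norm(np.diff(centroids, axis=0), axis=1) / dt
    a = np.diff(v) / dt
    return v, a

# Toy check: a centroid advancing 1 m per frame at 10 fps should give a
# constant 10 m/s speed and zero acceleration.
track = np.stack([np.arange(5, dtype=float), np.zeros(5)], axis=1)
v, a = speeds_and_accels(track, fps=10.0)
print(v)  # [10. 10. 10. 10.]
print(a)  # [0. 0. 0.]
```

The Point Cloud Completion step is omitted here because the abstract only states that missing points are filled in using point density, which is not enough detail to reconstruct it faithfully.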
Related papers
- STONE: A Submodular Optimization Framework for Active 3D Object Detection [20.54906045954377]
A key requirement for training an accurate 3D object detector is the availability of a large amount of LiDAR-based point cloud data.
This paper proposes a unified active 3D object detection framework that greatly reduces the labeling cost of training 3D object detectors.
arXiv Detail & Related papers (2024-10-04T20:45:33Z)
- Sparse-to-Dense LiDAR Point Generation by LiDAR-Camera Fusion for 3D Object Detection [9.076003184833557]
We propose the LiDAR-Camera Augmentation Network (LCANet), a novel framework that reconstructs LiDAR point cloud data by fusing 2D image features.
LCANet fuses data from LiDAR sensors by projecting image features into the 3D space, integrating semantic information into the point cloud data.
This fusion effectively compensates for LiDAR's weakness in detecting objects at long distances, which are often represented by sparse points.
arXiv Detail & Related papers (2024-09-23T13:03:31Z)
- 4D Contrastive Superflows are Dense 3D Representation Learners [62.433137130087445]
We introduce SuperFlow, a novel framework designed to harness consecutive LiDAR-camera pairs for establishing pretraining objectives.
To further boost learning efficiency, we incorporate a plug-and-play view consistency module that enhances alignment of the knowledge distilled from camera views.
arXiv Detail & Related papers (2024-07-08T17:59:54Z)
- Multi-Modal Data-Efficient 3D Scene Understanding for Autonomous Driving [58.16024314532443]
We introduce LaserMix++, a framework that integrates laser beam manipulations from disparate LiDAR scans and incorporates LiDAR-camera correspondences to assist data-efficient learning.
Results demonstrate that LaserMix++ achieves accuracy comparable to fully supervised alternatives while using five times fewer annotations.
This substantial advancement underscores the potential of semi-supervised approaches in reducing the reliance on extensive labeled data in LiDAR-based 3D scene understanding systems.
arXiv Detail & Related papers (2024-05-08T17:59:53Z)
- Label-Efficient 3D Object Detection For Road-Side Units [10.663986706501188]
Collaborative perception can enhance the perception of autonomous vehicles via deep information fusion with intelligent roadside units (RSUs).
The data-hungry nature of these methods creates a major hurdle for their real-world deployment, particularly due to the need for annotated RSU data.
We devise a label-efficient object detection method for RSUs based on unsupervised object discovery.
arXiv Detail & Related papers (2024-04-09T12:29:16Z)
- TimePillars: Temporally-Recurrent 3D LiDAR Object Detection [8.955064958311517]
TimePillars is a temporally-recurrent object detection pipeline.
It exploits the pillar representation of LiDAR data across time.
We show how basic building blocks are enough to achieve robust and efficient results.
arXiv Detail & Related papers (2023-12-22T10:25:27Z)
- PTT: Point-Trajectory Transformer for Efficient Temporal 3D Object Detection [66.94819989912823]
We propose a point-trajectory transformer with long short-term memory for efficient temporal 3D object detection.
We use point clouds of current-frame objects and their historical trajectories as input to minimize the memory bank storage requirement.
We conduct extensive experiments on a large-scale dataset to demonstrate that our approach performs well against state-of-the-art methods.
arXiv Detail & Related papers (2023-12-13T18:59:13Z)
- CXTrack: Improving 3D Point Cloud Tracking with Contextual Information [59.55870742072618]
3D single object tracking plays an essential role in many applications, such as autonomous driving.
We propose CXTrack, a novel transformer-based network for 3D object tracking.
We show that CXTrack achieves state-of-the-art tracking performance while running at 29 FPS.
arXiv Detail & Related papers (2022-11-12T11:29:01Z)
- A Survey of Robust 3D Object Detection Methods in Point Clouds [2.1655448059430222]
We describe novel data augmentation methods, sampling strategies, activation functions, attention mechanisms, and regularization methods.
We evaluate novel 3D object detectors on datasets such as KITTI and nuScenes.
We mention the current challenges in 3D object detection in LiDAR point clouds and list some open issues.
arXiv Detail & Related papers (2022-03-31T21:41:32Z)
- Hindsight is 20/20: Leveraging Past Traversals to Aid 3D Perception [59.2014692323323]
Small, far-away, or highly occluded objects are particularly challenging because there is limited information in the LiDAR point clouds for detecting them.
We propose a novel, end-to-end trainable Hindsight framework to extract contextual information from past data.
We show that this framework is compatible with most modern 3D detection architectures and can substantially improve their average precision on multiple autonomous driving datasets.
arXiv Detail & Related papers (2022-03-22T00:58:27Z)
- SelfVoxeLO: Self-supervised LiDAR Odometry with Voxel-based Deep Neural Networks [81.64530401885476]
We propose a self-supervised LiDAR odometry method, dubbed SelfVoxeLO, to tackle these two difficulties.
Specifically, we propose a 3D convolution network to process the raw LiDAR data directly, which extracts features that better encode the 3D geometric patterns.
We evaluate our method's performance on two large-scale datasets, i.e., KITTI and Apollo-SouthBay.
arXiv Detail & Related papers (2020-10-19T09:23:39Z)