RMMDet: Road-Side Multitype and Multigroup Sensor Detection System for
Autonomous Driving
- URL: http://arxiv.org/abs/2303.05203v3
- Date: Sat, 10 Jun 2023 01:07:03 GMT
- Title: RMMDet: Road-Side Multitype and Multigroup Sensor Detection System for
Autonomous Driving
- Authors: Xiuyu Yang, Zhuangyan Zhang, Haikuo Du, Sui Yang, Fengping Sun, Yanbo
Liu, Ling Pei, Wenchao Xu, Weiqi Sun, Zhengyu Li
- Abstract summary: RMMDet is a road-side multitype and multigroup sensor detection system for autonomous driving.
We use a ROS-based virtual environment to simulate real-world conditions.
We produce local datasets and a real sand table field, and conduct various experiments.
- Score: 3.8917150802484994
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous driving has made great strides thanks to artificial
intelligence, and numerous advanced methods have been proposed for vehicle-end
target detection, including single-sensor and multi-sensor detection methods.
However, the complexity and diversity of real traffic situations necessitate an
examination of how to use these methods under real road conditions. In this paper,
we propose RMMDet, a road-side multitype and multigroup sensor detection system
for autonomous driving. We use a ROS-based virtual environment to simulate
real-world conditions, in particular the physical and functional construction
of the sensors. Then we implement multi-type sensor detection and multi-group
sensor fusion in this environment, including camera-radar and camera-lidar
detection based on result-level fusion. We produce local datasets and a real
sand table field, and conduct various experiments. Furthermore, we link a
multi-agent collaborative scheduling system to the fusion detection system.
Hence, the whole roadside detection system is formed by roadside perception,
fusion detection, and scheduling planning. The experiments show that the RMMDet
system we built plays an important role in vehicle-road collaboration and its
optimization. The code and supplementary materials can be
found at: https://github.com/OrangeSodahub/RMMDet
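As an illustration of the result-level fusion described in the abstract (this is not the authors' implementation; the detection fields, shared road-frame convention, and distance threshold below are assumptions made for the sketch), camera and lidar detections can be matched by center distance and merged per matched pair:

```python
# Minimal sketch of result-level (late) fusion of camera and lidar detections.
# Assumed data layout (not from the paper): each detection carries a ground-plane
# position, a confidence score, and a class label in a shared road frame.
from dataclasses import dataclass

@dataclass
class Detection:
    x: float      # ground-plane position (m)
    y: float
    score: float  # detector confidence in [0, 1]
    label: str    # object class

def fuse_result_level(cam_dets, lidar_dets, max_dist=2.0):
    """Greedily match camera and lidar detections by center distance and
    merge matched pairs; unmatched detections from either sensor pass through."""
    fused, used = [], set()
    for c in sorted(cam_dets, key=lambda d: d.score, reverse=True):
        best_j, best_d = None, max_dist
        for j, l in enumerate(lidar_dets):
            if j in used or l.label != c.label:
                continue
            d = ((c.x - l.x) ** 2 + (c.y - l.y) ** 2) ** 0.5
            if d < best_d:
                best_j, best_d = j, d
        if best_j is None:
            fused.append(c)
            continue
        used.add(best_j)
        l = lidar_dets[best_j]
        w = c.score / (c.score + l.score)  # confidence-weighted position average
        fused.append(Detection(w * c.x + (1 - w) * l.x,
                               w * c.y + (1 - w) * l.y,
                               max(c.score, l.score), c.label))
    fused.extend(l for j, l in enumerate(lidar_dets) if j not in used)
    return fused
```

The same matching-and-merging pattern applies to camera-radar pairs; only the sensor-specific detection fields change.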
Related papers
- Condition-Aware Multimodal Fusion for Robust Semantic Perception of Driving Scenes [56.52618054240197]
We propose a novel, condition-aware multimodal fusion approach for robust semantic perception of driving scenes.
Our method, CAFuser, uses an RGB camera input to classify environmental conditions and generate a Condition Token that guides the fusion of multiple sensor modalities.
We set the new state of the art with CAFuser on the MUSES dataset with 59.7 PQ for multimodal panoptic segmentation and 78.2 mIoU for semantic segmentation, ranking first on the public benchmarks.
arXiv Detail & Related papers (2024-10-14T17:56:20Z) - DARTH: Holistic Test-time Adaptation for Multiple Object Tracking [87.72019733473562]
Multiple object tracking (MOT) is a fundamental component of perception systems for autonomous driving.
Despite the importance of safety in driving systems, no solution for adapting MOT to domain shift under test-time conditions has previously been proposed.
We introduce DARTH, a holistic test-time adaptation framework for MOT.
arXiv Detail & Related papers (2023-10-03T10:10:42Z) - Multi-Modal 3D Object Detection by Box Matching [109.43430123791684]
We propose a novel Fusion network by Box Matching (FBMNet) for multi-modal 3D detection.
With the learned assignments between 3D and 2D object proposals, the fusion for detection can be effectively performed by combining their ROI features.
arXiv Detail & Related papers (2023-05-12T18:08:51Z) - Exploring Contextual Representation and Multi-Modality for End-to-End
Autonomous Driving [58.879758550901364]
Recent perception systems enhance spatial understanding with sensor fusion but often lack full environmental context.
We introduce a framework that integrates three cameras to emulate the human field of view, coupled with top-down bird-eye-view semantic data to enhance contextual representation.
Our method achieves a displacement error of 0.67 m in open-loop settings, surpassing current methods by 6.9% on the nuScenes dataset.
arXiv Detail & Related papers (2022-10-13T05:56:20Z) - CFTrack: Center-based Radar and Camera Fusion for 3D Multi-Object
Tracking [9.62721286522053]
We propose an end-to-end network for joint object detection and tracking based on radar and camera sensor fusion.
Our proposed method uses a center-based radar-camera fusion algorithm for object detection and utilizes a greedy algorithm for object association.
We evaluate our method on the challenging nuScenes dataset, where it achieves 20.0 AMOTA and outperforms all vision-based 3D tracking methods in the benchmark.
arXiv Detail & Related papers (2021-07-11T23:56:53Z) - Multi-Modal 3D Object Detection in Autonomous Driving: a Survey [10.913958563906931]
Self-driving cars are equipped with a suite of sensors to conduct robust and accurate environment perception.
As the number and type of sensors keep increasing, combining them for better perception is becoming a natural trend.
This survey reviews recent fusion-based 3D detection deep learning models that leverage multiple sensor data sources.
arXiv Detail & Related papers (2021-06-24T02:52:12Z) - On the Role of Sensor Fusion for Object Detection in Future Vehicular
Networks [25.838878314196375]
We evaluate how using a combination of different sensors affects the detection of the environment in which the vehicles move and operate.
The final objective is to identify the optimal setup that would minimize the amount of data to be distributed over the channel.
arXiv Detail & Related papers (2021-04-23T18:58:37Z) - Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z) - Towards Autonomous Driving: a Multi-Modal 360° Perception
Proposal [87.11988786121447]
This paper presents a framework for 3D object detection and tracking for autonomous vehicles.
The solution, based on a novel sensor fusion configuration, provides accurate and reliable road environment detection.
A variety of tests of the system, deployed in an autonomous vehicle, have successfully assessed the suitability of the proposed perception stack.
arXiv Detail & Related papers (2020-08-21T20:36:21Z) - High-Precision Digital Traffic Recording with Multi-LiDAR Infrastructure
Sensor Setups [0.0]
We investigate the impact of fused LiDAR point clouds compared to single LiDAR point clouds.
The evaluation of the extracted trajectories shows that a fused infrastructure approach significantly improves tracking results and reaches accuracies within a few centimeters.
arXiv Detail & Related papers (2020-06-22T10:57:52Z)
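For the multi-LiDAR infrastructure setup above, a fused point cloud is typically obtained by transforming each sensor's points into a common road frame and concatenating them. A minimal sketch under that assumption (the calibration matrices, array shapes, and sensor poses are illustrative, not from the paper):

```python
import numpy as np

def fuse_lidar_clouds(clouds, extrinsics):
    """Transform each LiDAR's (N_i, 3) point cloud into a common road frame
    using its 4x4 sensor-to-road extrinsic matrix, then concatenate."""
    fused = []
    for pts, T in zip(clouds, extrinsics):
        homo = np.hstack([pts, np.ones((pts.shape[0], 1))])  # (N_i, 4) homogeneous
        fused.append((homo @ T.T)[:, :3])                     # each row becomes T @ p
    return np.vstack(fused)

# Usage with two dummy sensors at assumed poses (illustrative values only):
cloud_a = np.random.rand(100, 3)
cloud_b = np.random.rand(120, 3)
T_a = np.eye(4)                     # sensor A defines the road frame
T_b = np.eye(4); T_b[0, 3] = 25.0   # sensor B mounted 25 m further along the road
merged = fuse_lidar_clouds([cloud_a, cloud_b], [T_a, T_b])
```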
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.