A Cooperative Perception System Robust to Localization Errors
- URL: http://arxiv.org/abs/2210.06289v2
- Date: Wed, 26 Apr 2023 00:13:29 GMT
- Title: A Cooperative Perception System Robust to Localization Errors
- Authors: Zhiying Song, Fuxi Wen, Hailiang Zhang, Jun Li
- Abstract summary: We propose a distributed object-level cooperative perception system called OptiMatch.
The detected 3D bounding boxes and local state information are shared between the connected vehicles.
Experiment results show that the proposed framework outperforms the state-of-the-art benchmark fusion schemes.
- Score: 8.65435011972241
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cooperative perception is challenging for safety-critical autonomous driving
applications. The errors in the shared position and pose cause an inaccurate
relative transform estimation and disrupt the robust mapping of the Ego
vehicle. We propose a distributed object-level cooperative perception system
called OptiMatch, in which the detected 3D bounding boxes and local state
information are shared between the connected vehicles. To correct the noisy
relative transform, the local measurements of both connected vehicles (bounding
boxes) are utilized, and an optimal transport theory-based algorithm is
developed to filter out those objects jointly detected by the vehicles along
with their correspondence, constructing an associated co-visible set. A
correction transform is estimated from the matched object pairs and further
applied to the noisy relative transform, followed by global fusion and dynamic
mapping. Experiment results show that robust performance is achieved for
different levels of location and heading errors, and the proposed framework
outperforms the state-of-the-art benchmark fusion schemes, including early,
late, and intermediate fusion, on average precision by a large margin when
location and/or heading errors occur.
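The pipeline described above, associating co-visible boxes across vehicles and then estimating a rigid correction transform from the matched pairs, can be sketched in a simplified form. The snippet below is not the authors' OptiMatch implementation: it substitutes Hungarian assignment (a hard-assignment limiting case of optimal transport) for the paper's optimal-transport-based algorithm, and recovers a 2D rigid correction from matched box centers via the Kabsch/Umeyama least-squares method. All function names and the distance gate are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def match_boxes(ego_centers, cav_centers, gate=3.0):
    """Associate co-visible detections between the ego vehicle and a
    connected vehicle (CAV), assuming the CAV boxes have already been
    projected into the ego frame with the (noisy) relative transform."""
    # Pairwise Euclidean distances between box centers.
    cost = np.linalg.norm(
        ego_centers[:, None, :] - cav_centers[None, :, :], axis=-1
    )
    rows, cols = linear_sum_assignment(cost)
    # Gate out pairs that are too far apart to be the same object,
    # leaving an approximation of the "co-visible set".
    keep = cost[rows, cols] < gate
    return rows[keep], cols[keep]


def correction_transform(src, dst):
    """Least-squares rigid 2D transform (R, t) mapping src -> dst,
    via the Kabsch/Umeyama SVD construction."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Composing the two: the correction transform estimated from the matched centers is applied on top of the noisy relative transform before global fusion, so residual localization error only has to be small relative to inter-object spacing for the association to succeed.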
Related papers
- AutoLayout: Closed-Loop Layout Synthesis via Slow-Fast Collaborative Reasoning [102.71841660031065]
AutoLayout is a fully automated method that integrates a closed-loop self-validation process within a dual-system framework.
Its effectiveness was validated across 8 distinct scenarios, where it demonstrated a significant 10.1% improvement over SOTA methods.
arXiv Detail & Related papers (2025-07-06T08:35:22Z) - Anomaly Detection in Cooperative Vehicle Perception Systems under Imperfect Communication [4.575903181579272]
We propose a cooperative-perception-based anomaly detection framework (CPAD).
CPAD is a robust architecture that remains effective under communication interruptions.
Empirical results demonstrate that our approach outperforms standard anomaly classification methods in terms of F1-score and AUC.
arXiv Detail & Related papers (2025-01-28T22:41:06Z) - RoCo: Robust Collaborative Perception By Iterative Object Matching and Pose Adjustment [9.817492112784674]
Collaborative autonomous driving with multiple vehicles usually requires data fusion across multiple modalities.
In collaborative perception, the quality of object detection based on a given modality is highly sensitive to relative pose errors among the agents.
We propose RoCo, a novel unsupervised framework to conduct iterative object matching and agent pose adjustment.
arXiv Detail & Related papers (2024-08-01T03:29:33Z) - Self-Localized Collaborative Perception [49.86110931859302]
We propose CoBEVGlue, a novel self-localized collaborative perception system.
At the core of CoBEVGlue is a novel spatial alignment module, which provides the relative poses between agents.
CoBEVGlue achieves state-of-the-art detection performance under arbitrary localization noise and attacks.
arXiv Detail & Related papers (2024-06-18T15:26:54Z) - Robust Collaborative Perception without External Localization and Clock Devices [52.32342059286222]
A consistent spatial-temporal coordination across multiple agents is fundamental for collaborative perception.
Traditional methods depend on external devices to provide localization and clock signals.
We propose a novel approach: aligning by recognizing the inherent geometric patterns within the perceptual data of various agents.
arXiv Detail & Related papers (2024-05-05T15:20:36Z) - Self-supervised Adaptive Weighting for Cooperative Perception in V2V Communications [11.772899644895281]
Cooperative perception is an effective approach to addressing the shortcomings of single-vehicle perception.
Current cooperative fusion models rely on supervised models and do not address dynamic performance degradation caused by arbitrary channel impairments.
A self-supervised adaptive weighting model is proposed for intermediate fusion to mitigate the adverse effects of channel distortion.
arXiv Detail & Related papers (2023-12-16T06:21:09Z) - Cooperative Perception with Learning-Based V2V communications [11.772899644895281]
This work analyzes the performance of cooperative perception accounting for communications channel impairments.
A new late fusion scheme is proposed to leverage the robustness of intermediate features.
In order to reduce the data size incurred by cooperation, a convolutional neural network-based autoencoder is adopted.
arXiv Detail & Related papers (2023-11-17T05:41:23Z) - Poses as Queries: Image-to-LiDAR Map Localization with Transformers [5.704968411509063]
High-precision vehicle localization with commercial setups is a crucial technique for high-level autonomous driving tasks.
Estimating the pose by finding correspondences between such cross-modal sensor data is challenging.
We propose a novel Transformer-based neural network to register 2D images into a 3D LiDAR map in an end-to-end manner.
arXiv Detail & Related papers (2023-05-07T14:57:58Z) - ECO-TR: Efficient Correspondences Finding Via Coarse-to-Fine Refinement [80.94378602238432]
We propose an efficient structure named Correspondence Efficient Transformer (ECO-TR) by finding correspondences in a coarse-to-fine manner.
To achieve this, multiple transformer blocks are connected in stages to gradually refine the predicted coordinates.
Experiments on various sparse and dense matching tasks demonstrate the superiority of our method in both efficiency and effectiveness against existing state-of-the-art methods.
arXiv Detail & Related papers (2022-09-25T13:05:33Z) - Robust Self-Supervised LiDAR Odometry via Representative Structure Discovery and 3D Inherent Error Modeling [67.75095378830694]
In this paper, we aim to alleviate the influence of unreliable structures in the training, inference, and mapping phases.
We develop a two-stage odometry estimation network, where we obtain the ego-motion by estimating a set of sub-region transformations.
Our two-frame odometry outperforms the previous state of the art by 16%/12% in terms of translational/rotational errors.
arXiv Detail & Related papers (2022-02-27T12:52:27Z) - Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z) - Learning to Communicate and Correct Pose Errors [75.03747122616605]
We study the setting proposed in V2VNet, where nearby self-driving vehicles jointly perform object detection and motion forecasting in a cooperative manner.
We propose a novel neural reasoning framework that learns to communicate, to estimate potential errors, and to reach a consensus about those errors.
arXiv Detail & Related papers (2020-11-10T18:19:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.