CP-Guard: Malicious Agent Detection and Defense in Collaborative Bird's Eye View Perception
- URL: http://arxiv.org/abs/2412.12000v2
- Date: Fri, 23 May 2025 13:58:57 GMT
- Title: CP-Guard: Malicious Agent Detection and Defense in Collaborative Bird's Eye View Perception
- Authors: Senkang Hu, Yihang Tao, Guowen Xu, Yiqin Deng, Xianhao Chen, Yuguang Fang, Sam Kwong
- Abstract summary: Collaborative Perception (CP) has been shown to be a promising technique for autonomous driving. In CP, the ego CAV needs to receive messages from its collaborators, which makes it vulnerable to attacks by malicious agents. We propose CP-Guard, a tailored defense mechanism for CP that can be deployed by each agent to accurately detect and eliminate malicious agents.
- Score: 54.78412829889825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Collaborative Perception (CP) has been shown to be a promising technique for autonomous driving, where multiple connected and autonomous vehicles (CAVs) share their perception information to enhance the overall perception performance and expand the perception range. However, in CP, the ego CAV needs to receive messages from its collaborators, which makes it vulnerable to attacks by malicious agents. For example, a malicious agent can send harmful information to the ego CAV to mislead it. To address this critical issue, we propose a novel method, CP-Guard, a tailored defense mechanism for CP that can be deployed by each agent to accurately detect and eliminate malicious agents in its collaboration network. Our key idea is to enable CP to reach a consensus with, rather than a conflict against, the ego CAV's perception results. Based on this idea, we first develop a probability-agnostic sample consensus (PASAC) method to effectively sample a subset of the collaborators and verify the consensus without prior probabilities of malicious agents. Furthermore, we define a collaborative consistency loss (CCLoss) to capture the discrepancy between the ego CAV and its collaborators, which is used as a verification criterion for consensus. Finally, we conduct extensive experiments on collaborative bird's eye view (BEV) tasks, and our results demonstrate the effectiveness of CP-Guard. Code is available at https://github.com/CP-Security/CP-Guard
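To make the consensus idea concrete, below is a minimal sketch of PASAC-style subset verification with a toy CCLoss, written against simple NumPy stand-ins. The averaging fusion, the mean-absolute-disagreement loss, the brute-force subset enumeration, and the threshold are all illustrative assumptions, not the released CP-Guard implementation (see the repository above for that).

```python
import itertools
import numpy as np

def ccloss(ego_map: np.ndarray, fused_map: np.ndarray) -> float:
    """Toy collaborative consistency loss: mean absolute disagreement
    between the ego-only BEV confidence map and the fused map."""
    return float(np.mean(np.abs(ego_map - fused_map)))

def fuse(ego_map, collab_maps):
    """Placeholder fusion: average the ego map with collaborator maps
    (a learned fusion network would be used in practice)."""
    return np.mean(np.stack([ego_map, *collab_maps]), axis=0)

def pasac(ego_map, collab_maps, tau=0.05):
    """PASAC-style search (sketch): try collaborator subsets from largest
    to smallest and keep the first one whose fused output stays consistent
    with the ego perception; no prior probability of maliciousness needed."""
    ids = range(len(collab_maps))
    for k in range(len(collab_maps), 0, -1):      # prefer more collaborators
        for subset in itertools.combinations(ids, k):
            fused = fuse(ego_map, [collab_maps[i] for i in subset])
            if ccloss(ego_map, fused) < tau:      # consensus reached
                return set(subset)                # estimated benign set
    return set()                                  # fall back to ego-only

# Usage: three benign collaborators plus one that sends conflicting maps.
rng = np.random.default_rng(0)
ego = rng.random((32, 32))
benign = [ego + 0.02 * rng.standard_normal((32, 32)) for _ in range(3)]
malicious = 1.0 - ego                             # strongly conflicting message
print(pasac(ego, benign + [malicious]))           # -> {0, 1, 2}
```

Searching from larger to smaller subsets keeps as many collaborators as possible before falling back to ego-only perception, mirroring the goal of eliminating only the malicious agents.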
Related papers
- CP-FREEZER: Latency Attacks against Vehicular Cooperative Perception [14.108193844485285]
We present CP-FREEZER, the first latency attack that maximizes the delay of CP algorithms by injecting adversarial perturbations via V2V messages. Our findings reveal a critical threat to the availability of CP systems, highlighting the urgent need for robust defenses.
arXiv Detail & Related papers (2025-08-01T20:34:36Z)
- CP-uniGuard: A Unified, Probability-Agnostic, and Adaptive Framework for Malicious Agent Detection and Defense in Multi-Agent Embodied Perception Systems [21.478631468402977]
Collaborative Perception (CP) has been shown to be a promising technique for multi-agent autonomous driving and multi-agent robotic systems. In CP, an ego agent needs to receive messages from its collaborators, which makes it vulnerable to attacks from malicious agents. We propose a unified, probability-agnostic, and adaptive framework, namely CP-uniGuard, to accurately detect and eliminate malicious agents in its collaboration network.
arXiv Detail & Related papers (2025-06-28T14:02:14Z)
- CANTXSec: A Deterministic Intrusion Detection and Prevention System for CAN Bus Monitoring ECU Activations [53.036288487863786]
We propose CANTXSec, the first deterministic Intrusion Detection and Prevention system based on physical ECU activations. It detects and prevents classical attacks on the CAN bus, while also detecting advanced attacks that have been less investigated in the literature. We prove the effectiveness of our solution on a physical testbed, where we achieve 100% detection accuracy in both classes of attacks while preventing 100% of frame injection attacks (FIAs). A minimal sketch of the activation-matching idea follows this entry.
arXiv Detail & Related papers (2025-05-14T13:37:07Z)
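As a rough illustration of the deterministic activation-matching idea, the sketch below flags a frame whenever the ECU physically observed transmitting does not match the ECU authorized for that CAN ID. The ID-to-ECU table, the ECU names, and the hardware tap that reports the active ECU are all hypothetical interfaces, not CANTXSec's actual design.

```python
# Hypothetical ID->ECU authorization table; in a real system this would be
# derived from the vehicle's communication matrix.
AUTHORIZED_TX = {
    0x0C4: "engine_ecu",
    0x1A0: "brake_ecu",
    0x2B0: "steering_ecu",
}

def check_frame(can_id: int, active_ecu: str) -> bool:
    """Return True if the ECU physically observed transmitting is the one
    authorized for this CAN ID. A mismatch indicates a frame-injection
    attack (FIA): some other node transmitted an ID it does not own."""
    return AUTHORIZED_TX.get(can_id) == active_ecu

# Usage: a compromised infotainment unit injects a brake frame.
print(check_frame(0x1A0, "brake_ecu"))         # True  -> legitimate
print(check_frame(0x1A0, "infotainment_ecu"))  # False -> flag/prevent (FIA)
```

Because the check is a deterministic table lookup rather than a learned anomaly score, it admits the kind of 100% detection/prevention figures reported above.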
- CP-Guard+: A New Paradigm for Malicious Agent Detection and Defense in Collaborative Perception [53.088988929450494]
Collaborative perception (CP) is a promising method for safe connected and autonomous driving.
We propose a new paradigm for malicious agent detection that effectively identifies malicious agents at the feature level.
We also develop a robust defense method called CP-Guard+, which enhances the margin between the representations of benign and malicious features (a toy margin-loss sketch follows this entry).
arXiv Detail & Related papers (2025-02-07T12:58:45Z)
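The following toy sketch shows one way such a margin can be enforced: score each collaborator's feature against the ego feature, then apply a hinge loss that pushes malicious scores above benign ones by a fixed margin. The cosine-distance score and the hinge form are assumptions for illustration, not CP-Guard+'s exact formulation.

```python
import numpy as np

def anomaly_score(ego_feat: np.ndarray, collab_feat: np.ndarray) -> float:
    """Cosine distance between the ego feature and a collaborator's
    feature (higher = more suspicious)."""
    cos = float(ego_feat @ collab_feat) / (
        float(np.linalg.norm(ego_feat) * np.linalg.norm(collab_feat)) + 1e-8)
    return 1.0 - cos

def margin_loss(benign_scores, malicious_scores, margin=0.5):
    """Hinge objective that pushes every malicious score above every benign
    score by at least `margin`, widening the gap a detector thresholds on."""
    total = sum(max(0.0, margin - (m - b))
                for b in benign_scores for m in malicious_scores)
    return total / (len(benign_scores) * len(malicious_scores))

# Usage with random stand-ins for intermediate BEV features.
rng = np.random.default_rng(1)
ego = rng.standard_normal(256)
benign = [anomaly_score(ego, ego + 0.1 * rng.standard_normal(256)) for _ in range(3)]
malicious = [anomaly_score(ego, rng.standard_normal(256))]
print(benign, malicious)               # malicious score is clearly larger
print(margin_loss(benign, malicious))  # ~0 once the margin is satisfied
```

With well-separated scores the loss is zero, so training effort concentrates on pairs that still violate the margin.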
- Uncertainty Quantification for Collaborative Object Detection Under Adversarial Attacks [6.535251134834875]
Collaborative Object Detection (COD) and collaborative perception can integrate data or features from various entities.
However, adversarial attacks pose a potential threat to deep learning-based COD models. We propose the Trusted Uncertainty Quantification in Collaborative Perception framework (TUQCP).
arXiv Detail & Related papers (2025-02-04T18:03:32Z)
- GCP: Guarded Collaborative Perception with Spatial-Temporal Aware Malicious Agent Detection [11.336965062177722]
Collaborative perception is vulnerable to adversarial message attacks from malicious agents.
This paper reveals a novel blind area confusion (BAC) attack that compromises existing single-shot outlier-based detection methods.
We propose a Guarded Collaborative Perception (GCP) framework based on spatial-temporal aware malicious agent detection.
arXiv Detail & Related papers (2025-01-05T06:03:26Z)
- The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks [90.52808174102157]
In safety-critical applications such as medical imaging and autonomous driving, it is imperative to maintain both high adversarial robustness to protect against potential adversarial attacks and reliable uncertainty quantification in decision-making.
A notable knowledge gap remains concerning the uncertainty inherent in adversarially trained models.
This study investigates the uncertainty of deep learning models by examining the performance of conformal prediction (CP) under standard adversarial attacks (a minimal split-conformal sketch follows this entry).
arXiv Detail & Related papers (2024-05-14T18:05:19Z)
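For reference, here is a minimal split-conformal classification sketch of the kind of CP procedure the paper stress-tests; the model outputs and labels below are synthetic placeholders.

```python
import numpy as np

def conformal_quantile(cal_probs, cal_labels, alpha=0.1):
    """Split-conformal calibration: nonconformity = 1 - p(true class);
    return the ceil((n+1)(1-alpha)) / n empirical quantile."""
    scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    k = int(np.ceil((len(scores) + 1) * (1 - alpha))) - 1
    return np.sort(scores)[min(k, len(scores) - 1)]

def prediction_set(test_probs, qhat):
    """All classes whose nonconformity is within the threshold; contains
    the true label with probability >= 1 - alpha under exchangeability."""
    return np.where(1.0 - test_probs <= qhat)[0]

# Usage with synthetic softmax outputs for a 3-class problem.
rng = np.random.default_rng(2)
cal_probs = rng.dirichlet([5, 1, 1], size=200)  # model usually favors class 0
cal_labels = np.zeros(200, dtype=int)           # true class is always 0 here
qhat = conformal_quantile(cal_probs, cal_labels, alpha=0.1)
print(prediction_set(np.array([0.7, 0.2, 0.1]), qhat))
```

The coverage guarantee rests on exchangeability between calibration and test data, which is precisely what an adversarial perturbation at test time violates.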
- CMP: Cooperative Motion Prediction with Multi-Agent Communication [21.60646440715162]
This paper explores the feasibility and effectiveness of cooperative motion prediction.
Our method, CMP, takes LiDAR signals as model input to enhance tracking and prediction capabilities.
In particular, CMP reduces the average prediction error by 16.4% with fewer missing detections.
arXiv Detail & Related papers (2024-03-26T17:53:27Z)
- Malicious Agent Detection for Robust Multi-Agent Collaborative Perception [52.261231738242266]
Multi-agent collaborative (MAC) perception is more vulnerable to adversarial attacks than single-agent perception.
We propose Malicious Agent Detection (MADE), a reactive defense specific to MAC perception.
We conduct comprehensive evaluations on the benchmark 3D dataset V2X-Sim and the real-road dataset DAIR-V2X.
arXiv Detail & Related papers (2023-10-18T11:36:42Z)
- Among Us: Adversarially Robust Collaborative Perception by Consensus [50.73128191202585]
Multiple robots can collaboratively perceive a scene (e.g., detect objects) better than individual robots can.
We propose ROBOSAC, a novel sampling-based defense strategy generalizable to unseen attackers.
We validate our method on the task of collaborative 3D object detection in autonomous driving scenarios.
arXiv Detail & Related papers (2023-03-16T17:15:25Z)
- Improving the Adversarial Robustness for Speaker Verification by Self-Supervised Learning [95.60856995067083]
This work is among the first to perform adversarial defense for automatic speaker verification (ASV) without knowing the specific attack algorithms.
We propose to perform adversarial defense from two perspectives: 1) adversarial perturbation purification and 2) adversarial perturbation detection.
Experimental results show that our detection module effectively shields the ASV by detecting adversarial samples with an accuracy of around 80%.
arXiv Detail & Related papers (2021-06-01T07:10:54Z)
- Challenging the adversarial robustness of DNNs based on error-correcting output codes [33.46319608673487]
ECOC-based networks can be attacked quite easily by introducing a small adversarial perturbation. Moreover, adversarial examples can be generated in such a way as to achieve high probabilities for the predicted target class.
arXiv Detail & Related papers (2020-03-26T12:14:56Z)