CP-FREEZER: Latency Attacks against Vehicular Cooperative Perception
- URL: http://arxiv.org/abs/2508.01062v1
- Date: Fri, 01 Aug 2025 20:34:36 GMT
- Title: CP-FREEZER: Latency Attacks against Vehicular Cooperative Perception
- Authors: Chenyi Wang, Ruoyu Song, Raymond Muller, Jean-Philippe Monteuuis, Z. Berkay Celik, Jonathan Petit, Ryan Gerdes, Ming Li
- Abstract summary: We present CP-FREEZER, the first latency attack that maximizes the delay of CP algorithms by injecting adversarial perturbation via V2V messages. Our findings reveal a critical threat to the availability of CP systems, highlighting the urgent need for robust defenses.
- Score: 14.108193844485285
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cooperative perception (CP) enhances situational awareness of connected and autonomous vehicles by exchanging and combining messages from multiple agents. While prior work has explored adversarial integrity attacks that degrade perceptual accuracy, little is known about CP's robustness against attacks on timeliness (or availability), a safety-critical requirement for autonomous driving. In this paper, we present CP-FREEZER, the first latency attack that maximizes the computation delay of CP algorithms by injecting adversarial perturbation via V2V messages. Our attack resolves several unique challenges, including the non-differentiability of point cloud preprocessing and asynchronous knowledge of the victim's input due to transmission delays, and uses a novel loss function that effectively maximizes the execution time of the CP pipeline. Extensive experiments show that CP-FREEZER increases end-to-end CP latency by over $90\times$, pushing per-frame processing time beyond 3 seconds with a 100% success rate on our real-world vehicle testbed. Our findings reveal a critical threat to the availability of CP systems, highlighting the urgent need for robust defenses.
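The attack's core idea can be illustrated with a minimal, hypothetical PyTorch sketch: optimize a bounded perturbation on the attacker's shared feature map so that a differentiable proxy for post-processing workload (here, a soft count of detection candidates entering NMS) is maximized. The sum fusion, toy detection head, and proxy loss below are illustrative assumptions, not the paper's actual pipeline.

```python
import torch

torch.manual_seed(0)

# Toy detection head mapping a fused BEV feature map to per-anchor scores.
head = torch.nn.Conv2d(64, 1, kernel_size=1)
for p in head.parameters():
    p.requires_grad_(False)  # the victim model is fixed; only delta is optimized

victim_feat = torch.randn(1, 64, 32, 32)    # victim's own feature map
attacker_feat = torch.randn(1, 64, 32, 32)  # attacker's shared V2V feature map

delta = torch.zeros_like(attacker_feat, requires_grad=True)
eps, alpha, steps = 0.5, 0.05, 50
score_thresh = 0.0  # candidates above this enter (and slow down) NMS

for _ in range(steps):
    fused = victim_feat + (attacker_feat + delta)  # assumed sum fusion
    scores = head(fused).flatten()
    # Differentiable latency proxy: soft count of candidates clearing the
    # threshold; more survivors mean more pairwise work inside NMS.
    latency_proxy = torch.sigmoid(scores - score_thresh).sum()
    latency_proxy.backward()
    with torch.no_grad():
        delta += alpha * delta.grad.sign()  # PGD-style ascent step
        delta.clamp_(-eps, eps)             # keep the perturbation bounded
        delta.grad.zero_()

with torch.no_grad():
    before = (head(victim_feat + attacker_feat).flatten() > score_thresh).sum()
    after = (head(victim_feat + attacker_feat + delta).flatten() > score_thresh).sum()
print(f"candidates entering NMS: {before.item()} -> {after.item()}")
```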
Related papers
- Pulse-Level Simulation of Crosstalk Attacks on Superconducting Quantum Hardware [0.0]
Hardware crosstalk in superconducting quantum computers poses a severe security threat.
We present a simulation-based study of active crosstalk attacks at the pulse level.
We identify the pulse and coupling configurations that cause the largest logical errors.
arXiv Detail & Related papers (2025-07-22T02:52:43Z)
- CP-Guard+: A New Paradigm for Malicious Agent Detection and Defense in Collaborative Perception [53.088988929450494]
Collaborative perception (CP) is a promising method for safe connected and autonomous driving.
We propose a new paradigm for malicious agent detection that effectively identifies malicious agents at the feature level.
We also develop a robust defense method called CP-Guard+, which enhances the margin between the representations of benign and malicious features (a toy margin sketch follows this entry).
arXiv Detail & Related papers (2025-02-07T12:58:45Z)
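A minimal sketch of the margin idea: train a scorer so malicious collaborator features score higher than benign ones by a fixed hinge margin. The encoder, scorer, and synthetic feature shift are illustrative assumptions, not CP-Guard+'s actual design.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

encoder = torch.nn.Linear(128, 32)  # toy feature encoder
scorer = torch.nn.Linear(32, 1)     # maliciousness score
margin = 1.0

benign = torch.randn(16, 128)
malicious = torch.randn(16, 128) + 2.0  # assumed detectable feature shift

opt = torch.optim.Adam(list(encoder.parameters()) + list(scorer.parameters()), lr=1e-2)
for _ in range(100):
    s_b = scorer(encoder(benign))
    s_m = scorer(encoder(malicious))
    # Hinge objective: push malicious scores above benign scores by `margin`.
    loss = F.relu(margin - (s_m.mean() - s_b.mean()))
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    print("benign score:", scorer(encoder(benign)).mean().item(),
          "| malicious score:", scorer(encoder(malicious)).mean().item())
```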
- GCP: Guarded Collaborative Perception with Spatial-Temporal Aware Malicious Agent Detection [11.336965062177722]
Collaborative perception is vulnerable to adversarial message attacks from malicious agents.
This paper reveals a novel blind area confusion (BAC) attack that compromises existing single-shot outlier-based detection methods.
We propose the Guarded Collaborative Perception framework based on spatial-temporal aware malicious agent detection.
arXiv Detail & Related papers (2025-01-05T06:03:26Z)
- CP-Guard: Malicious Agent Detection and Defense in Collaborative Bird's Eye View Perception [54.78412829889825]
Collaborative Perception (CP) has shown promise as a technique for autonomous driving.
In CP, the ego CAV needs to receive messages from its collaborators, which makes it vulnerable to attacks by malicious agents.
We propose CP-Guard, a tailored defense mechanism for CP that can be deployed by each agent to accurately detect and eliminate malicious agents (a toy consistency-check sketch follows this entry).
arXiv Detail & Related papers (2024-12-16T17:28:25Z)
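One plausible way to detect and eliminate malicious agents is a leave-one-out consistency check: flag any collaborator whose message pulls the fused result far from the consensus of the others. The sketch below is a hypothetical illustration under that assumption, not CP-Guard's actual mechanism.

```python
import torch

torch.manual_seed(0)

def fuse(feats):
    """Toy fusion: average the collaborators' feature maps."""
    return torch.stack(feats).mean(dim=0)

# Four collaborators; collaborator 2 sends a perturbed message.
feats = [torch.randn(64, 32, 32) for _ in range(4)]
feats[2] = feats[2] + 5.0  # assumed attack: a large injected offset

full = fuse(feats)
threshold = 1.0  # assumed deviation threshold, tuned on benign data

for i in range(len(feats)):
    others = [f for j, f in enumerate(feats) if j != i]
    # How much does the fused output change if agent i is excluded?
    deviation = (full - fuse(others)).abs().mean().item()
    flag = "MALICIOUS?" if deviation > threshold else "ok"
    print(f"agent {i}: deviation {deviation:.3f} -> {flag}")
```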
- The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks [90.52808174102157]
In safety-critical applications such as medical imaging and autonomous driving, it is imperative to maintain both high adversarial robustness to protect against potential adversarial attacks and reliable uncertainty quantification in decision-making.
A notable knowledge gap remains concerning the uncertainty inherent in adversarially trained models.
This study investigates the uncertainty of deep learning models by examining the performance of conformal prediction (CP) in the context of standard adversarial attacks (a minimal split-conformal sketch follows this entry).
arXiv Detail & Related papers (2024-05-14T18:05:19Z)
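For reference, split conformal prediction itself is a short procedure: calibrate a quantile of nonconformity scores on held-out data, then return every label whose score clears it. The classifier outputs and data below are synthetic stand-ins.

```python
import numpy as np

np.random.seed(0)
n_cal, n_classes, alpha = 500, 10, 0.1

# Assume softmax probabilities from some already-trained classifier.
cal_probs = np.random.dirichlet(np.ones(n_classes), size=n_cal)
cal_labels = np.random.randint(n_classes, size=n_cal)

# Nonconformity score: 1 - probability assigned to the true class.
scores = 1.0 - cal_probs[np.arange(n_cal), cal_labels]

# Conformal quantile with the finite-sample correction.
q = np.quantile(scores, np.ceil((n_cal + 1) * (1 - alpha)) / n_cal, method="higher")

# Prediction set for a new example: all classes with score <= q.
test_probs = np.random.dirichlet(np.ones(n_classes))
pred_set = np.where(1.0 - test_probs <= q)[0]
print("prediction set:", pred_set, "| coverage target:", 1 - alpha)
```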
- V2X Cooperative Perception for Autonomous Driving: Recent Advances and Challenges [32.11627955649814]
Vehicle-to-everything (V2X) cooperative perception (CP) allows vehicles to share perception data, thereby enhancing situational awareness and overcoming the limitations of the sensing ability of individual vehicles.
V2X CP is crucial for extending perception range, improving accuracy, and strengthening the decision-making and control capabilities of autonomous vehicles in complex environments.
This paper provides a comprehensive survey of recent advances in V2X CP, introducing mathematical models of CP processes across various collaboration strategies (a toy fusion sketch follows this entry).
arXiv Detail & Related papers (2023-10-05T13:19:48Z)
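Collaboration strategies in V2X CP are commonly grouped into early (raw data), intermediate (feature), and late (output) fusion; the toy sketch below contrasts the three, with all models and shapes as hypothetical placeholders.

```python
import torch

backbone = torch.nn.Conv2d(1, 8, 3, padding=1)  # shared toy encoder
head = torch.nn.Conv2d(8, 1, 1)                 # toy detection head

ego_raw = torch.randn(1, 1, 32, 32)   # ego sensor data (e.g., a BEV grid)
peer_raw = torch.randn(1, 1, 32, 32)  # collaborator sensor data

# Early fusion: share raw sensor data, fuse before the network.
early_out = head(backbone(ego_raw + peer_raw))

# Intermediate fusion: share compact feature maps, fuse mid-network.
inter_out = head(backbone(ego_raw) + backbone(peer_raw))

# Late fusion: share final outputs, fuse the detections themselves.
late_out = (head(backbone(ego_raw)) + head(backbone(peer_raw))) / 2

print(early_out.shape, inter_out.shape, late_out.shape)
```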
- Visual Prompting for Adversarial Robustness [63.89295305670113]
We use visual prompting (VP) to improve the adversarial robustness of a fixed, pre-trained model at testing time.
We propose a new VP method, termed Class-wise Adversarial Visual Prompting (C-AVP), to generate class-wise visual prompts.
C-AVP outperforms the conventional VP method, with 2.1X standard accuracy gain and 2X robust accuracy gain (a minimal VP sketch follows this entry).
arXiv Detail & Related papers (2022-10-12T15:06:07Z)
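A minimal sketch of the universal-prompt variant of VP: a single learnable input-space pattern is optimized while the pre-trained model stays frozen; C-AVP would instead keep one such prompt per class. The model and data here are toys, not the paper's setup.

```python
import torch

torch.manual_seed(0)

# Stand-in for a pre-trained classifier; its weights stay frozen.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
for p in model.parameters():
    p.requires_grad_(False)

prompt = torch.zeros(1, 3, 32, 32, requires_grad=True)  # universal visual prompt
opt = torch.optim.Adam([prompt], lr=1e-2)
loss_fn = torch.nn.CrossEntropyLoss()

images = torch.rand(64, 3, 32, 32)
labels = torch.randint(0, 10, (64,))

for _ in range(50):
    # The prompt is added to every input, at training and test time alike.
    logits = model((images + prompt).clamp(0, 1))
    loss = loss_fn(logits, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```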
- Robust Tracking against Adversarial Attacks [69.59717023941126]
We first attempt to generate adversarial examples on top of video sequences to improve the tracking robustness against adversarial attacks.
We apply the proposed adversarial attack and defense approaches to state-of-the-art deep tracking algorithms.
arXiv Detail & Related papers (2020-07-20T08:05:55Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to user and entity behavioural analytics (UEBA) for cyber-security.
In this paper, we present a solution that effectively mitigates such attacks by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)