Sequential Attacks on Kalman Filter-based Forward Collision Warning Systems
- URL: http://arxiv.org/abs/2012.08704v1
- Date: Wed, 16 Dec 2020 02:26:27 GMT
- Title: Sequential Attacks on Kalman Filter-based Forward Collision Warning Systems
- Authors: Yuzhe Ma, Jon Sharp, Ruizhe Wang, Earlence Fernandes, Xiaojin Zhu
- Abstract summary: We study adversarial attacks on Kalman Filter (KF) as part of the machine-human hybrid system of Forward Collision Warning.
Our attack goal is to negatively affect human braking decisions by causing KF to output incorrect state estimations.
We accomplish this by sequentially manipulating measurements fed into the KF, and propose a novel Model Predictive Control (MPC) approach to compute the optimal manipulation.
- Score: 23.117910305213016
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Kalman Filter (KF) is widely used in various domains to perform sequential
learning or variable estimation. In the context of autonomous vehicles, KF
constitutes the core component of many Advanced Driver Assistance Systems
(ADAS), such as Forward Collision Warning (FCW). It tracks the states
(distance, velocity, etc.) of relevant traffic objects based on sensor
measurements. The tracking output of KF is often fed into downstream logic to
produce alerts, which will then be used by human drivers to make driving
decisions in near-collision scenarios. In this paper, we study adversarial
attacks on KF as part of the more complex machine-human hybrid system of
Forward Collision Warning. Our attack goal is to negatively affect human
braking decisions by causing KF to output incorrect state estimations that lead
to false or delayed alerts. We accomplish this by sequentially manipulating
measurements fed into the KF, and propose a novel Model Predictive Control
(MPC) approach to compute the optimal manipulation. Via experiments conducted
in a simulated driving environment, we show that the attacker is able to
successfully change FCW alert signals through planned manipulation over
measurements prior to the desired target time. These results demonstrate that
our attack can stealthily mislead a distracted human driver and cause vehicle
collisions.
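To make the attack idea concrete, below is a minimal, hypothetical sketch (not the authors' code) of how bounded, sequential measurement perturbations can steer a Kalman Filter's distance estimate toward an attacker-chosen value. A greedy one-step rule stands in for the paper's MPC planner, and the motion model, noise covariances, budget, and target are all illustrative assumptions.
```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
H = np.array([[1.0, 0.0]])              # sensor reports distance only
Q = np.diag([1e-3, 1e-3])               # process noise (hypothetical)
R = np.array([[0.05]])                  # measurement noise (hypothetical)

def kf_step(x, P, z):
    """One predict + update cycle of a linear Kalman Filter."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

rng = np.random.default_rng(0)
x_est, P = np.array([30.0, -2.0]), np.eye(2)
true_x = np.array([30.0, -2.0])         # lead vehicle closing at 2 m/s
budget, target = 1.5, 25.0              # per-step bound (m); attacker's goal

for _ in range(50):
    true_x = F @ true_x
    z = H @ true_x + rng.normal(0.0, np.sqrt(R[0, 0]), size=1)
    # Greedy surrogate for the paper's MPC planner: choose the bounded
    # perturbation that moves this step's posterior distance toward `target`.
    x_pred = F @ x_est
    P_pred = F @ P @ F.T + Q
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    clean_post = x_pred[0] + K[0, 0] * (z[0] - x_pred[0])
    delta = np.clip((target - clean_post) / K[0, 0], -budget, budget)
    x_est, P = kf_step(x_est, P, z + delta)

print(f"true distance {true_x[0]:.1f} m, KF believes {x_est[0]:.1f} m")
```
Because the KF update is linear in the measurement, the best bounded one-step perturbation is simply the ideal correction clipped to the budget; the paper's MPC formulation instead plans the whole perturbation sequence ahead of the target time.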
Related papers
- Sensor Deprivation Attacks for Stealthy UAV Manipulation [51.9034385791934]
Unmanned Aerial Vehicles autonomously perform tasks using state-of-the-art control algorithms.
In this work, we propose multi-part Sensor Deprivation Attacks (SDAs), aiming to stealthily impact process control via sensor reconfiguration.
arXiv Detail & Related papers (2024-10-14T23:03:58Z) - Detecting stealthy cyberattacks on adaptive cruise control vehicles: A
machine learning approach [5.036807309572884]
More insidious attacks, which only slightly alter driving behavior, can result in network-wide increases in congestion, fuel consumption, and even crash risk without being easily detected.
We present a traffic model framework for three types of potential cyberattacks: malicious manipulation of vehicle control commands, false data injection attacks on sensor measurements, and denial-of-service (DoS) attacks.
A novel generative adversarial network (GAN)-based anomaly detection model is proposed for real-time identification of such attacks using vehicle trajectory data.
arXiv Detail & Related papers (2023-10-26T01:22:10Z) - CAT: Closed-loop Adversarial Training for Safe End-to-End Driving [54.60865656161679]
Closed-loop Adversarial Training (CAT) is a framework for safe end-to-end driving in autonomous vehicles.
CAT aims to continuously improve the safety of driving agents by training the agent on safety-critical scenarios.
CAT can effectively generate adversarial scenarios countering the agent being trained.
arXiv Detail & Related papers (2023-10-19T02:49:31Z) - Runtime Stealthy Perception Attacks against DNN-based Adaptive Cruise Control Systems [8.561553195784017]
This paper evaluates the security of deep neural network (DNN)-based ACC systems under runtime perception attacks.
We present a context-aware strategy for the selection of the most critical times for triggering the attacks.
We evaluate the effectiveness of the proposed attack using an actual vehicle, a publicly available driving dataset, and a realistic simulation platform.
arXiv Detail & Related papers (2023-07-18T03:12:03Z) - Cognitive Accident Prediction in Driving Scenes: A Multimodality
Benchmark [77.54411007883962]
We propose a Cognitive Accident Prediction (CAP) method that explicitly leverages human-inspired cognition, combining text descriptions of the visual observation with driver attention, to facilitate model training.
CAP is formulated by an attentive text-to-vision shift fusion module, an attentive scene context transfer module, and the driver attention guided accident prediction module.
We construct a new large-scale benchmark consisting of 11,727 in-the-wild accident videos with over 2.19 million frames.
arXiv Detail & Related papers (2022-12-19T11:43:02Z) - Spatial-Temporal Anomaly Detection for Sensor Attacks in Autonomous
Vehicles [1.7188280334580195]
Time-of-flight (ToF) distance measurement devices are vulnerable to spoofing, triggering and false data injection attacks.
We propose a spatial-temporal anomaly detection model, STAnDS, which incorporates a residual-error spatial detector with time-based expected-change detection (a simplified residual-check sketch follows this entry).
arXiv Detail & Related papers (2022-12-15T12:21:27Z) - Reinforcement Learning based Cyberattack Model for Adaptive Traffic
- Reinforcement Learning based Cyberattack Model for Adaptive Traffic Signal Controller in Connected Transportation Systems [61.39400591328625]
In a connected transportation system, adaptive traffic signal controllers (ATSC) utilize real-time vehicle trajectory data received from vehicles to regulate green time.
This wireless connectivity expands the ATSC's attack surface and increases its vulnerability to various cyber-attack modes.
One such mode is a 'sybil' attack, in which an attacker creates fake vehicles in the network.
An RL agent is trained to learn an optimal rate of sybil vehicle injection to create congestion for one or more intersection approaches.
arXiv Detail & Related papers (2022-10-31T20:12:17Z) - AdvDO: Realistic Adversarial Attacks for Trajectory Prediction [87.96767885419423]
Trajectory prediction is essential for autonomous vehicles to plan correct and safe driving behaviors.
We devise an optimization-based adversarial attack framework to generate realistic adversarial trajectories.
Our attack can lead an AV to drive off road or collide into other vehicles in simulation.
arXiv Detail & Related papers (2022-09-19T03:34:59Z) - A Certifiable Security Patch for Object Tracking in Self-Driving Systems
via Historical Deviation Modeling [22.753164675538457]
We present the first systematic research on the security of object tracking in self-driving cars.
We prove that the mainstream multi-object tracker (MOT) based on the Kalman Filter (KF) is unsafe even with multi-sensor fusion enabled.
We propose a simple yet effective security patch for KF-based MOT, the core of which is an adaptive strategy to balance the focus of KF on observations and predictions (an illustrative adaptive-update sketch follows this entry).
arXiv Detail & Related papers (2022-07-18T12:30:24Z) - Control-Aware Prediction Objectives for Autonomous Driving [78.19515972466063]
- Control-Aware Prediction Objectives for Autonomous Driving [78.19515972466063]
We present control-aware prediction objectives (CAPOs) to evaluate the downstream effect of predictions on control without requiring the planner to be differentiable.
We propose two types of importance weights that weight the predictive likelihood: one using an attention model between agents, and another based on control variation when exchanging predicted trajectories for ground truth trajectories.
arXiv Detail & Related papers (2022-04-28T07:37:21Z) - An NCAP-like Safety Indicator for Self-Driving Cars [2.741266294612776]
This paper proposes a mechanism to assess the safety of autonomous cars.
It assesses the car's safety in scenarios where the car must avoid collision with an adversary.
The safety measure, called Safe-Kamikaze Distance, computes the average similarity between sets of safe adversary trajectories and kamikaze trajectories close to the safe trajectories (a simplified set-similarity sketch follows this entry).
arXiv Detail & Related papers (2021-04-02T02:39:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.