Multi-Agent Reinforcement Learning for Assessing False-Data Injection
Attacks on Transportation Networks
- URL: http://arxiv.org/abs/2312.14625v2
- Date: Wed, 6 Mar 2024 22:15:48 GMT
- Title: Multi-Agent Reinforcement Learning for Assessing False-Data Injection
Attacks on Transportation Networks
- Authors: Taha Eghtesad, Sirui Li, Yevgeniy Vorobeychik, Aron Laszka
- Abstract summary: We introduce a computational framework to find worst-case data-injection attacks against transportation networks.
First, we devise an adversarial model with a threat actor who can manipulate drivers by increasing the travel times that they perceive on certain roads.
Then, we employ hierarchical multi-agent reinforcement learning to find an approximately optimal adversarial strategy for data manipulation.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The increasing reliance of drivers on navigation applications has made
transportation networks more susceptible to data-manipulation attacks by
malicious actors. Adversaries may exploit vulnerabilities in the data
collection or processing of navigation services to inject false information,
and thus interfere with drivers' route selection. Such attacks can
significantly increase traffic congestion, resulting in substantial waste of
time and resources, and may even disrupt essential services that rely on road
networks. To assess the threat posed by such attacks, we introduce a
computational framework to find worst-case data-injection attacks against
transportation networks. First, we devise an adversarial model with a threat
actor who can manipulate drivers by increasing the travel times that they
perceive on certain roads. Then, we employ hierarchical multi-agent
reinforcement learning to find an approximately optimal adversarial strategy
for data manipulation. We demonstrate the applicability of our approach by
simulating attacks on the Sioux Falls, SD network topology.
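The attack idea described in the abstract, inflating the travel times drivers perceive so that their route choices degrade network-wide travel time, can be illustrated with a toy two-route example. This is a minimal sketch, not the paper's hierarchical MARL method; the linear congestion function and all parameter values are assumptions chosen for illustration only:

```python
# Toy illustration (not the paper's implementation) of a perceived-travel-time
# false-data injection attack. Two parallel routes connect an origin to a
# destination; each driver picks the route with the lower *perceived* time.
# Actual travel time on a route grows linearly with its load (a crude
# congestion model). The attacker adds a fake delay to one route's perceived
# time, pushing drivers onto the other route and raising total actual delay.

def route_time(free_flow, load):
    """Actual travel time: free-flow time plus linear congestion per vehicle."""
    return free_flow * (1.0 + 0.1 * load)

def assign_drivers(n_drivers, free_flows, fake_delay=(0.0, 0.0)):
    """Greedily assign drivers, one at a time, to the route whose
    perceived time (actual time + injected fake delay) is lower."""
    loads = [0, 0]
    for _ in range(n_drivers):
        perceived = [route_time(free_flows[r], loads[r]) + fake_delay[r]
                     for r in (0, 1)]
        loads[perceived.index(min(perceived))] += 1
    return loads

def total_delay(loads, free_flows):
    """Total actual travel time experienced across all drivers."""
    return sum(loads[r] * route_time(free_flows[r], loads[r]) for r in (0, 1))

free_flows = (10.0, 12.0)      # route 0 is nominally faster
baseline = assign_drivers(20, free_flows)
# Attacker injects a fake 30-unit delay into the perceived time of route 0.
attacked = assign_drivers(20, free_flows, fake_delay=(30.0, 0.0))
print(total_delay(baseline, free_flows), total_delay(attacked, free_flows))
```

In this sketch the fake delay on the nominally faster route pushes every driver onto the slower one, so the total actual delay rises even though no physical road condition changed; the paper's framework searches for such manipulations on realistic network topologies.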
Related papers
- Navigating Connected Car Cybersecurity: Location Anomaly Detection with RAN Data [2.147995542780459]
Cyber-attacks, including hijacking and spoofing, pose significant threats to connected cars.
This paper presents a novel approach for identifying potential attacks through Radio Access Network (RAN) event monitoring.
The major contribution of this paper is a location anomaly detection module that identifies devices that appear in multiple locations simultaneously.
arXiv Detail & Related papers (2024-07-02T22:42:45Z)
- CANEDERLI: On The Impact of Adversarial Training and Transferability on CAN Intrusion Detection Systems [17.351539765989433]
A growing integration of vehicles with external networks has led to a surge in attacks targeting their Controller Area Network (CAN) internal bus.
As a countermeasure, various Intrusion Detection Systems (IDSs) have been suggested in the literature to prevent and mitigate these threats.
Most of these systems rely on data-driven approaches such as Machine Learning (ML) and Deep Learning (DL) models.
In this paper, we present CANEDERLI, a novel framework for securing CAN-based IDSs.
arXiv Detail & Related papers (2024-04-06T14:54:11Z)
- Cross-Domain AI for Early Attack Detection and Defense Against Malicious Flows in O-RAN [5.196266559887213]
Cross-Domain Artificial Intelligence (AI) may be key to addressing such attacks, although its application in Open Radio Access Network (O-RAN) is still in its infancy.
Our results demonstrate the potential of the proposed approach, achieving an accuracy rate of 93%.
This approach not only bridges critical gaps in mobile network security but also showcases the potential of cross-domain AI in enhancing the efficacy of network security measures.
arXiv Detail & Related papers (2024-01-17T13:29:47Z)
- Detecting stealthy cyberattacks on adaptive cruise control vehicles: A machine learning approach [5.036807309572884]
More insidious attacks, which only slightly alter driving behavior, can result in network-wide increases in congestion, fuel consumption, and even crash risk without being easily detected.
We present a traffic model framework for three types of potential cyberattacks: malicious manipulation of vehicle control commands, false data injection attacks on sensor measurements, and denial-of-service (DoS) attacks.
A novel generative adversarial network (GAN)-based anomaly detection model is proposed for real-time identification of such attacks using vehicle trajectory data.
arXiv Detail & Related papers (2023-10-26T01:22:10Z)
- Reinforcement Learning based Cyberattack Model for Adaptive Traffic Signal Controller in Connected Transportation Systems [61.39400591328625]
In a connected transportation system, adaptive traffic signal controllers (ATSC) utilize real-time vehicle trajectory data received from vehicles to regulate green time.
This wireless connectivity expands the ATSC's attack surface and makes it vulnerable to various cyber-attack modes.
One such mode is a 'sybil' attack, in which an attacker creates fake vehicles in the network.
An RL agent is trained to learn an optimal rate of sybil-vehicle injection that creates congestion on targeted approaches.
arXiv Detail & Related papers (2022-10-31T20:12:17Z)
- Efficient Federated Learning with Spike Neural Networks for Traffic Sign Recognition [70.306089187104]
We introduce powerful Spike Neural Networks (SNNs) into traffic sign recognition for energy-efficient and fast model training.
Numerical results indicate that the proposed federated SNN outperforms traditional federated convolutional neural networks in terms of accuracy, noise immunity, and energy efficiency as well.
arXiv Detail & Related papers (2022-05-28T03:11:48Z)
- Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z)
- Adversarial Attacks On Multi-Agent Communication [80.4392160849506]
Modern autonomous systems will soon be deployed at scale, opening up the possibility for cooperative multi-agent systems.
Such advantages rely heavily on communication channels which have been shown to be vulnerable to security breaches.
In this paper, we explore such adversarial attacks in a novel multi-agent setting where agents communicate by sharing learned intermediate representations.
arXiv Detail & Related papers (2021-01-17T00:35:26Z)
- Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses [150.64470864162556]
This work systematically categorizes and discusses a wide range of dataset vulnerabilities and exploits.
In addition to describing various poisoning and backdoor threat models and the relationships among them, we develop their unified taxonomy.
arXiv Detail & Related papers (2020-12-18T22:38:47Z)
- Towards Transferable Adversarial Attack against Deep Face Recognition [58.07786010689529]
Deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples.
Transferable adversarial examples can severely hinder the robustness of DCNNs.
We propose DFANet, a dropout-based method used in convolutional layers, which can increase the diversity of surrogate models.
We generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries.
arXiv Detail & Related papers (2020-04-13T06:44:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.