Data Poisoning Attacks in Intelligent Transportation Systems: A Survey
- URL: http://arxiv.org/abs/2407.15855v1
- Date: Sat, 6 Jul 2024 01:02:22 GMT
- Title: Data Poisoning Attacks in Intelligent Transportation Systems: A Survey
- Authors: Feilong Wang, Xin Wang, Xuegang Ban
- Abstract summary: This paper concentrates on data poisoning attack models against ITS.
We identify the main ITS data sources vulnerable to poisoning attacks and application scenarios that enable staging such attacks.
Our work can serve as a guideline to better understand the threat of data poisoning attacks against ITS applications, while also giving a perspective on the future development of trustworthy ITS.
- Score: 8.27315203718422
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Emerging technologies drive the ongoing transformation of Intelligent Transportation Systems (ITS). This transformation has given rise to cybersecurity concerns, among which data poisoning attack emerges as a new threat as ITS increasingly relies on data. In data poisoning attacks, attackers inject malicious perturbations into datasets, potentially leading to inaccurate results in offline learning and real-time decision-making processes. This paper concentrates on data poisoning attack models against ITS. We identify the main ITS data sources vulnerable to poisoning attacks and application scenarios that enable staging such attacks. A general framework is developed following rigorous study process from cybersecurity but also considering specific ITS application needs. Data poisoning attacks against ITS are reviewed and categorized following the framework. We then discuss the current limitations of these attack models and the future research directions. Our work can serve as a guideline to better understand the threat of data poisoning attacks against ITS applications, while also giving a perspective on the future development of trustworthy ITS.
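To make the threat model concrete, below is a minimal sketch of the kind of attack the survey covers: an attacker perturbs a fraction of traffic records before a speed-estimation model is trained on them. The dataset, model, and 10% perturbation budget are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch (not from the paper): poisoning a fraction of reported
# speed records shifts the decisions of a model trained on them.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic ITS data: link occupancy -> average speed (km/h).
occupancy = rng.uniform(0.05, 0.9, size=(500, 1))
speed = 90.0 - 70.0 * occupancy[:, 0] + rng.normal(0.0, 3.0, size=500)

clean_model = LinearRegression().fit(occupancy, speed)

# Attacker controls 10% of the records and inflates their reported speeds,
# e.g. via spoofed probe vehicles, to mask congestion.
poisoned = speed.copy()
idx = rng.choice(len(speed), size=len(speed) // 10, replace=False)
poisoned[idx] += 40.0

poisoned_model = LinearRegression().fit(occupancy, poisoned)

# Downstream effect: congested links now look faster than they are.
congested = np.array([[0.8]])
print("clean estimate:   ", clean_model.predict(congested)[0])
print("poisoned estimate:", poisoned_model.predict(congested)[0])
```

Even this crude injection shifts the fitted speed curve; the attacks surveyed in the paper are stealthier, bounding perturbations to evade detection.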
Related papers
- Enabling Privacy-Preserving Cyber Threat Detection with Federated Learning [4.475514208635884]
This study systematically profiles the (in)feasibility of federated learning (FL) for privacy-preserving cyber threat detection in terms of effectiveness, Byzantine resilience, and efficiency.
It shows that FL-trained detection models can achieve a performance that is comparable to centrally trained counterparts.
Under a realistic threat model, FL proves resistant to both data poisoning and model poisoning attacks.
arXiv Detail & Related papers (2024-04-08T01:16:56Z)
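The Byzantine-resilience finding above hinges on how client updates are aggregated. A minimal sketch, assuming plain FedAvg versus a coordinate-wise median as a generic robust aggregator; the specific aggregators studied in the paper are not reproduced here.

```python
# Illustrative sketch: coordinate-wise median aggregation limits the influence
# of poisoned client updates compared to plain federated averaging (FedAvg).
import numpy as np

def fedavg(updates):
    return np.mean(updates, axis=0)

def coordinate_median(updates):
    return np.median(updates, axis=0)

rng = np.random.default_rng(1)
honest = rng.normal(0.0, 0.1, size=(9, 4))    # 9 honest client gradients
malicious = np.full((1, 4), 50.0)             # 1 model-poisoning client
updates = np.vstack([honest, malicious])

print("FedAvg :", fedavg(updates))            # dragged toward the attacker
print("Median :", coordinate_median(updates)) # stays near honest consensus
```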
- Unscrambling the Rectification of Adversarial Attacks Transferability across Computer Networks [4.576324217026666]
Convolutional neural network (CNN) models play a vital role in achieving state-of-the-art performance.
CNNs can be compromised because of their susceptibility to adversarial attacks.
We present a novel and comprehensive method to improve the strength of attacks and assess the transferability of adversarial examples in CNNs.
arXiv Detail & Related papers (2023-10-26T22:36:24Z)
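Transferability, as assessed in the paper above, is typically measured by crafting perturbations on a surrogate model and replaying them against a separately trained target. A toy sketch using FGSM on two logistic models; the paper's actual method and network-traffic models are assumptions not reproduced here.

```python
# Illustrative sketch: FGSM examples crafted on a surrogate model often
# transfer to a separately trained target model.
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(x, y, lr=0.5, steps=200):
    """Plain gradient descent on the logistic loss."""
    w = np.zeros(x.shape[1])
    for _ in range(steps):
        w -= lr * x.T @ (sigmoid(x @ w) - y) / len(y)
    return w

# Surrogate and target trained on different samples of the same task.
x = rng.normal(size=(400, 5))
w_true = np.array([1.5, -2.0, 1.0, 0.5, -1.0])
y = (x @ w_true > 0).astype(float)
surrogate = train_logreg(x[:200], y[:200])
target = train_logreg(x[200:], y[200:])

# FGSM: step in the sign of the loss gradient w.r.t. the input.
x_test, y_test = x[200:], y[200:]
grad_x = (sigmoid(x_test @ surrogate) - y_test)[:, None] * surrogate[None, :]
x_adv = x_test + 0.5 * np.sign(grad_x)

acc = lambda m, xs: np.mean((sigmoid(xs @ m) > 0.5) == y_test)
print("target acc, clean:           ", acc(target, x_test))
print("target acc, transferred FGSM:", acc(target, x_adv))
```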
- Try to Avoid Attacks: A Federated Data Sanitization Defense for Healthcare IoMT Systems [4.024567343465081]
The distributed nature of IoMT systems exposes them to the risk of data poisoning attacks.
Poisoned data can be fabricated by falsifying medical data.
This paper introduces a Federated Data Sanitization Defense, a novel approach to protect the system from data poisoning attacks.
arXiv Detail & Related papers (2022-11-03T05:21:39Z)
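The sanitization idea above can be pictured as each client filtering statistically anomalous records before contributing to federated training. The z-score filter below is a generic stand-in, not the paper's actual sanitization method.

```python
# Generic stand-in: each IoMT client drops records whose features are
# statistical outliers before local training, blunting fabricated samples.
import numpy as np

def sanitize(x, y, z_thresh=3.0):
    """Keep rows whose every feature lies within z_thresh std devs."""
    z = np.abs((x - x.mean(axis=0)) / (x.std(axis=0) + 1e-9))
    keep = (z < z_thresh).all(axis=1)
    return x[keep], y[keep]

rng = np.random.default_rng(3)
x = rng.normal(70.0, 5.0, size=(200, 3))           # e.g. vitals readings
y = rng.integers(0, 2, size=200)
x[:10] = 500.0                                      # falsified medical records
x_clean, y_clean = sanitize(x, y)
print(f"kept {len(x_clean)} of {len(x)} records")   # poisoned rows removed
```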
- Autoregressive Perturbations for Data Poisoning [54.205200221427994]
Data scraping from social media has led to growing concerns regarding unauthorized use of data.
Data poisoning attacks have been proposed as a bulwark against scraping.
We introduce autoregressive (AR) poisoning, a method that can generate poisoned data without access to the broader dataset.
arXiv Detail & Related papers (2022-06-08T06:24:51Z)
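The key property above, generating poison without seeing the broader dataset, comes from the perturbations being structured autoregressive noise rather than being optimized against a model. A simplified sketch follows; the AR coefficients and perturbation budget are illustrative assumptions.

```python
# Illustrative sketch: an AR(2) process generates structured perturbation
# noise, with no access to any model or to other training data.
import numpy as np

def ar_perturbation(length, coeffs, rng, scale=1.0):
    """Sample an autoregressive sequence: e_t = sum_i c_i * e_{t-i} + noise."""
    e = np.zeros(length)
    for t in range(len(coeffs), length):
        e[t] = sum(c * e[t - i - 1] for i, c in enumerate(coeffs)) \
               + rng.normal(0.0, scale)
    return e

rng = np.random.default_rng(4)
x = rng.uniform(0.0, 1.0, size=64)             # a flattened clean sample
delta = ar_perturbation(64, coeffs=[0.7, -0.2], rng=rng)
eps = 0.03                                      # l_inf perturbation budget
x_poison = np.clip(x + eps * np.sign(delta), 0.0, 1.0)
print("max abs change:", np.max(np.abs(x_poison - x)))  # <= eps
```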
- Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning [32.976199681542845]
We provide a comprehensive systematization of poisoning attacks and defenses in machine learning.
We start by categorizing the current threat models and attacks, and then organize existing defenses accordingly.
We argue that our systematization also encompasses state-of-the-art attacks and defenses for other data modalities.
arXiv Detail & Related papers (2022-05-04T11:00:26Z)
- Accumulative Poisoning Attacks on Real-time Data [56.96241557830253]
We show that a well-designed but straightforward attacking strategy can dramatically amplify the poisoning effects.
arXiv Detail & Related papers (2021-06-18T08:29:53Z)
- Defening against Adversarial Denial-of-Service Attacks [0.0]
Data poisoning is one of the most relevant security threats against machine learning and data-driven technologies.
We propose a new approach to detecting DoS poisoned instances.
We evaluate our defence against two DoS poisoning attacks across seven datasets, and find that it reliably identifies poisoned instances.
arXiv Detail & Related papers (2021-04-14T09:52:36Z)
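Since the defence's internals are not spelled out in the summary above, the sketch below uses a generic stand-in, isolation-forest outlier filtering, for detecting availability-style poison; it is not the paper's actual method.

```python
# Generic stand-in (not the paper's method): treat DoS-poisoned instances as
# outliers in feature space and filter them with an isolation forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(5)
clean = rng.normal(0.0, 1.0, size=(300, 8))
poison = rng.normal(6.0, 0.5, size=(30, 8))     # availability-attack blob
x = np.vstack([clean, poison])

detector = IsolationForest(contamination=0.1, random_state=0).fit(x)
flags = detector.predict(x)                     # -1 = flagged as poisoned
print("flagged among clean :", np.sum(flags[:300] == -1))
print("flagged among poison:", np.sum(flags[300:] == -1))
```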
- Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses [150.64470864162556]
This work systematically categorizes and discusses a wide range of dataset vulnerabilities and exploits.
In addition to describing various poisoning and backdoor threat models and the relationships among them, we develop their unified taxonomy.
arXiv Detail & Related papers (2020-12-18T22:38:47Z)
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
arXiv Detail & Related papers (2020-12-02T15:30:21Z)
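The bilevel structure above, where an outer problem tunes poison points so that the inner training problem yields a degraded model, can be sketched in a few lines for ridge regression; the paper's attack on certified robustness is far more involved, and this toy setup is an assumption for illustration.

```python
# Illustrative bilevel poisoning sketch for ridge regression: the outer loop
# adjusts one poison point to maximize test loss of the inner-trained model.
import numpy as np

rng = np.random.default_rng(6)
x = rng.normal(size=(100, 2))
w_true = np.array([2.0, -1.0])
y = x @ w_true + rng.normal(0.0, 0.1, size=100)
x_test = rng.normal(size=(50, 2))
y_test = x_test @ w_true

def inner_train(xp, yp, lam=1e-2):
    """Closed-form ridge solution on clean data plus one poison point."""
    xa = np.vstack([x, xp]); ya = np.append(y, yp)
    return np.linalg.solve(xa.T @ xa + lam * np.eye(2), xa.T @ ya)

def outer_loss(xp, yp):
    w = inner_train(xp, yp)
    return np.mean((x_test @ w - y_test) ** 2)

# Gradient ascent on the poison point via finite differences.
xp, yp = np.zeros((1, 2)), 10.0
for _ in range(100):
    g = np.zeros(2)
    for j in range(2):
        d = np.zeros((1, 2)); d[0, j] = 1e-4
        g[j] = (outer_loss(xp + d, yp) - outer_loss(xp - d, yp)) / 2e-4
    xp = np.clip(xp + 0.5 * g, -3.0, 3.0)   # bounded poison feature values
print("test MSE after poisoning:", outer_loss(xp, yp))
```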
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching [56.280018325419896]
Data Poisoning attacks modify training data to maliciously control a model trained on such data.
We analyze a particularly malicious poisoning attack that is both "from scratch" and "clean label".
We show that it is the first poisoning method to cause targeted misclassification in modern deep networks trained from scratch on a full-sized, poisoned ImageNet dataset.
arXiv Detail & Related papers (2020-09-04T16:17:54Z)
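Gradient matching, the mechanism behind the attack above, crafts poison points whose training gradient aligns with the gradient of an adversarial target loss. The toy sketch below uses a fixed linear classifier and a single poison point; the full attack operates on deep networks and ImageNet, so everything else here is an illustrative assumption.

```python
# Toy gradient-matching sketch: tune a clean-label poison input so its
# training gradient (cosine-)aligns with the attacker's target gradient.
import numpy as np

rng = np.random.default_rng(7)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(x, y, w):
    """Gradient of the logistic loss at one example, w.r.t. w."""
    return (sigmoid(x @ w) - y) * x

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

w = rng.normal(size=5)                        # current model parameters
x_target = rng.normal(size=5)                 # example to be misclassified
g_adv = grad(x_target, 1.0, w)                # gradient of adversarial goal

# Optimize one poison point (label kept clean) to align its gradient
# with g_adv, using finite-difference ascent on the cosine objective.
x_poison, y_poison = rng.normal(size=5), 0.0
for _ in range(300):
    eps, g_num = 1e-4, np.zeros(5)
    for j in range(5):
        d = np.zeros(5); d[j] = eps
        g_num[j] = (cosine(grad(x_poison + d, y_poison, w), g_adv)
                    - cosine(grad(x_poison - d, y_poison, w), g_adv)) / (2 * eps)
    x_poison += 0.1 * g_num                   # ascend the alignment objective
print("gradient alignment:", cosine(grad(x_poison, y_poison, w), g_adv))
```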
- Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks [74.88735178536159]
Data poisoning has been ranked the number one concern among threats ranging from model stealing to adversarial attacks.
We observe that data poisoning and backdoor attacks are highly sensitive to variations in the testing setup.
We apply rigorous tests to determine the extent to which we should fear them.
arXiv Detail & Related papers (2020-06-22T18:34:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.