Adaptive Anomaly Detection for Identifying Attacks in Cyber-Physical Systems: A Systematic Literature Review
- URL: http://arxiv.org/abs/2411.14278v1
- Date: Thu, 21 Nov 2024 16:32:02 GMT
- Title: Adaptive Anomaly Detection for Identifying Attacks in Cyber-Physical Systems: A Systematic Literature Review
- Authors: Pablo Moriano, Steven C. Hespeler, Mingyan Li, Maria Mahbub,
- Abstract summary: We present a systematic literature review (SLR) of adaptive anomaly detection (AAD) research.
AAD is among the most promising techniques to detect evolving cyberattacks.
We introduce a novel taxonomy considering attack types, CPS application, learning paradigm, data management, and algorithms.
We aim to help researchers to advance the state of the art and help practitioners to become familiar with recent progress in this field.
- Score: 4.580544659826873
- Abstract: Modern cyberattacks in cyber-physical systems (CPS) evolve rapidly and cannot be deterred effectively with most current methods, which focus on characterizing past threats. Adaptive anomaly detection (AAD), which emphasizes fast data processing and model adaptation, is among the most promising techniques for detecting evolving cyberattacks. AAD has been researched extensively in the literature; however, to the best of our knowledge, our work is the first systematic literature review (SLR) of current research in this field. We present a comprehensive SLR, gathering 397 relevant papers and systematically analyzing 65 of them (47 research and 18 survey papers) on AAD in CPS published from 2013 to November 2023. We introduce a novel taxonomy considering attack types, CPS applications, learning paradigms, data management, and algorithms. Our analysis indicates, among other findings, that the reviewed works focused on a single aspect of adaptation (either data processing or model adaptation) but rarely on both at the same time. We aim to help researchers advance the state of the art and help practitioners become familiar with recent progress in this field. We identify the limitations of the state of the art and provide recommendations for future research directions.
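To make the abstract's distinction between the two facets of adaptation concrete, here is a minimal, hypothetical sketch (not taken from the reviewed papers) of a streaming detector for a single CPS sensor signal: it processes each reading as it arrives (data processing) and updates its baseline statistics online (model adaptation). The class name, parameters, and thresholds are illustrative assumptions.

```python
# A minimal sketch of adaptive anomaly detection on a streaming signal.
# Hypothetical illustration only: class, parameters, and thresholds are
# assumptions, not a method from the reviewed papers.
import math
import statistics


class AdaptiveZScoreDetector:
    """Flags readings whose z-score against an online-adapted baseline is large."""

    def __init__(self, alpha=0.05, threshold=4.0, warmup=5):
        self.alpha = alpha          # adaptation rate (larger = faster forgetting)
        self.threshold = threshold  # z-score beyond which a reading is anomalous
        self.warmup = warmup        # number of readings used to bootstrap the baseline
        self._buffer = []
        self.mean = None
        self.var = None

    def update(self, x):
        """Process one reading as it arrives; return True if it looks anomalous."""
        if self.mean is None:                      # still bootstrapping the model
            self._buffer.append(x)
            if len(self._buffer) >= self.warmup:
                self.mean = statistics.fmean(self._buffer)
                self.var = max(statistics.pvariance(self._buffer), 1e-6)
            return False
        z = abs(x - self.mean) / math.sqrt(self.var)
        is_anomaly = z > self.threshold
        if not is_anomaly:                         # adapt only on normal-looking data
            diff = x - self.mean
            self.mean += self.alpha * diff         # exponentially weighted mean
            self.var = (1 - self.alpha) * (self.var + self.alpha * diff * diff)
        return is_anomaly


# Example: a slowly drifting sensor with one attack-like spike.
detector = AdaptiveZScoreDetector()
stream = [20.0, 20.3, 19.8, 20.1, 20.2, 20.4, 20.5, 45.0, 20.6]
print([detector.update(v) for v in stream])  # only the 45.0 reading should be flagged
```

Updating the statistics only on non-anomalous readings is one common (though not the only) way to keep an attack from shifting the adapted model.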
Related papers
- Model Inversion Attacks: A Survey of Approaches and Countermeasures [59.986922963781]
Recently, a new type of privacy attack, the model inversion attack (MIA), aims to extract sensitive features of the private data used for training.
Despite the significance, there is a lack of systematic studies that provide a comprehensive overview and deeper insights into MIAs.
This survey aims to summarize up-to-date MIA methods in both attacks and defenses.
arXiv Detail & Related papers (2024-11-15T08:09:28Z)
- An Investigation into the Performances of the State-of-the-art Machine Learning Approaches for Various Cyber-attack Detection: A Survey [1.1881667010191568]
We analyzed the suitability of current state-of-the-art machine learning models for detecting various cyberattacks over the past 5 years.
We also reviewed the suitability, efficiency and limitations of recent research on state-of-the-art classifiers and novel frameworks for the detection of different cyberattacks.
arXiv Detail & Related papers (2024-02-26T22:04:25Z)
- Topological safeguard for evasion attack interpreting the neural networks' behavior [0.0]
In this work, a novel detector of evasion attacks is developed.
It focuses on the neuron activations produced by the model when an input sample is injected.
For this purpose, extensive data preprocessing is required to feed all this information into the detector.
arXiv Detail & Related papers (2024-02-12T08:39:40Z)
- Resilience of Deep Learning applications: a systematic literature review of analysis and hardening techniques [3.265458968159693]
The review is based on 220 scientific articles published between January 2019 and March 2024.
The authors adopt a classification framework to interpret and highlight research similarities and peculiarities.
arXiv Detail & Related papers (2023-09-27T19:22:19Z)
- Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z)
- Intrusion Detection Systems Using Support Vector Machines on the KDDCUP'99 and NSL-KDD Datasets: A Comprehensive Survey [6.847009696437944]
We focus on studies that have been evaluated on the two most widely used datasets in cybersecurity, namely the KDDCUP'99 and NSL-KDD datasets.
We provide a summary of each method, identifying the role of the SVMs, and all other algorithms involved in the studies.
arXiv Detail & Related papers (2022-09-12T20:02:12Z)
- Poisoning Attacks and Defenses on Artificial Intelligence: A Survey [3.706481388415728]
Data poisoning attacks consist of tampering with the data samples fed to the model during the training phase, leading to a degradation in the model's accuracy during the inference phase (a toy illustration of this mechanism follows this list).
This work compiles the most relevant insights and findings from the latest literature addressing this type of attack.
A thorough assessment is performed on the reviewed works, comparing the effects of data poisoning on a wide range of ML models in real-world conditions.
arXiv Detail & Related papers (2022-02-21T14:43:38Z)
- Threat of Adversarial Attacks on Deep Learning in Computer Vision: Survey II [86.51135909513047]
Deep Learning is vulnerable to adversarial attacks that can manipulate its predictions.
This article reviews the contributions made by the computer vision community in adversarial attacks on deep learning.
It provides definitions of technical terminologies for non-experts in this domain.
arXiv Detail & Related papers (2021-08-01T08:54:47Z)
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z)
- Anomalous Example Detection in Deep Learning: A Survey [98.2295889723002]
This survey tries to provide a structured and comprehensive overview of the research on anomaly detection for Deep Learning applications.
We provide a taxonomy for existing techniques based on their underlying assumptions and adopted approaches.
We highlight the unsolved research challenges while applying anomaly detection techniques in DL systems and present some high-impact future research directions.
arXiv Detail & Related papers (2020-03-16T02:47:23Z)
- Survey of Network Intrusion Detection Methods from the Perspective of the Knowledge Discovery in Databases Process [63.75363908696257]
We review the methods that have been applied to network data with the purpose of developing an intrusion detector.
We discuss the techniques used for the capture, preparation and transformation of the data, as well as the data mining and evaluation methods.
As a result of this literature review, we investigate some open issues which will need to be considered for further research in the area of network security.
arXiv Detail & Related papers (2020-01-27T11:21:05Z)
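The data-poisoning survey entry above describes tampering with training samples so that accuracy degrades at inference time. As a purely illustrative sketch under assumed settings (synthetic data, a scikit-learn logistic-regression model, arbitrary flip fractions; none of this is taken from the listed papers), the following label-flipping example shows that effect:

```python
# Hypothetical label-flipping poisoning demo: flip a fraction of training
# labels and measure the drop in accuracy on a clean test set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)


def poisoned_accuracy(flip_fraction):
    """Train on labels with a given fraction flipped; return clean-test accuracy."""
    y_poisoned = y_tr.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]          # flip the binary labels
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return accuracy_score(y_te, model.predict(X_te))


for frac in (0.0, 0.1, 0.3, 0.45):
    print(f"flipped {frac:.0%} of training labels -> test accuracy {poisoned_accuracy(frac):.3f}")
```

With higher flip fractions the clean-test accuracy is expected to fall toward chance, which is the inference-time degradation the survey entry refers to.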