Tackling Cyberattacks through AI-based Reactive Systems: A Holistic Review and Future Vision
- URL: http://arxiv.org/abs/2312.06229v2
- Date: Wed, 29 May 2024 11:14:12 GMT
- Title: Tackling Cyberattacks through AI-based Reactive Systems: A Holistic Review and Future Vision
- Authors: Sergio Bernardez Molina, Pantaleone Nespoli, Félix Gómez Mármol
- Abstract summary: This paper presents a comprehensive survey of recent advancements in AI-driven threat response systems.
The most recent survey covering the AI reaction domain was conducted in 2017.
A total of seven research challenges have been identified, pointing out potential gaps and suggesting possible areas of development.
- Score: 0.10923877073891446
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There is no denying that the use of Information Technology (IT) is undergoing exponential growth in today's world. This digital transformation has also given rise to a multitude of security challenges, notably in the realm of cybercrime. In response to these growing threats, the public and private sectors have prioritized the strengthening of IT security measures. In light of this growing security concern, Artificial Intelligence (AI) has gained prominence within the cybersecurity landscape. This paper presents a comprehensive survey of recent advancements in AI-driven threat response systems. To the best of our knowledge, the most recent survey covering the AI reaction domain was conducted in 2017. Since then, considerable literature has been published, and it is therefore worth reviewing. In this comprehensive survey of state-of-the-art reaction systems, five key features with multiple values have been identified, facilitating a homogeneous comparison between the different works. In addition, through a meticulous article-collection methodology, the 22 most relevant publications in the field have been selected. Each of these publications has then been subjected to a detailed analysis using the identified features, which has allowed for the generation of a comprehensive overview revealing significant relationships between the papers. These relationships are further elaborated in the paper, along with the identification of potential gaps in the literature, which may guide future contributions. A total of seven research challenges have been identified, pointing out these potential gaps and suggesting possible areas of development through concrete proposals.
Related papers
- Blockchain Based Information Security and Privacy Protection: Challenges and Future Directions using Computational Literature Review [1.3864583085700581]
Blockchain technology has gained immense popularity in enhancing individual security and privacy.
The rapid proliferation of published research articles presents challenges for manual analysis and synthesis.
We identify 10 topics related to security and privacy and provide a detailed description of each topic.
arXiv Detail & Related papers (2024-09-22T14:41:43Z) - A Comprehensive Survey of Advanced Persistent Threat Attribution: Taxonomy, Methods, Challenges and Open Research Problems [3.410195565199523]
Advanced Persistent Threat attribution is a critical challenge in cybersecurity.
With the growing prominence of artificial intelligence (AI) and machine learning (ML) techniques, researchers are increasingly focused on developing automated solutions to link cyber threats to responsible actors.
Previous literature on automated threat attribution lacks a systematic review of automated methods and relevant artifacts that can aid in the attribution process.
arXiv Detail & Related papers (2024-09-07T12:42:43Z) - Generative AI for Secure and Privacy-Preserving Mobile Crowdsensing [74.58071278710896]
Generative AI has attracted much attention from both academia and industry.
Secure and privacy-preserving mobile crowdsensing (SPPMCS) has been widely applied in data collection and acquisition.
arXiv Detail & Related papers (2024-05-17T04:00:58Z) - A Survey of Privacy-Preserving Model Explanations: Privacy Risks, Attacks, and Countermeasures [50.987594546912725]
Despite a growing corpus of research in AI privacy and explainability, little attention has been paid to privacy-preserving model explanations.
This article presents the first thorough survey about privacy attacks on model explanations and their countermeasures.
arXiv Detail & Related papers (2024-03-31T12:44:48Z) - Generative AI for Secure Physical Layer Communications: A Survey [80.0638227807621]
Generative Artificial Intelligence (GAI) stands at the forefront of AI innovation, demonstrating rapid advancement and unparalleled proficiency in generating diverse content.
In this paper, we offer an extensive survey on the various applications of GAI in enhancing security within the physical layer of communication networks.
We delve into the roles of GAI in addressing challenges of physical layer security, focusing on communication confidentiality, authentication, availability, resilience, and integrity.
arXiv Detail & Related papers (2024-02-21T06:22:41Z) - A Systematic Literature Review on Explainability for Machine/Deep
Learning-based Software Engineering Research [23.966640472958105]
This paper presents a systematic literature review of approaches that aim to improve the explainability of AI models within the context of Software Engineering.
We aim to (1) summarize the SE tasks where XAI techniques have shown success to date; (2) classify and analyze different XAI techniques; and (3) investigate existing evaluation approaches.
arXiv Detail & Related papers (2024-01-26T03:20:40Z) - Towards Possibilities & Impossibilities of AI-generated Text Detection: A Survey [97.33926242130732]
Large Language Models (LLMs) have revolutionized the domain of natural language processing (NLP) with remarkable capabilities of generating human-like text responses.
Despite these advancements, several works in the existing literature have raised serious concerns about the potential misuse of LLMs.
To address these concerns, a consensus among the research community is to develop algorithmic solutions to detect AI-generated text.
arXiv Detail & Related papers (2023-10-23T18:11:32Z) - AI Security for Geoscience and Remote Sensing: Challenges and Future Trends [16.001238774325333]
This paper reviews the current development of AI security in the geoscience and remote sensing field.
It covers the following five important aspects: adversarial attack, backdoor attack, federated learning, uncertainty and explainability.
To the best of the authors' knowledge, this paper is the first attempt to provide a systematic review of AI security-related research in the geoscience and RS community.
arXiv Detail & Related papers (2022-12-19T10:54:51Z) - Fairness in Recommender Systems: Research Landscape and Future Directions [119.67643184567623]
We review the concepts and notions of fairness that have been put forward in this area in recent years.
We present an overview of how research in this field is currently operationalized.
Overall, our analysis of recent works points to certain research gaps.
arXiv Detail & Related papers (2022-05-23T08:34:25Z) - On the Importance of Domain-specific Explanations in AI-based Cybersecurity Systems (Technical Report) [7.316266670238795]
A lack of understanding of the decisions made by AI-based systems can be a major drawback in critical domains such as cybersecurity.
In this paper we make three contributions: (i) proposal and discussion of desiderata for the explanation of outputs generated by AI-based cybersecurity systems; (ii) a comparative analysis of approaches in the literature on Explainable Artificial Intelligence (XAI) under the lens of both our desiderata and further dimensions that are typically used for examining XAI approaches; and (iii) a general architecture that can serve as a roadmap for guiding research efforts towards the development of explainable AI-based cybersecurity systems.
arXiv Detail & Related papers (2021-08-02T22:55:13Z) - Threat of Adversarial Attacks on Deep Learning in Computer Vision: Survey II [86.51135909513047]
Deep Learning is vulnerable to adversarial attacks that can manipulate its predictions.
This article reviews the contributions made by the computer vision community in adversarial attacks on deep learning.
It provides definitions of technical terminologies for non-experts in this domain.
arXiv Detail & Related papers (2021-08-01T08:54:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.