Explainable Artificial Intelligence and Cybersecurity: A Systematic
Literature Review
- URL: http://arxiv.org/abs/2303.01259v1
- Date: Mon, 27 Feb 2023 17:47:56 GMT
- Title: Explainable Artificial Intelligence and Cybersecurity: A Systematic
Literature Review
- Authors: Carlos Mendes and Tatiane Nogueira Rios
- Abstract summary: XAI aims to make the operation of AI algorithms more interpretable for their users and developers.
This work seeks to investigate the current research scenario on XAI applied to cybersecurity.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Cybersecurity vendors consistently apply AI (Artificial Intelligence) to
their solutions and many cybersecurity domains can benefit from AI technology.
However, black-box AI techniques present some difficulties in comprehension and
adoption by their operators, given that their decisions are not always
understandable to humans (as is usually the case with deep neural networks, for example).
Since it aims to make the operation of AI algorithms more interpretable for their
users and developers, XAI (eXplainable Artificial Intelligence) can be used to
address this issue. Through a systematic literature review, this work seeks to
investigate the current research scenario on XAI applied to cybersecurity,
aiming to discover which XAI techniques have been applied in cybersecurity, and
which areas of cybersecurity have already benefited from this technology.
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that such shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z) - Explainable AI-based Intrusion Detection System for Industry 5.0: An Overview of the Literature, associated Challenges, the existing Solutions, and Potential Research Directions [3.99098935469955]
Industry 5.0 focuses on human and Artificial Intelligence (AI) collaboration for performing different tasks in manufacturing.
The extensive deployment and interconnection of such devices across critical areas, such as the economy, health, education, and defense systems, introduces several types of potential security flaws.
XAI has proven to be a very effective and powerful tool in different areas of cybersecurity, such as intrusion detection, malware detection, and phishing detection.
arXiv Detail & Related papers (2024-07-21T09:28:05Z) - Artificial Intelligence as the New Hacker: Developing Agents for Offensive Security [0.0]
This paper explores the integration of Artificial Intelligence (AI) into offensive cybersecurity.
It develops an autonomous AI agent, ReaperAI, designed to simulate and execute cyberattacks.
ReaperAI demonstrates the potential to identify, exploit, and analyze security vulnerabilities autonomously.
arXiv Detail & Related papers (2024-05-09T18:15:12Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z) - A Survey on Explainable Artificial Intelligence for Cybersecurity [14.648580959079787]
Explainable Artificial Intelligence (XAI) aims to create machine learning models that can provide clear and interpretable explanations for their decisions and actions.
In the field of network cybersecurity, XAI has the potential to revolutionize the way we approach network security by enabling us to better understand the behavior of cyber threats.
arXiv Detail & Related papers (2023-03-07T22:54:18Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - XAI for Cybersecurity: State of the Art, Challenges, Open Issues and
Future Directions [16.633632244131775]
AI models often appear as a black box, wherein developers are unable to explain or trace back the reasoning behind a specific decision.
Explainable AI (XAI) is a rapidly growing field of research that helps extract information from models and visualize their results.
The paper provides a brief overview of cybersecurity and the various forms of attack.
It then discusses the use of traditional AI techniques and their associated challenges, which opens the door to the use of XAI in various applications.
arXiv Detail & Related papers (2022-06-03T02:15:30Z) - Proceedings of the Artificial Intelligence for Cyber Security (AICS)
Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data, and utilizing it effectively is beyond human capabilities.
arXiv Detail & Related papers (2022-02-28T18:27:41Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow AI system predictions to be examined and tested, establishing a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - A User-Centred Framework for Explainable Artificial Intelligence in
Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.