XAI for Cybersecurity: State of the Art, Challenges, Open Issues and
Future Directions
- URL: http://arxiv.org/abs/2206.03585v1
- Date: Fri, 3 Jun 2022 02:15:30 GMT
- Title: XAI for Cybersecurity: State of the Art, Challenges, Open Issues and
Future Directions
- Authors: Gautam Srivastava, Rutvij H Jhaveri, Sweta Bhattacharya, Sharnil
Pandya, Rajeswari, Praveen Kumar Reddy Maddikunta, Gokul Yenduri, Jon G.
Hall, Mamoun Alazab, Thippa Reddy Gadekallu
- Abstract summary: AI models often appear as a black box wherein developers are unable to explain or trace back the reasoning behind a specific decision.
Explainable AI (XAI) is a rapidly growing field of research that helps to extract information and also visualize the results.
The paper provides a brief overview of cybersecurity and the various forms of attack.
Then the use of traditional AI techniques and their associated challenges are discussed, which motivates the use of XAI in various applications.
- Score: 16.633632244131775
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the past few years, artificial intelligence (AI) techniques have
been implemented in almost all verticals of human life. However, the results
generated from AI models often lack explainability. AI models often appear as
a black box wherein developers are unable to explain or trace back the
reasoning behind a specific decision. Explainable AI (XAI) is a rapidly growing
field of research that helps to extract information and visualize the results
generated with optimal transparency. The present study provides an extensive
review of the use of XAI in cybersecurity. Cybersecurity enables the protection
of systems, networks and programs from different types of attacks. The use of
XAI has immense potential in predicting such attacks. The paper provides a
brief overview of cybersecurity and the various forms of attack. Then the use
of traditional AI techniques and their associated challenges are discussed,
which motivates the use of XAI in various applications. XAI implementations
from various research projects and industry are also presented. Finally, the
lessons learnt from these applications are highlighted, which act as a guide
for the future scope of research.
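To make the abstract's idea of "extracting information and visualizing results with transparency" concrete, below is a minimal, illustrative sketch (not taken from the paper) of a common post-hoc XAI workflow that such surveys cover: training a black-box intrusion-detection classifier and attributing its "attack" predictions to individual input features with SHAP. The synthetic data, feature names, and choice of library are assumptions for illustration only.

```python
# Illustrative only: post-hoc explanation of a black-box "attack vs. benign"
# classifier with SHAP. Data, feature names, and model are synthetic stand-ins.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for network-flow features (packet counts, durations, ...).
X, y = make_classification(n_samples=2000, n_features=10, n_informative=5,
                           random_state=0)
feature_names = [f"flow_feature_{i}" for i in range(X.shape[1])]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

# Black-box detector: label 1 = "attack", label 0 = "benign".
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Post-hoc XAI step: attribute each prediction to the input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# SHAP's return shape differs across versions for binary classifiers.
if isinstance(shap_values, list):      # older versions: one array per class
    attack_shap = shap_values[1]
elif shap_values.ndim == 3:            # newer versions: (samples, features, classes)
    attack_shap = shap_values[:, :, 1]
else:
    attack_shap = shap_values

# Rank features by mean absolute attribution: which flow features drive alerts.
importance = np.abs(attack_shap).mean(axis=0)
for name, score in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.4f}")
```

An analyst could use such a ranking to check whether a detector's alerts rest on plausible traffic features or on spurious artifacts, which is the kind of transparency the survey argues XAI brings to cybersecurity.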
Related papers
- A Survey on Offensive AI Within Cybersecurity [1.8206461789819075]
This survey paper on offensive AI will comprehensively cover various aspects related to attacks against and using AI systems.
It will delve into the impact of offensive AI practices on different domains, including consumer, enterprise, and public digital infrastructure.
The paper will explore adversarial machine learning, attacks against AI models, infrastructure, and interfaces, along with offensive techniques like information gathering, social engineering, and weaponized AI.
arXiv Detail & Related papers (2024-09-26T17:36:22Z)
- Towards more Practical Threat Models in Artificial Intelligence Security [66.67624011455423]
Recent works have identified a gap between research and practice in artificial intelligence security.
We revisit the threat models of the six most studied attacks in AI security research and match them to AI usage in practice.
arXiv Detail & Related papers (2023-11-16T16:09:44Z)
- A Survey on Explainable Artificial Intelligence for Cybersecurity [14.648580959079787]
Explainable Artificial Intelligence (XAI) aims to create machine learning models that can provide clear and interpretable explanations for their decisions and actions.
In the field of network cybersecurity, XAI has the potential to revolutionize the way we approach network security by enabling us to better understand the behavior of cyber threats.
arXiv Detail & Related papers (2023-03-07T22:54:18Z)
- Explainable Artificial Intelligence and Cybersecurity: A Systematic Literature Review [0.799536002595393]
XAI aims to make the operation of AI algorithms more interpretable for their users and developers.
This work seeks to investigate the current research scenario on XAI applied to cybersecurity.
arXiv Detail & Related papers (2023-02-27T17:47:56Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Explainable Artificial Intelligence (XAI) for Internet of Things: A Survey [1.7205106391379026]
The black-box nature of Artificial Intelligence (AI) models does not allow users to comprehend, and sometimes trust, the output created by such models.
In AI applications, where not only the results but also the decision paths to the results are critical, such black-box AI models are not sufficient.
Explainable Artificial Intelligence (XAI) addresses this problem and defines a set of AI models that are interpretable by the users.
arXiv Detail & Related papers (2022-06-07T08:22:30Z)
- Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data; utilizing this effectively is beyond human capabilities.
arXiv Detail & Related papers (2022-02-28T18:27:41Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions intended for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Explainable AI: current status and future directions [11.92436948211501]
Explainable Artificial Intelligence (XAI) is an emerging area of research in the field of Artificial Intelligence (AI).
XAI can explain how AI obtained a particular solution and can also answer other "wh" questions.
This paper provides an overview of these techniques from a multimedia (i.e., text, image, audio, and video) point of view.
arXiv Detail & Related papers (2021-07-12T08:42:19Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Empowering Things with Intelligence: A Survey of the Progress, Challenges, and Opportunities in Artificial Intelligence of Things [98.10037444792444]
We show how AI can empower the IoT to make it faster, smarter, greener, and safer.
First, we present progress in AI research for IoT from four perspectives: perceiving, learning, reasoning, and behaving.
Finally, we summarize some promising applications of AIoT that are likely to profoundly reshape our world.
arXiv Detail & Related papers (2020-11-17T13:14:28Z)