SoK: Explainable Machine Learning for Computer Security Applications
- URL: http://arxiv.org/abs/2208.10605v1
- Date: Mon, 22 Aug 2022 21:23:13 GMT
- Title: SoK: Explainable Machine Learning for Computer Security Applications
- Authors: Azqa Nadeem, Daniël Vos, Clinton Cao, Luca Pajola, Simon Dieck, Robert Baumgartner, Sicco Verwer
- Abstract summary: We systematize the rapidly growing (but fragmented) microcosm of studies that develop and utilize XAI methods for cybersecurity tasks.
We identify 3 cybersecurity stakeholders, i.e., model users, designers, and adversaries, who utilize XAI for 5 different objectives within an ML pipeline.
- Score: 9.078841062353561
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainable Artificial Intelligence (XAI) is a promising solution to improve
the transparency of machine learning (ML) pipelines. We systematize the
rapidly growing (but fragmented) microcosm of studies that develop and
utilize XAI methods for defensive and offensive cybersecurity tasks. We
identify 3 cybersecurity stakeholders, i.e., model users, designers, and
adversaries, who utilize XAI for 5 different objectives within an ML pipeline,
namely 1) XAI-enabled decision support, 2) applied XAI for security tasks, 3)
model verification via XAI, 4) explanation verification & robustness, and 5)
offensive use of explanations. We further classify the literature w.r.t. the
targeted security domain. Our analysis of the literature indicates that many of
the XAI applications are designed with little understanding of how they might
be integrated into analyst workflows -- user studies for explanation evaluation
are conducted in only 14% of the cases. The literature also rarely disentangles
the role of the various stakeholders. In particular, the role of the model
designer is underexplored in the security literature. To this end, we present
an illustrative use case accentuating the role of model designers. We
demonstrate cases where XAI can help in model verification and cases where it
may lead to erroneous conclusions instead. The systematization and use case
enable us to challenge several assumptions and present open problems that can
help shape the future of XAI within cybersecurity.
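To make the model-verification objective (3) concrete, here is a minimal, hypothetical sketch, not code from the paper: the dataset, the feature names, and the planted spurious feature are all invented. It uses scikit-learn's permutation importance to check what a toy malware classifier actually relies on, and hints at how the same attribution could also mislead.

```python
# Hypothetical sketch (not from the paper): model verification via a
# post-hoc explanation. Dataset, feature names, and the planted
# spurious feature are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Two genuinely predictive features for a toy "malware" label ...
entropy = rng.normal(0, 1, n)
n_imports = rng.normal(0, 1, n)
y = (entropy + n_imports + rng.normal(0, 0.5, n) > 0).astype(int)
# ... plus a spurious collection artifact that leaks the label
# (e.g., a sandbox-specific timestamp correlated with how samples
# were gathered, not with maliciousness).
artifact = y + rng.normal(0, 0.1, n)
X = np.column_stack([entropy, n_imports, artifact])
names = ["section_entropy", "num_imports", "collection_artifact"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

res = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in sorted(zip(names, res.importances_mean), key=lambda t: -t[1]):
    print(f"{name:20s} {imp:.3f}")
# If 'collection_artifact' dominates, the explanation has exposed a
# dataset shortcut; conversely, reading such a ranking uncritically
# (e.g., with strongly correlated features) is one way explanations
# can lead to erroneous conclusions.
```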
Related papers
- More Questions than Answers? Lessons from Integrating Explainable AI into a Cyber-AI Tool [1.5711133309434766]
We describe a preliminary case study on the use of XAI for source code classification.
We find that the outputs of state-of-the-art saliency explanation techniques are lost in translation when interpreted by people with little AI expertise.
We outline unaddressed gaps in practical and effective XAI, then discuss how emerging technologies such as Large Language Models (LLMs) could mitigate these obstacles.
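As a hypothetical illustration of that "lost in translation" gap (the helper function and the saliency scores below are invented for this note, not taken from the case study), one low-tech mitigation is to render raw token-level saliency as a short plain-language summary:

```python
# Hypothetical sketch: rendering raw token-level saliency scores as a
# plain-language summary for readers without AI expertise. The scores
# are invented; in practice they would come from a saliency method
# applied to a source-code classifier.
def summarize_saliency(token_scores: dict[str, float], k: int = 3) -> str:
    """Describe the k most influential tokens in one readable sentence."""
    ranked = sorted(token_scores.items(), key=lambda kv: abs(kv[1]), reverse=True)
    parts = [
        f"'{tok}' ({'supports' if score > 0 else 'contradicts'} the "
        f"prediction, weight {score:+.2f})"
        for tok, score in ranked[:k]
    ]
    return "Most influential tokens: " + "; ".join(parts) + "."

scores = {"strcpy": 0.82, "memset": 0.31, "malloc": 0.11, "printf": -0.05}
print(summarize_saliency(scores))
# Most influential tokens: 'strcpy' (supports the prediction, weight +0.82); ...
```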
arXiv Detail & Related papers (2024-08-08T20:09:31Z)
- Explainable AI-based Intrusion Detection System for Industry 5.0: An Overview of the Literature, associated Challenges, the existing Solutions, and Potential Research Directions [3.99098935469955]
Industry 5.0 focuses on collaboration between humans and Artificial Intelligence (AI) to perform different manufacturing tasks.
The extensive involvement and interconnection of such devices in critical areas, such as the economy, health, education, and defense systems, introduces several potential security flaws.
XAI has proven to be an effective and powerful tool in several areas of cybersecurity, such as intrusion detection, malware detection, and phishing detection.
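For a concrete flavor of the phishing-detection example (entirely assumed, not drawn from the surveyed paper), an interpretable-by-design baseline makes its reasoning directly inspectable through its coefficients:

```python
# Illustrative sketch (assumed, not from the surveyed paper): an
# interpretable-by-design phishing-URL baseline whose coefficients
# double as a global explanation. Features, URLs, and labels are toys.
import numpy as np
from sklearn.linear_model import LogisticRegression

def featurize(url: str) -> list[float]:
    return [
        len(url),                       # long URLs are suspicious
        url.count("."),                 # many subdomains
        sum(c.isdigit() for c in url),  # digit-heavy hosts
        float("@" in url),              # '@' hides the real host
    ]

names = ["length", "dots", "digits", "has_at"]
urls = [
    "http://example.com",
    "http://login.example.com",
    "http://192.168.0.1@secure-bank.login.example.top/verify",
    "http://paypa1.com.account-update.example.top/x?id=77123",
]
labels = [0, 0, 1, 1]  # 1 = phishing (toy labels)

clf = LogisticRegression().fit(np.array([featurize(u) for u in urls]), labels)
for name, w in zip(names, clf.coef_[0]):  # coefficients = explanation
    print(f"{name:8s} {w:+.2f}")
```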
arXiv Detail & Related papers (2024-07-21T09:28:05Z)
- How Human-Centered Explainable AI Interface Are Designed and Evaluated: A Systematic Survey [48.97104365617498]
The emerging area of Explainable Interfaces (EIs) focuses on the user interface and user experience design aspects of XAI.
This paper presents a systematic survey of 53 publications to identify current trends in human-XAI interaction and promising directions for EI design and development.
arXiv Detail & Related papers (2024-03-21T15:44:56Z)
- Usable XAI: 10 Strategies Towards Exploiting Explainability in the LLM Era [77.174117675196]
XAI is being extended towards Large Language Models (LLMs).
This paper analyzes how XAI can benefit LLMs and AI systems.
We introduce 10 strategies, describing the key techniques for each and discussing their associated challenges.
arXiv Detail & Related papers (2024-03-13T20:25:27Z)
- Explainable AI for Safe and Trustworthy Autonomous Driving: A Systematic Review [12.38351931894004]
We present the first systematic literature review of explainable methods for safe and trustworthy autonomous driving.
We identify five key contributions of XAI for safe and trustworthy AI in AD, which are interpretable design, interpretable surrogate models, interpretable monitoring, auxiliary explanations, and interpretable validation.
We propose a modular framework called SafeX to integrate these contributions, enabling explanation delivery to users while simultaneously ensuring the safety of AI models.
arXiv Detail & Related papers (2024-02-08T09:08:44Z)
- Explainable Authorship Identification in Cultural Heritage Applications: Analysis of a New Perspective [48.031678295495574]
We explore the applicability of existing general-purpose eXplainable Artificial Intelligence (XAI) techniques to Authorship Identification (AId).
In particular, we assess the relative merits of three different types of XAI techniques on three different AId tasks.
Our analysis shows that, while these techniques make important first steps towards explainable Authorship Identification, more work remains to be done.
arXiv Detail & Related papers (2023-11-03T20:51:15Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Explainable Intrusion Detection Systems (X-IDS): A Survey of Current Methods, Challenges, and Opportunities [0.0]
Intrusion Detection Systems (IDS) have seen widespread adoption due to their ability to handle vast amounts of data with high prediction accuracy.
IDSs designed using Deep Learning (DL) techniques are often treated as black box models and do not provide a justification for their predictions.
This survey reviews the state of the art in explainable AI (XAI) for IDS and its current challenges, and discusses how these challenges extend to the design of an X-IDS.
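The survey itself ships no code; as a minimal sketch of one common X-IDS recipe, a LIME-style local surrogate (the flow features, data, and "black-box" detector below are all invented assumptions) approximates a detector's decision around a single alert with a shallow, human-readable tree:

```python
# Minimal LIME-style sketch (not from the survey): approximate a
# black-box "IDS" locally with a shallow decision tree. The flow
# features, data, and detector are invented assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
names = ["duration_s", "bytes_out_kb", "distinct_ports"]
X = rng.normal(0, 1, size=(1500, 3))
y = ((X[:, 1] > 0.5) & (X[:, 2] > 0)).astype(int)  # toy "exfiltration" rule

black_box = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                          random_state=0).fit(X, y)

# Explain one flagged flow: sample a neighborhood around it, label the
# samples with the black box, and fit an interpretable local surrogate.
flow = np.array([0.0, 0.6, 0.3])
neighborhood = flow + rng.normal(0, 0.5, size=(500, 3))
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0).fit(
    neighborhood, black_box.predict(neighborhood))
print(export_text(surrogate, feature_names=names))  # analyst-readable rules
```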
arXiv Detail & Related papers (2022-07-13T14:31:46Z)
- Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI [65.44737844681256]
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z)
- XAI for Cybersecurity: State of the Art, Challenges, Open Issues and Future Directions [16.633632244131775]
AI models often appear as a black box wherein developers are unable to explain or trace back the reasoning behind a specific decision.
Explainable AI (XAI) is a rapidly growing field of research that helps extract information from models and visualize their results.
The paper provides a brief overview on cybersecurity and the various forms of attack.
It then discusses the use of traditional AI techniques and their associated challenges, which motivates the use of XAI in various applications.
arXiv Detail & Related papers (2022-06-03T02:15:30Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aimed at attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
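As a toy illustration of one basic XRL idea (an assumption for this note, not a method from the review), a tabular agent's Q-values already provide a simple, faithful account of why an action was chosen; the corridor environment below is invented:

```python
# Toy sketch (an assumption, not a method from the review): a tabular
# Q-learner on a 1-D corridor whose Q-values double as a simple,
# faithful explanation of the chosen action.
import numpy as np

n_states, goal = 6, 5        # corridor cells 0..5, reward at the right end
Q = np.zeros((n_states, 2))  # actions: 0 = left, 1 = right
rng = np.random.default_rng(0)

for _ in range(2000):        # random-exploration Q-learning
    s = int(rng.integers(0, n_states))
    a = int(rng.integers(0, 2))
    s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
    r = 1.0 if s2 == goal else 0.0
    Q[s, a] += 0.1 * (r + 0.9 * Q[s2].max() - Q[s, a])

s = 2
a = int(Q[s].argmax())
print(f"In state {s} the agent moves {'right' if a else 'left'}: "
      f"Q(left)={Q[s, 0]:.2f} vs Q(right)={Q[s, 1]:.2f} "
      f"(the chosen action has the higher expected discounted return).")
```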
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.