More Questions than Answers? Lessons from Integrating Explainable AI into a Cyber-AI Tool
- URL: http://arxiv.org/abs/2408.04746v1
- Date: Thu, 8 Aug 2024 20:09:31 GMT
- Title: More Questions than Answers? Lessons from Integrating Explainable AI into a Cyber-AI Tool
- Authors: Ashley Suh, Harry Li, Caitlin Kenney, Kenneth Alperin, Steven R. Gomez
- Abstract summary: We describe a preliminary case study on the use of XAI for source code classification.
We find that the outputs of state-of-the-art saliency explanation techniques are lost in translation when interpreted by people with little AI expertise.
We outline unaddressed gaps in practical and effective XAI, then touch on how emerging technologies like Large Language Models (LLMs) could mitigate these existing obstacles.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We share observations and challenges from an ongoing effort to implement Explainable AI (XAI) in a domain-specific workflow for cybersecurity analysts. Specifically, we briefly describe a preliminary case study on the use of XAI for source code classification, where accurate assessment and timeliness are paramount. We find that the outputs of state-of-the-art saliency explanation techniques (e.g., SHAP or LIME) are lost in translation when interpreted by people with little AI expertise, despite these techniques being marketed for non-technical users. Moreover, we find that popular XAI techniques offer fewer insights for real-time human-AI workflows when they are post hoc and too localized in their explanations. Instead, we observe that cyber analysts need higher-level, easy-to-digest explanations that can offer as little disruption as possible to their workflows. We outline unaddressed gaps in practical and effective XAI, then touch on how emerging technologies like Large Language Models (LLMs) could mitigate these existing obstacles.
Related papers
- Explainable Artificial Intelligence Techniques for Accurate Fault Detection and Diagnosis: A Review [0.0]
We review the eXplainable AI (XAI) tools and techniques in this context.
We focus on their role in making AI decision-making transparent, particularly in critical scenarios where humans are involved.
We discuss current limitations and potential future research that aims to balance explainability with model performance.
arXiv Detail & Related papers (2024-04-17T17:49:38Z)
- Is Task-Agnostic Explainable AI a Myth? [0.0]
Our work serves as a framework for unifying the challenges of contemporary explainable AI (XAI).
We demonstrate that while XAI methods provide supplementary and potentially useful output for machine learning models, researchers and decision-makers should be mindful of their conceptual and technical limitations.
We examine three XAI research avenues spanning image, textual, and graph data, covering saliency, attention, and graph-type explainers.
arXiv Detail & Related papers (2023-07-13T07:48:04Z)
- AI for IT Operations (AIOps) on Cloud Platforms: Reviews, Opportunities and Challenges [60.56413461109281]
Artificial Intelligence for IT Operations (AIOps) aims to combine the power of AI with the big data generated by IT Operations processes.
We discuss in depth the key types of data emitted by IT Operations activities, the scale and challenges in analyzing them, and where they can be helpful.
We categorize the key AIOps tasks as incident detection, failure prediction, root cause analysis, and automated actions.
arXiv Detail & Related papers (2023-04-10T15:38:12Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework for experimenting with generating and evaluating explanations at different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Transcending XAI Algorithm Boundaries through End-User-Inspired Design [27.864338632191608]
A lack of explainability-focused functional support for end users may hinder the safe and responsible use of AI in high-stakes domains.
Our work shows that grounding the technical problem in end users' use of XAI can inspire new research questions.
Such end-user-inspired research questions have the potential to promote social good by democratizing AI and ensuring the responsible use of AI in critical domains.
arXiv Detail & Related papers (2022-08-18T09:44:51Z)
- Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI [65.44737844681256]
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent work toward attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
- Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey [2.7086321720578623]
The black-box nature of deep neural networks challenges their use in mission-critical applications.
XAI promotes a set of tools, techniques, and algorithms that can generate high-quality, interpretable, intuitive, human-understandable explanations of AI decisions.
arXiv Detail & Related papers (2020-06-16T02:58:10Z)
- Questioning the AI: Informing Design Practices for Explainable AI User Experiences [33.81809180549226]
A surge of interest in explainable AI (XAI) has led to a vast collection of algorithmic work on the topic.
We seek to identify gaps between current XAI algorithmic work and the practices needed to create explainable AI products.
We develop an algorithm-informed XAI question bank in which user needs for explainability are represented.
arXiv Detail & Related papers (2020-01-08T12:34:51Z)