Toward Explainable Users: Using NLP to Enable AI to Understand Users'
Perceptions of Cyber Attacks
- URL: http://arxiv.org/abs/2106.01998v1
- Date: Thu, 3 Jun 2021 17:17:16 GMT
- Title: Toward Explainable Users: Using NLP to Enable AI to Understand Users'
Perceptions of Cyber Attacks
- Authors: Faranak Abri, Luis Felipe Gutierrez, Chaitra T. Kulkarni, Akbar Siami
Namin, Keith S. Jones
- Abstract summary: To the best of our knowledge, this paper is the first to introduce the use of AI techniques in explaining and modeling users' behavior and their perceptions of a given context.
- Score: 2.099922236065961
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To understand how end-users conceptualize the consequences of cyber
security attacks, we performed a card sorting study, a well-known technique in
the Cognitive Sciences, in which participants were free to group the given
consequences of chosen cyber attacks into as many categories as they wished,
using whatever rationales they saw fit. The results of the open card sorting
study showed a large amount of inter-participant variation, which led the
research team to ask how participants had actually comprehended the
consequences of the security attacks. To explore whether users' mental models
and behavior can be explained through Artificial Intelligence (AI) techniques,
the research team compared the card sorting data with the outputs of a number
of Natural Language Processing (NLP) techniques, with the goal of understanding
how participants perceived and interpreted the consequences of cyber attacks
written in natural language. The NLP-based exploration revealed that
participants had mostly grouped cyber attack consequences by matching
individual keywords in each sentence and had paid less attention to the
semantics behind the descriptions of those consequences. The results reported
in this paper appear to be useful and important for understanding cyber attacks
from users' perspectives. To the best of our knowledge, this paper is the first
to introduce the use of AI techniques in explaining and modeling users'
behavior and their perceptions of a given context. The novel idea introduced
here is explaining users using AI.
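As a rough, hypothetical illustration of the comparison described above (not the authors' actual pipeline), the sketch below clusters a few placeholder consequence statements once by raw keyword overlap and once by a TF-IDF representation, then scores each clustering against a mock participant grouping using the adjusted Rand index. The example sentences, group labels, number of clusters, and choice of scikit-learn components are all assumptions made for illustration.

```python
# Hypothetical sketch: compare a participant's card-sort grouping against
# NLP-based groupings built from (a) keyword overlap and (b) TF-IDF features.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score

# Placeholder consequence statements (not the study's actual stimuli).
consequences = [
    "Attacker steals the user's stored passwords",
    "Passwords are leaked to a third party",
    "The service becomes unavailable to users",
    "Users cannot access the website during the attack",
]
# One participant's card-sort groups, encoded as cluster labels (made up).
participant_groups = [0, 0, 1, 1]

def cluster_labels(matrix, n_clusters=2):
    """Agglomerative clustering on a dense document-feature matrix."""
    model = AgglomerativeClustering(n_clusters=n_clusters)
    return model.fit_predict(matrix.toarray())

# (a) Keyword-level representation: binary bag of words.
keyword_matrix = CountVectorizer(
    binary=True, stop_words="english"
).fit_transform(consequences)

# (b) A crude "semantic" representation: TF-IDF with unigrams and bigrams
#     (a sentence-embedding model would be a stronger choice).
semantic_matrix = TfidfVectorizer(
    ngram_range=(1, 2), stop_words="english"
).fit_transform(consequences)

for name, matrix in [("keyword overlap", keyword_matrix),
                     ("tf-idf", semantic_matrix)]:
    labels = cluster_labels(matrix)
    score = adjusted_rand_score(participant_groups, labels)
    print(f"{name:>15}: ARI vs participant grouping = {score:.2f}")
```

Under this kind of comparison, a keyword-overlap clustering that agrees with participants' groupings better than the semantic one would be consistent with the paper's observation that participants relied mainly on individual keywords rather than on sentence semantics.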
Related papers
- Understanding Learner-LLM Chatbot Interactions and the Impact of Prompting Guidelines [9.834055425277874]
This study investigates learner-AI interactions through an educational experiment in which participants receive structured guidance on effective prompting.
To assess user behavior and prompting efficacy, we analyze a dataset of 642 interactions from 107 users.
Our findings provide a deeper understanding of how users engage with Large Language Models and the role of structured prompting guidance in enhancing AI-assisted communication.
arXiv Detail & Related papers (2025-04-10T15:20:43Z)
- Let people fail! Exploring the influence of explainable virtual and robotic agents in learning-by-doing tasks [45.23431596135002]
This study compares the effects of classic vs. partner-aware explanations on human behavior and performance during a learning-by-doing task.
Results indicated that partner-aware explanations influenced participants differently based on the type of artificial agents involved.
arXiv Detail & Related papers (2024-11-15T13:22:04Z)
- A Survey on Offensive AI Within Cybersecurity [1.8206461789819075]
This survey paper on offensive AI will comprehensively cover various aspects related to attacks against and using AI systems.
It will delve into the impact of offensive AI practices on different domains, including consumer, enterprise, and public digital infrastructure.
The paper will explore adversarial machine learning, attacks against AI models, infrastructure, and interfaces, along with offensive techniques like information gathering, social engineering, and weaponized AI.
arXiv Detail & Related papers (2024-09-26T17:36:22Z)
- Unmasking the Shadows of AI: Investigating Deceptive Capabilities in Large Language Models [0.0]
This research critically navigates the intricate landscape of AI deception, concentrating on the deceptive behaviours of Large Language Models (LLMs).
My objective is to elucidate this issue, examine the discourse surrounding it, and subsequently delve into its categorization and ramifications.
arXiv Detail & Related papers (2024-02-07T00:21:46Z)
- A reading survey on adversarial machine learning: Adversarial attacks and their understanding [6.1678491628787455]
Adversarial Machine Learning exploits and seeks to understand the vulnerabilities that cause neural networks to misclassify inputs that are close to the original.
A class of algorithms called adversarial attacks is proposed to make neural networks misclassify inputs for various tasks in different domains.
This article provides a survey of existing adversarial attacks and their understanding from different perspectives.
arXiv Detail & Related papers (2023-08-07T07:37:26Z)
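As a toy illustration of the gradient-based attacks such surveys cover, the sketch below applies an FGSM-style perturbation to a small logistic-regression model using numpy only. The weights, input, label, and perturbation budget are made-up placeholders, and the example is not drawn from the survey itself.

```python
# Illustrative FGSM-style attack on a toy logistic-regression classifier.
import numpy as np

w = np.array([1.5, -2.0, 0.5])   # fixed model weights (assumed)
b = 0.1                           # bias term (assumed)

def predict_proba(x):
    """Probability that x belongs to class 1 under the logistic model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([0.8, 0.3, -0.4])    # a clean input classified as class 1
y = 1                             # its true label
eps = 0.25                        # perturbation budget

# For this model, the gradient of the cross-entropy loss w.r.t. the input
# is (p - y) * w; FGSM steps in the direction of the gradient's sign.
grad_x = (predict_proba(x) - y) * w
x_adv = x + eps * np.sign(grad_x)

print("clean prediction      :", predict_proba(x))
print("adversarial prediction:", predict_proba(x_adv))
```

Even this linear toy model flips its prediction once each feature is nudged by eps in the direction of the loss gradient's sign, which is the core idea behind FGSM on deep networks.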
- Informing Autonomous Deception Systems with Cyber Expert Performance Data [0.0]
This paper explores the potential to use Inverse Reinforcement Learning (IRL) to gain insight into attacker actions, utilities of those actions, and ultimately decision points which cyber deception could thwart.
The Tularosa study, as one example, provides experimental data of real-world techniques and tools commonly used by attackers, from which core data can be leveraged to inform an autonomous cyber defense system.
arXiv Detail & Related papers (2021-08-31T20:28:09Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups, people with and without an AI background, perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
- Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.