Actionable Cybersecurity Notifications for Smart Homes: A User Study on the Role of Length and Complexity
- URL: http://arxiv.org/abs/2510.21508v1
- Date: Fri, 24 Oct 2025 14:36:35 GMT
- Title: Actionable Cybersecurity Notifications for Smart Homes: A User Study on the Role of Length and Complexity
- Authors: Victor Jüttner, Charlotte S. Löffler, Erik Buchmann
- Abstract summary: Intrusion Detection Systems are a prominent approach to detecting cybersecurity threats. Large Language Models can bridge this gap by translating IDS alerts into actionable security notifications. However, it has not yet been clear what an actionable cybersecurity notification should look like.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The proliferation of smart home devices has increased convenience but also introduced cybersecurity risks for everyday users, as many devices lack robust security features. Intrusion Detection Systems are a prominent approach to detecting cybersecurity threats. However, their alerts often use technical terms and require users to interpret them correctly, which is challenging for a typical smart home user. Large Language Models can bridge this gap by translating IDS alerts into actionable security notifications. However, it has not yet been clear what an actionable cybersecurity notification should look like. In this paper, we conduct an experimental online user study with 130 participants to examine how the length and complexity of LLM-generated notifications affect user likability, understandability, and motivation to act. Our results show that intermediate-complexity notifications are the most effective across all user groups, regardless of their technological proficiency. Across the board, users rated beginner-level messages as more effective when they were longer, while expert-level messages were rated marginally more effective when they were shorter. These findings provide insights for designing security notifications that are both actionable and broadly accessible to smart home users.
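The pipeline the abstract describes, rewriting a raw IDS alert into a user-facing notification at a chosen complexity level, can be sketched as prompt construction for an LLM. This is an illustrative sketch only: the prompt wording, the three complexity levels, and the function name are assumptions, not the authors' actual templates.

```python
# Illustrative sketch (not the paper's implementation): build an LLM prompt
# that rewrites a raw IDS alert as an actionable smart-home notification
# at one of three assumed complexity levels.

COMPLEXITY_STYLES = {
    "beginner": "Explain in plain language, avoid technical terms, "
                "and give step-by-step instructions.",
    "intermediate": "Use some technical terms with brief explanations "
                    "and give concrete recommended actions.",
    "expert": "Be concise and technical; state the threat and the fix.",
}

def build_notification_prompt(ids_alert: str, complexity: str) -> str:
    """Return an LLM prompt that turns an IDS alert into a security
    notification tailored to the requested complexity level."""
    if complexity not in COMPLEXITY_STYLES:
        raise ValueError(f"unknown complexity level: {complexity}")
    return (
        "Rewrite the following intrusion detection alert as a security "
        "notification for a smart home user. "
        f"{COMPLEXITY_STYLES[complexity]}\n\n"
        f"Alert: {ids_alert}"
    )

prompt = build_notification_prompt(
    "ET SCAN Suspicious inbound to mySQL port 3306", "intermediate"
)
```

The resulting string would then be sent to an LLM; the study's finding that intermediate complexity works best across user groups would correspond to defaulting to that level here.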
Related papers
- Just Ask: Curious Code Agents Reveal System Prompts in Frontier LLMs [65.6660735371212]
We present JustAsk, a framework that autonomously discovers effective extraction strategies through interaction alone. It formulates extraction as an online exploration problem, using Upper Confidence Bound-based strategy selection and a hierarchical skill space spanning atomic probes and high-level orchestration. Our results expose system prompts as a critical yet largely unprotected attack surface in modern agent systems.
arXiv Detail & Related papers (2026-01-29T03:53:25Z) - CAHICHA: Computer Automated Hardware Interaction test to tell Computer and Humans Apart [0.16385815610837165]
Bots and scrapers with Artificial Intelligence (AI) capabilities can now detect and solve visual challenges, emulate human-like typing patterns, and avoid most security tests. This leaves a vital gap in identifying real human users versus advanced bots. We present a novel technique for distinguishing real human users based on hardware interaction signals.
arXiv Detail & Related papers (2025-11-11T05:21:30Z) - Evaluating Language Model Reasoning about Confidential Information [95.64687778185703]
We study whether language models exhibit contextual robustness, or the capability to adhere to context-dependent safety specifications. We develop a benchmark (PasswordEval) that measures whether language models can correctly determine when a user request is authorized. We find that current open- and closed-source models struggle with this seemingly simple task, and that, perhaps surprisingly, reasoning capabilities do not generally improve performance.
arXiv Detail & Related papers (2025-08-27T15:39:46Z) - Secure Tug-of-War (SecTOW): Iterative Defense-Attack Training with Reinforcement Learning for Multimodal Model Security [63.41350337821108]
We propose Secure Tug-of-War (SecTOW) to enhance the security of multimodal large language models (MLLMs). SecTOW consists of two modules, a defender and an auxiliary attacker, both trained iteratively using reinforcement learning (GRPO). We show that SecTOW significantly improves security while preserving general performance.
arXiv Detail & Related papers (2025-07-29T17:39:48Z) - Design Patterns for Securing LLM Agents against Prompt Injections [26.519964636138585]
Prompt injection attacks exploit the agent's reliance on natural language inputs. We propose a set of principled design patterns for building AI agents with provable resistance to prompt injection.
arXiv Detail & Related papers (2025-06-10T14:23:55Z) - Does Johnny Get the Message? Evaluating Cybersecurity Notifications for Everyday Users [0.0]
Recent approaches use large language models to rewrite brief, technical security alerts into intuitive language. It remains an open question how well such alerts are explained to users. In this work, we introduce the Human-Centered Security Alert Evaluation Framework (HCSAEF).
arXiv Detail & Related papers (2025-05-28T14:58:29Z) - A Systematic Review of Security Communication Strategies: Guidelines and Open Challenges [47.205801464292485]
We identify user difficulties including information overload, technical comprehension, and balancing security awareness with comfort. Our findings reveal consistent communication paradoxes: users require technical details for credibility yet struggle with jargon, and need risk awareness without experiencing anxiety. This work contributes to more effective security communication practices that enable users to recognize and respond to cybersecurity threats appropriately.
arXiv Detail & Related papers (2025-04-02T20:18:38Z) - Towards Trustworthy GUI Agents: A Survey [64.6445117343499]
This survey examines the trustworthiness of GUI agents in five critical dimensions. We identify major challenges such as vulnerability to adversarial attacks and cascading failure modes in sequential decision-making. As GUI agents become more widespread, establishing robust safety standards and responsible development practices is essential.
arXiv Detail & Related papers (2025-03-30T13:26:00Z) - Human-Readable Adversarial Prompts: An Investigation into LLM Vulnerabilities Using Situational Context [45.821481786228226]
We show that situation-driven adversarial full-prompts that leverage situational context are effective but much harder to detect. We developed attacks that use movie scripts as situational contextual frameworks. We enhanced the AdvPrompter framework with p-nucleus sampling to generate diverse human-readable adversarial texts.
arXiv Detail & Related papers (2024-12-20T21:43:52Z) - BreachSeek: A Multi-Agent Automated Penetration Tester [0.0]
BreachSeek is an AI-driven multi-agent software platform that identifies and exploits vulnerabilities without human intervention.
In preliminary evaluations, BreachSeek successfully exploited vulnerabilities in exploitable machines within local networks.
Future developments aim to expand its capabilities, positioning it as an indispensable tool for cybersecurity professionals.
arXiv Detail & Related papers (2024-08-31T19:15:38Z) - HuntGPT: Integrating Machine Learning-Based Anomaly Detection and Explainable AI with Large Language Models (LLMs) [0.09208007322096533]
We present HuntGPT, a specialized intrusion detection dashboard applying a Random Forest classifier.
The paper delves into the system's architecture, components, and technical accuracy, assessed through Certified Information Security Manager (CISM) Practice Exams.
The results demonstrate that conversational agents, supported by LLM and integrated with XAI, provide robust, explainable, and actionable AI solutions in intrusion detection.
arXiv Detail & Related papers (2023-09-27T20:58:13Z) - Enabling Efficient Cyber Threat Hunting With Cyber Threat Intelligence [94.94833077653998]
ThreatRaptor is a system that facilitates threat hunting in computer systems using open-source Cyber Threat Intelligence (OSCTI).
It extracts structured threat behaviors from unstructured OSCTI text and uses a concise and expressive domain-specific query language, TBQL, to hunt for malicious system activities.
Evaluations on a broad set of attack cases demonstrate the accuracy and efficiency of ThreatRaptor in practical threat hunting.
arXiv Detail & Related papers (2020-10-26T14:54:01Z)
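The Upper Confidence Bound strategy selection mentioned in the JustAsk summary above is a standard multi-armed bandit rule. A minimal sketch, assuming each extraction strategy is treated as a bandit arm with observed success rewards (the function name, exploration constant, and reward scheme are illustrative, not the paper's):

```python
import math

def ucb_select(counts, rewards, c=1.4):
    """Pick the arm (e.g. an extraction strategy) with the highest
    UCB1 score: mean reward plus an exploration bonus that shrinks
    as an arm is tried more often. Untried arms are selected first."""
    total = sum(counts)
    best, best_score = 0, float("-inf")
    for i, (n, r) in enumerate(zip(counts, rewards)):
        if n == 0:
            return i  # try every strategy at least once
        score = r / n + c * math.sqrt(math.log(total) / n)
        if score > best_score:
            best, best_score = i, score
    return best
```

For example, an arm with few trials keeps a large exploration bonus, so `ucb_select` can prefer it over an arm with a higher empirical mean, which is the exploration/exploitation trade-off the JustAsk framing relies on.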
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all listed papers) and is not responsible for any consequences.