"What Keeps People Secure is That They Met The Security Team": Deconstructing Drivers And Goals of Organizational Security Awareness
- URL: http://arxiv.org/abs/2404.18365v1
- Date: Mon, 29 Apr 2024 02:10:35 GMT
- Title: "What Keeps People Secure is That They Met The Security Team": Deconstructing Drivers And Goals of Organizational Security Awareness
- Authors: Jonas Hielscher, Simon Parkin
- Abstract summary: Security awareness campaigns in organizations now collectively cost billions of dollars annually.
Despite this, the basis of what security awareness managers do, and what determines it, remains unclear.
We identify that success in awareness management is fragile while having the potential to improve.
- Score: 4.711430413139394
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Security awareness campaigns in organizations now collectively cost billions of dollars annually. There is increasing focus on ensuring certain security behaviors among employees. On the surface, this would imply a user-centered view of security in organizations. Despite this, the basis of what security awareness managers do and what decides this are unclear. We conducted n=15 semi-structured interviews with full-time security awareness managers, with experience across various national and international companies in European countries, with thousands of employees. Through thematic analysis, we identify that success in awareness management is fragile while having the potential to improve; there are a range of restrictions, and mismatched drivers and goals for security awareness, affecting how it is structured, delivered, measured, and improved. We find that security awareness as a practice is underspecified, and split between messaging around secure behaviors and connecting to employees, with a lack of recognition for the measures that awareness managers regard as important. We discuss ways forward, including alternative indicators of success, and security usability advocacy for employees.
Related papers
- LabSafety Bench: Benchmarking LLMs on Safety Issues in Scientific Labs [80.45174785447136]
Laboratory accidents pose significant risks to human life and property.
Despite advancements in safety training, laboratory personnel may still unknowingly engage in unsafe practices.
There is growing interest in using large language models (LLMs) for guidance in various fields.
arXiv Detail & Related papers (2024-10-18T05:21:05Z)
- Multimodal Situational Safety [73.63981779844916]
We present the first evaluation and analysis of a novel safety challenge termed Multimodal Situational Safety.
For an MLLM to respond safely, whether through language or action, it often needs to assess the safety implications of a language query within its corresponding visual context.
We develop the Multimodal Situational Safety benchmark (MSSBench) to assess the situational safety performance of current MLLMs.
arXiv Detail & Related papers (2024-10-08T16:16:07Z)
- Cybersecurity Challenge Analysis of Work-from-Anywhere (WFA) and Recommendations guided by a User Study [1.1749564892273827]
Many organizations were forced to transition quickly to the work-from-anywhere (WFA) model in order to continue operating and remain in business under the restrictions imposed during the COVID-19 pandemic.
This paper attempts to uncover some challenges and implications related to the cybersecurity of the WFA model.
We conducted an online user study to investigate the readiness and cybersecurity awareness of employers and their employees who shifted to work remotely from anywhere.
arXiv Detail & Related papers (2024-09-11T18:47:04Z)
- Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress? [59.96471873997733]
We propose an empirical foundation for developing more meaningful safety metrics and define AI safety in a machine learning research context.
We aim to provide a more rigorous framework for AI safety research, advancing the science of safety evaluations and clarifying the path towards measurable progress.
arXiv Detail & Related papers (2024-07-31T17:59:24Z)
- AI Risk Management Should Incorporate Both Safety and Security [185.68738503122114]
We argue that stakeholders in AI risk management should be aware of the nuances, synergies, and interplay between safety and security.
We introduce a unified reference framework to clarify the differences and interplay between AI safety and AI security.
arXiv Detail & Related papers (2024-05-29T21:00:47Z)
- Individual and Contextual Variables of Cyber Security Behaviour -- An empirical analysis of national culture, industry, organisation, and individual variables of (in)secure human behaviour [0.0]
National culture, industry type, and organisational security culture are influential variables of individuals' security behaviour.
Security awareness, security knowledge, and prior experience with security incidents are found to be influential variables of security behaviour.
Findings provide practical insights for organisations regarding the susceptibility of groups of people to insecure behaviour.
arXiv Detail & Related papers (2024-05-25T12:57:17Z)
- Enhancing Security Awareness Through Gamified Approaches [0.21990652930491858]
Gamification is a new concept in the field of information security awareness training (SAT) campaigns.
This paper examines the effectiveness of gamification in promoting security awareness of smart meter components among smart grid users/operators.
The scores of participants in the three levels improved by 40%, 35%, and 29%, respectively.
arXiv Detail & Related papers (2024-04-13T17:32:05Z)
- PsySafe: A Comprehensive Framework for Psychological-based Attack, Defense, and Evaluation of Multi-agent System Safety [70.84902425123406]
Multi-agent systems, when enhanced with Large Language Models (LLMs), exhibit profound capabilities in collective intelligence.
However, the potential misuse of this intelligence for malicious purposes presents significant risks.
We propose a framework (PsySafe) grounded in agent psychology, focusing on identifying how dark personality traits in agents can lead to risky behaviors.
Our experiments reveal several intriguing phenomena, such as the collective dangerous behaviors among agents, agents' self-reflection when engaging in dangerous behavior, and the correlation between agents' psychological assessments and dangerous behaviors.
arXiv Detail & Related papers (2024-01-22T12:11:55Z)
- From Compliance to Impact: Tracing the Transformation of an Organizational Security Awareness Program [3.3916160303055567]
We conduct a year-long case study of a security awareness program in a U.S. government agency.
Our findings reveal the challenges and practices involved in the progression of a security awareness program.
arXiv Detail & Related papers (2023-09-14T14:01:05Z)
- Safety Margins for Reinforcement Learning [53.10194953873209]
We show how to leverage proxy criticality metrics to generate safety margins (an illustrative sketch of this idea appears after this list).
We evaluate our approach on learned policies from APE-X and A3C within an Atari environment.
arXiv Detail & Related papers (2023-07-25T16:49:54Z)
- Getting Users Smart Quick about Security: Results from 90 Minutes of Using a Persuasive Toolkit for Facilitating Information Security Problem Solving by Non-Professionals [2.4923006485141284]
A balanced level of user engagement in security is difficult to achieve due to differences in priorities between the business perspective and the security perspective.
We have developed a persuasive software toolkit to engage users in structured discussions about security vulnerabilities in their company.
In the research reported here we examine how non-professionals perceived security problems through a short-term use of the toolkit.
arXiv Detail & Related papers (2022-09-06T11:37:21Z)
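The Safety Margins entry above names a concrete computational idea: deriving a safety margin from a proxy criticality metric computed over a learned policy's value estimates. The minimal Python sketch below illustrates one plausible reading of that idea under stated assumptions; the specific metric (gap between the best and average Q-values), the threshold, and all function names are hypothetical choices for illustration, not the paper's actual definitions.

```python
import numpy as np

def proxy_criticality(q_values: np.ndarray) -> float:
    """Gap between the best action's value and the average action value.

    A large gap suggests the choice of action matters a lot in this state
    (high criticality); a small gap suggests most actions are roughly
    interchangeable (low criticality). This formula is an assumption for
    illustration only, not the metric defined in the paper.
    """
    return float(q_values.max() - q_values.mean())

def exceeds_safety_margin(q_values: np.ndarray, threshold: float = 0.5) -> bool:
    """Flag states whose proxy criticality exceeds a (hypothetical) threshold."""
    return proxy_criticality(q_values) > threshold

if __name__ == "__main__":
    # Simulated Q-values for two 4-action states from a trained agent.
    q_low = np.array([1.00, 0.98, 1.01, 0.99])   # actions nearly equivalent
    q_high = np.array([5.0, 0.2, 0.1, 0.3])      # one action clearly matters
    print(proxy_criticality(q_low), exceeds_safety_margin(q_low))    # small gap -> False
    print(proxy_criticality(q_high), exceeds_safety_margin(q_high))  # large gap -> True
```

In this framing, a human overseer or monitoring system could ignore low-criticality states and concentrate attention on the flagged ones, which is the practical appeal of margin-style metrics.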
This list is automatically generated from the titles and abstracts of the papers on this site.