"What Keeps People Secure is That They Met The Security Team": Deconstructing Drivers And Goals of Organizational Security Awareness
- URL: http://arxiv.org/abs/2404.18365v1
- Date: Mon, 29 Apr 2024 02:10:35 GMT
- Title: "What Keeps People Secure is That They Met The Security Team": Deconstructing Drivers And Goals of Organizational Security Awareness
- Authors: Jonas Hielscher, Simon Parkin
- Abstract summary: Security awareness campaigns in organizations now collectively cost billions of dollars annually.
Despite this, the basis of what security awareness managers do, and what determines it, is unclear.
We identify that success in awareness management is fragile while having the potential to improve.
- Score: 4.711430413139394
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Security awareness campaigns in organizations now collectively cost billions of dollars annually. There is increasing focus on ensuring certain security behaviors among employees. On the surface, this would imply a user-centered view of security in organizations. Despite this, the basis of what security awareness managers do, and what determines it, is unclear. We conducted n=15 semi-structured interviews with full-time security awareness managers, with experience across national and international companies in European countries, each with thousands of employees. Through thematic analysis, we identify that success in awareness management is fragile while having the potential to improve; there is a range of restrictions, and mismatched drivers and goals for security awareness, affecting how it is structured, delivered, measured, and improved. We find that security awareness as a practice is underspecified, and split between messaging around secure behaviors and connecting to employees, with a lack of recognition for the measures that awareness managers regard as important. We discuss ways forward, including alternative indicators of success, and security usability advocacy for employees.
Related papers
- AI Risk Management Should Incorporate Both Safety and Security
We argue that stakeholders in AI risk management should be aware of the nuances, synergies, and interplay between safety and security.
We introduce a unified reference framework to clarify the differences and interplay between AI safety and AI security.
arXiv Detail & Related papers (2024-05-29T21:00:47Z)
- Individual and Contextual Variables of Cyber Security Behaviour -- An empirical analysis of national culture, industry, organisation, and individual variables of (in)secure human behaviour
National culture, industry type, and organisational security culture are influential variables of individuals' security behaviour.
Security awareness, security knowledge, and prior experience with security incidents are found to be influential variables of security behaviour.
Findings provide practical insights for organisations regarding the susceptibility of groups of people to insecure behaviour.
arXiv Detail & Related papers (2024-05-25T12:57:17Z)
- Enhancing Security Awareness Through Gamified Approaches
Gamification is a new concept in the field of information security awareness training (SAT) campaigns.
This paper examines the effectiveness of gamification in promoting security awareness of smart meter components among smart grid users/operators.
The scores of participants in the three levels improved by 40%, 35%, and 29%, respectively.
arXiv Detail & Related papers (2024-04-13T17:32:05Z)
- TrustAgent: Towards Safe and Trustworthy LLM-based Agents through Agent Constitution
This paper presents an Agent-Constitution-based agent framework, TrustAgent, as an initial investigation into improving the safety and trustworthiness of LLM-based agents.
We demonstrate how a pre-planning strategy injects safety knowledge into the model prior to plan generation, an in-planning strategy bolsters safety during plan generation, and a post-planning strategy ensures safety through post-planning inspection.
We explore the intricate relationships between safety and helpfulness, and between the model's reasoning ability and its efficacy as a safe agent.
arXiv Detail & Related papers (2024-02-02T17:26:23Z)
- PsySafe: A Comprehensive Framework for Psychological-based Attack, Defense, and Evaluation of Multi-agent System Safety
Multi-agent systems, when enhanced with Large Language Models (LLMs), exhibit profound capabilities in collective intelligence.
However, the potential misuse of this intelligence for malicious purposes presents significant risks.
We propose a framework (PsySafe) grounded in agent psychology, focusing on identifying how dark personality traits in agents can lead to risky behaviors.
Our experiments reveal several intriguing phenomena, such as the collective dangerous behaviors among agents, agents' self-reflection when engaging in dangerous behavior, and the correlation between agents' psychological assessments and dangerous behaviors.
arXiv Detail & Related papers (2024-01-22T12:11:55Z)
- From Compliance to Impact: Tracing the Transformation of an Organizational Security Awareness Program
We conduct a year-long case study of a security awareness program in a U.S. government agency.
Our findings reveal the challenges and practices involved in the progression of a security awareness program.
arXiv Detail & Related papers (2023-09-14T14:01:05Z)
- Safety Margins for Reinforcement Learning
We show how to leverage proxy criticality metrics to generate safety margins.
We evaluate our approach on learned policies from APE-X and A3C within an Atari environment.
arXiv Detail & Related papers (2023-07-25T16:49:54Z)
- Towards Safer Generative Language Models: A Survey on Safety Risks, Evaluations, and Improvements
This survey presents a framework for safety research pertaining to large models.
We begin by introducing safety issues of wide concern, then delve into safety evaluation methods for large models.
We explore the strategies for enhancing large model safety from training to deployment.
arXiv Detail & Related papers (2023-02-18T09:32:55Z)
- Getting Users Smart Quick about Security: Results from 90 Minutes of Using a Persuasive Toolkit for Facilitating Information Security Problem Solving by Non-Professionals
A balanced level of user engagement in security is difficult to achieve due to differences in priorities between the business perspective and the security perspective.
We have developed a persuasive software toolkit to engage users in structured discussions about security vulnerabilities in their company.
In the research reported here we examine how non-professionals perceived security problems through a short-term use of the toolkit.
arXiv Detail & Related papers (2022-09-06T11:37:21Z)
- Learning to Be Cautious
A key challenge in the field of reinforcement learning is to develop agents that behave cautiously in novel situations.
We present a sequence of tasks where cautious behavior becomes increasingly non-obvious, as well as an algorithm to demonstrate that it is possible for a system to learn to be cautious.
arXiv Detail & Related papers (2021-10-29T16:52:45Z)
- SMEs' Confidentiality Concerns for Security Information Sharing
Small and medium-sized enterprises are considered an essential part of the EU economy; however, they are highly vulnerable to cyberattacks.
This paper presents the results of semi-structured interviews with seven chief information security officers of SMEs to evaluate the impact of online consent communication on motivation for information sharing.
The findings demonstrate that online consent with multiple options for indicating a suitable level of agreement improved motivation for information sharing.
arXiv Detail & Related papers (2020-07-13T10:59:40Z)