BYOD Security: A Study of Human Dimensions
- URL: http://arxiv.org/abs/2202.11498v1
- Date: Wed, 23 Feb 2022 13:31:54 GMT
- Title: BYOD Security: A Study of Human Dimensions
- Authors: Kathleen Downer and Maumita Bhattacharya
- Abstract summary: Bring Your Own Device (BYOD) security, along with the associated frameworks and security mechanisms, is growing in prevalence and maturity in Australian organisations.
The aim of this paper is to discover, through a study conducted using a survey questionnaire instrument, how employees practice and perceive the BYOD security mechanisms deployed by Australian businesses.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The prevalence and maturity of Bring Your Own Device (BYOD) security, along with the associated frameworks and security mechanisms in Australian organisations, is growing in a manner broadly similar to other developed nations. During the COVID-19 pandemic, even organisations that were previously reluctant to embrace BYOD have been forced to accept it to facilitate remote work. The aim of this paper is to discover, through a study conducted using a survey questionnaire instrument, how employees practice and perceive the BYOD security mechanisms deployed by Australian businesses, which can help guide the development of future BYOD security frameworks. Three research questions are answered by this study: What levels of awareness do Australian businesses have of BYOD security aspects? How are employees currently responding to the security mechanisms applied by their organisations for mobile devices? What are the potential weaknesses in businesses' IT networks that have a direct effect on BYOD security? Overall, the aim of this research is to illuminate the findings of these research objectives so that they can be used as a basis for building new and strengthening existing BYOD security frameworks, in order to enhance their effectiveness against an ever-growing list of attacks and threats targeting mobile devices in a virtually driven workforce.
Related papers
- Defining and Evaluating Physical Safety for Large Language Models [62.4971588282174]
Large Language Models (LLMs) are increasingly used to control robotic systems such as drones.
Their risks of causing physical threats and harm in real-world applications remain unexplored.
We classify the physical safety risks of drones into four categories: (1) human-targeted threats, (2) object-targeted threats, (3) infrastructure attacks, and (4) regulatory violations.
arXiv Detail & Related papers (2024-11-04T17:41:25Z)
- MobileSafetyBench: Evaluating Safety of Autonomous Agents in Mobile Device Control [20.796190000442053]
We introduce MobileSafetyBench, a benchmark designed to evaluate the safety of device-control agents.
We develop a diverse set of tasks involving interactions with various mobile applications, including messaging and banking applications.
Our experiments demonstrate that while baseline agents, based on state-of-the-art LLMs, perform well in executing helpful tasks, they show poor performance in safety tasks.
arXiv Detail & Related papers (2024-10-23T02:51:43Z)
- Attack Atlas: A Practitioner's Perspective on Challenges and Pitfalls in Red Teaming GenAI [52.138044013005]
As generative AI, particularly large language models (LLMs), becomes increasingly integrated into production applications, new attack surfaces and vulnerabilities emerge, placing a focus on adversarial threats in natural language and multi-modal systems.
Red-teaming has gained importance in proactively identifying weaknesses in these systems, while blue-teaming works to protect against such adversarial attacks.
This work aims to bridge the gap between academic insights and practical security measures for the protection of generative AI systems.
arXiv Detail & Related papers (2024-09-23T10:18:10Z)
- Safeguarding AI Agents: Developing and Analyzing Safety Architectures [0.0]
This paper addresses the need for safety measures in AI systems that collaborate with human teams.
We propose and evaluate three frameworks to enhance safety protocols in AI agent systems.
We conclude that these frameworks can significantly strengthen the safety and security of AI agent systems.
arXiv Detail & Related papers (2024-09-03T10:14:51Z)
- Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress? [59.96471873997733]
We propose an empirical foundation for developing more meaningful safety metrics and define AI safety in a machine learning research context.
We aim to provide a more rigorous framework for AI safety research, advancing the science of safety evaluations and clarifying the path towards measurable progress.
arXiv Detail & Related papers (2024-07-31T17:59:24Z)
- Security of AI Agents [5.468745160706382]
The study and development of AI agents have been boosted by large language models.
In this paper, we identify and describe these vulnerabilities in detail from a system security perspective.
We introduce defense mechanisms corresponding to each vulnerability with meticulous design and experiments to evaluate their viability.
arXiv Detail & Related papers (2024-06-12T23:16:45Z)
- A Safe Harbor for AI Evaluation and Red Teaming [124.89885800509505]
Some researchers fear that conducting such research or releasing their findings will result in account suspensions or legal reprisal.
We propose that major AI developers commit to providing a legal and technical safe harbor.
We believe these commitments are a necessary step towards more inclusive and unimpeded community efforts to tackle the risks of generative AI.
arXiv Detail & Related papers (2024-03-07T20:55:08Z)
- Jailbroken: How Does LLM Safety Training Fail? [92.8748773632051]
"jailbreak" attacks on early releases of ChatGPT elicit undesired behavior.
We investigate why such attacks succeed and how they can be created.
New attacks utilizing our failure modes succeed on every prompt in a collection of unsafe requests.
arXiv Detail & Related papers (2023-07-05T17:58:10Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
- Developing Enterprise Cyber Situational Awareness [0.0]
The topic will focus on the U.S. Department of Defense (DoD) strategy for improving its network security defenses.
The approach will be analyzed to determine whether the DoD's goals address any of its vulnerabilities in protecting its networks.
arXiv Detail & Related papers (2020-09-03T18:16:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.