BYOD Security: A Study of Human Dimensions
- URL: http://arxiv.org/abs/2202.11498v1
- Date: Wed, 23 Feb 2022 13:31:54 GMT
- Title: BYOD Security: A Study of Human Dimensions
- Authors: Kathleen Downer and Maumita Bhattacharya
- Abstract summary: Bring Your Own Device (BYOD) security, along with its supporting frameworks and security mechanisms, is growing in prevalence and maturity in Australian organisations.
The aim of this paper is to discover, through a study conducted using a survey questionnaire instrument, how employees practice and perceive the BYOD security mechanisms deployed by Australian businesses.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The prevalence and maturity of Bring Your Own Device (BYOD) security,
along with its supporting frameworks and security mechanisms, are growing in
Australian organisations, much as in other developed nations. During the
COVID-19 pandemic, even organisations that were previously reluctant to embrace
BYOD have been forced to accept it to facilitate remote work. The aim of this
paper is to discover, through a study conducted using a survey questionnaire
instrument, how employees practice and perceive the BYOD security mechanisms
deployed by Australian businesses which can help guide the development of
future BYOD security frameworks. Three research questions are answered by this
study: What levels of awareness do Australian businesses have of BYOD
security aspects? How are employees currently responding to the security
mechanisms applied by their organisations for mobile devices? What are the
potential weaknesses in businesses' IT networks that have a direct effect on
BYOD security? Overall, the aim of this research is to illuminate the findings
of these research objectives so that they can be used as a basis for building
new and strengthening existing BYOD security frameworks in order to enhance
their effectiveness against an ever-growing list of attacks and threats
targeting mobile devices in a virtually driven workforce.
Related papers
- A False Sense of Safety: Unsafe Information Leakage in 'Safe' AI Responses [42.136793654338106]
Large Language Models (LLMs) remain vulnerable to information leakage despite safety methods.
We introduce an inferential threat model called inferential adversaries who exploit impermissible information to achieve malicious goals.
Our work provides the first theoretically grounded understanding of the requirements for releasing safe LLMs and the utility costs involved.
arXiv Detail & Related papers (2024-07-02T16:19:25Z) - ABNet: Attention BarrierNet for Safe and Scalable Robot Learning [58.4951884593569]
Barrier-based methods are among the dominant approaches to safe robot learning.
We propose Attention BarrierNet (ABNet) that is scalable to build larger foundational safe models in an incremental manner.
We demonstrate the strength of ABNet in 2D robot obstacle avoidance, safe robot manipulation, and vision-based end-to-end autonomous driving.
arXiv Detail & Related papers (2024-06-18T19:37:44Z) - Security of AI Agents [5.468745160706382]
The study and development of AI agents have been boosted by large language models.
In this paper, we identify and describe these vulnerabilities in detail from a system security perspective.
We introduce defense mechanisms corresponding to each vulnerability with meticulous design and experiments to evaluate their viability.
arXiv Detail & Related papers (2024-06-12T23:16:45Z) - Managing Security Evidence in Safety-Critical Organizations [10.905169282633256]
This paper presents a study on the maturity of managing security evidence in safety-critical organizations.
We find that the current maturity of managing security evidence is insufficient for the increasing requirements set by certification authorities and standardization bodies.
Part of the reason is educational gaps; another is a lack of processes.
arXiv Detail & Related papers (2024-04-26T11:30:34Z) - A Safe Harbor for AI Evaluation and Red Teaming [124.89885800509505]
Some researchers fear that conducting such research or releasing their findings will result in account suspensions or legal reprisal.
We propose that major AI developers commit to providing a legal and technical safe harbor.
We believe these commitments are a necessary step towards more inclusive and unimpeded community efforts to tackle the risks of generative AI.
arXiv Detail & Related papers (2024-03-07T20:55:08Z) - Blockchain-Based Security Architecture for Unmanned Aerial Vehicles in B5G/6G Services and Beyond: A Comprehensive Approach [4.552065156611815]
Unmanned Aerial Vehicles (UAVs) have evolved into indispensable tools for effectively managing disasters and responding to emergencies.
It is essential to identify and consider the different security challenges in the research and development associated with advanced UAV-based B5G/6G architectures.
arXiv Detail & Related papers (2023-12-12T01:55:04Z) - Jailbroken: How Does LLM Safety Training Fail? [92.8748773632051]
"Jailbreak" attacks on early releases of ChatGPT elicit undesired behavior.
We investigate why such attacks succeed and how they can be created.
New attacks utilizing our failure modes succeed on every prompt in a collection of unsafe requests.
arXiv Detail & Related papers (2023-07-05T17:58:10Z) - Towards Safer Generative Language Models: A Survey on Safety Risks,
Evaluations, and Improvements [76.80453043969209]
This survey presents a framework for safety research pertaining to large models.
We begin by introducing safety issues of wide concern, then delve into safety evaluation methods for large models.
We explore the strategies for enhancing large model safety from training to deployment.
arXiv Detail & Related papers (2023-02-18T09:32:55Z) - Visual Detection of Personal Protective Equipment and Safety Gear on
Industry Workers [49.36909714011171]
We develop a system that will improve workers' safety using a camera that will detect the usage of Personal Protective Equipment (PPE).
Our focus is to implement our system into an entry control point where workers must present themselves to obtain access to a restricted area.
A novelty of this work is that we increase the number of classes to five objects (hardhat, safety vest, safety gloves, safety glasses, and hearing protection).
arXiv Detail & Related papers (2022-12-09T11:50:03Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z) - Developing Enterprise Cyber Situational Awareness [0.0]
The topic will focus on the U.S. Department of Defense strategy towards improving their network security defenses.
The approach will be analyzed to determine if DOD goals address any of their vulnerabilities towards protecting their networks.
arXiv Detail & Related papers (2020-09-03T18:16:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.