Emergent Insight of the Cyber Security Management for Saudi Arabian
Universities: A Content Analysis
- URL: http://arxiv.org/abs/2110.04540v1
- Date: Sat, 9 Oct 2021 10:48:30 GMT
- Title: Emergent Insight of the Cyber Security Management for Saudi Arabian
Universities: A Content Analysis
- Authors: Masmali and Miah
- Abstract summary: The project is designed to assess the cybersecurity management and policies in Saudi Arabian universities.
The subsequent recommendations can be adopted to enhance the security of IT systems.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While cyber security has become a prominent concept of emerging information
governance, the Kingdom of Saudi Arabia has been dealing with severe threats to
individual and organizational IT systems for a long time. These risks have
recently permeated into educational institutions, thereby undermining the
confidentiality of information as well as the delivery of education. Recent
research has identified various causes and possible solutions to the problem.
However, most scholars have considered a reductionist approach, in which the
ability of computer configurations to prevent unwanted intrusions is evaluated
by breaking them down to their constituent parts. This method is inadequate at
studying complex adaptive systems. Therefore, the proposed project is designed
to utilize a holistic stance to assess the cybersecurity management and
policies in Saudi Arabian universities. Qualitative research, entailing a
thorough critical review of ten public universities, will be utilized to
investigate the subject matter. The subsequent recommendations can be adopted
to enhance the security of IT systems, not only in institutional settings but
also in any other environment in which such structures are used.
Related papers
- Llama-3.1-FoundationAI-SecurityLLM-Base-8B Technical Report [50.268821168513654]
We present Foundation-Sec-8B, a cybersecurity-focused large language model (LLM) built on the Llama 3.1 architecture.
We evaluate it across both established and new cybersecurity benchmarks, showing that it matches Llama 3.1-70B and GPT-4o-mini in certain cybersecurity-specific tasks.
By releasing our model to the public, we aim to accelerate progress and adoption of AI-driven tools in both public and private cybersecurity contexts.
arXiv Detail & Related papers (2025-04-28T08:41:12Z) - In-House Evaluation Is Not Enough: Towards Robust Third-Party Flaw Disclosure for General-Purpose AI [93.33036653316591]
We call for three interventions to advance system safety.
First, we propose using standardized AI flaw reports and rules of engagement for researchers.
Second, we propose GPAI system providers adopt broadly-scoped flaw disclosure programs.
Third, we advocate for the development of improved infrastructure to coordinate distribution of flaw reports.
arXiv Detail & Related papers (2025-03-21T05:09:46Z) - Investigating the Security & Privacy Risks from Unsanctioned Technology Use by Educators [8.785737074008576]
This study aims to understand why instructors use unsanctioned applications, how instructors perceive the associated risks, and how it affects institutional security and privacy postures.
We designed and conducted an online survey-based study targeting instructors and administrators from K-12 and higher education institutions.
arXiv Detail & Related papers (2025-02-23T22:52:58Z) - Integrating Cybersecurity Frameworks into IT Security: A Comprehensive Analysis of Threat Mitigation Strategies and Adaptive Technologies [0.0]
The cybersecurity threat landscape is constantly evolving, making it imperative to develop sound frameworks to protect IT structures.
This paper discusses the application of cybersecurity frameworks to IT security, with a focus on the role of such frameworks in addressing the changing nature of cybersecurity threats.
The discussion also singles out technologies such as Artificial Intelligence (AI) and Machine Learning (ML) as the core of real-time threat detection and response mechanisms.
arXiv Detail & Related papers (2025-02-02T03:38:48Z) - Open Problems in Machine Unlearning for AI Safety [61.43515658834902]
Machine unlearning -- the ability to selectively forget or suppress specific types of knowledge -- has shown promise for privacy and data removal tasks.
In this paper, we identify key limitations that prevent unlearning from serving as a comprehensive solution for AI safety.
arXiv Detail & Related papers (2025-01-09T03:59:10Z) - Exploring AI-Enabled Cybersecurity Frameworks: Deep-Learning Techniques, GPU Support, and Future Enhancements [0.4419843514606336]
Emerging cybersecurity systems are incorporating AI techniques, specifically deep-learning algorithms, to enhance their ability to detect incidents, analyze alerts, and respond to events.
While these techniques offer a promising approach to combating dynamic security threats, they often require significant computational resources.
We have identified a total of two deep-learning algorithms that are utilized by three out of 38 selected cybersecurity frameworks.
arXiv Detail & Related papers (2024-12-17T08:14:12Z) - The WMDP Benchmark: Measuring and Reducing Malicious Use With Unlearning [87.1610740406279]
The White House Executive Order on Artificial Intelligence highlights the risks of large language models (LLMs) empowering malicious actors in developing biological, cyber, and chemical weapons.
Current evaluations are private, preventing further research into mitigating risk.
We publicly release the Weapons of Mass Destruction Proxy benchmark, a dataset of 3,668 multiple-choice questions.
arXiv Detail & Related papers (2024-03-05T18:59:35Z) - The Security and Privacy of Mobile Edge Computing: An Artificial Intelligence Perspective [64.36680481458868]
Mobile Edge Computing (MEC) is a new computing paradigm that enables cloud computing and information technology (IT) services to be delivered at the network's edge.
This paper provides a survey of security and privacy in MEC from the perspective of Artificial Intelligence (AI).
We focus on new security and privacy issues, as well as potential solutions from the viewpoints of AI.
arXiv Detail & Related papers (2024-01-03T07:47:22Z) - Data Driven Approaches to Cybersecurity Governance for Board Decision-Making -- A Systematic Review [0.0]
This systematic literature review investigates the existing risk measurement instruments, cybersecurity metrics, and associated models for supporting BoDs.
The findings showed that, although sophisticated cybersecurity tools exist and continue to develop, Boards of Directors have limited access to metrics and models that would let them govern cybersecurity in a language they understand.
arXiv Detail & Related papers (2023-11-29T12:14:01Z) - The New Frontier of Cybersecurity: Emerging Threats and Innovations [0.0]
The research delves into the consequences of these threats on individuals, organizations, and society at large.
The sophistication and diversity of these emerging threats necessitate a multi-layered approach to cybersecurity.
This study emphasizes the importance of implementing effective measures to mitigate these threats.
arXiv Detail & Related papers (2023-11-05T12:08:20Z) - Adversarial Machine Learning and Cybersecurity: Risks, Challenges, and
Legal Implications [0.4665186371356556]
In July 2022, the Center for Security and Emerging Technology at Georgetown University and the Program on Geopolitics, Technology, and Governance at the Stanford Cyber Policy Center convened a workshop of experts to examine the relationship between vulnerabilities in artificial intelligence systems and more traditional types of software vulnerabilities.
Topics discussed included the extent to which AI vulnerabilities can be handled under standard cybersecurity processes, the barriers currently preventing the accurate sharing of information about AI vulnerabilities, legal issues associated with adversarial attacks on AI systems, and potential areas where government support could improve AI vulnerability management and mitigation.
arXiv Detail & Related papers (2023-05-23T22:27:53Z) - New Challenges in Reinforcement Learning: A Survey of Security and
Privacy [26.706957408693363]
Reinforcement learning (RL) is one of the most important branches of AI.
RL has been widely applied in multiple areas, such as healthcare, data markets, autonomous driving, and robotics.
Some of these applications and systems have been shown to be vulnerable to security or privacy attacks.
arXiv Detail & Related papers (2022-12-31T12:30:43Z) - Ensemble learning techniques for intrusion detection system in the
context of cybersecurity [0.0]
The Intrusion Detection System concept was applied using the Orange data mining and machine learning tool to obtain better results.
The main objective of the study was to investigate the Ensemble Learning technique using the Stacking method, supported by the Support Vector Machine (SVM) and k-Nearest Neighbour (kNN) algorithms.
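The stacking approach described above can be sketched with scikit-learn rather than the Orange tool the paper used: SVM and kNN act as base learners whose predictions are combined by a meta-learner. The synthetic dataset and the logistic-regression meta-learner are assumptions for illustration, not details from the paper.

```python
# Minimal sketch of a stacking ensemble with SVM and kNN base learners,
# assuming scikit-learn; the paper's actual Orange workflow may differ.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for an intrusion-detection dataset
# (feature vectors labeled attack vs. normal traffic).
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("svm", SVC(probability=True)), ("knn", KNeighborsClassifier())],
    final_estimator=LogisticRegression(),  # meta-learner: an assumption here
)
stack.fit(X_train, y_train)
print(f"test accuracy: {stack.score(X_test, y_test):.2f}")
```

The meta-learner is trained on cross-validated predictions of the base models, which is what lets stacking outperform either SVM or kNN alone on some datasets.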
arXiv Detail & Related papers (2022-12-21T10:50:54Z) - ThreatKG: An AI-Powered System for Automated Open-Source Cyber Threat Intelligence Gathering and Management [65.0114141380651]
ThreatKG is an automated system for OSCTI gathering and management.
It efficiently collects a large number of OSCTI reports from multiple sources.
It uses specialized AI-based techniques to extract high-quality knowledge about various threat entities.
arXiv Detail & Related papers (2022-12-20T16:13:59Z) - A System for Automated Open-Source Threat Intelligence Gathering and
Management [53.65687495231605]
SecurityKG is a system for automated OSCTI gathering and management.
It uses a combination of AI and NLP techniques to extract high-fidelity knowledge about threat behaviors.
arXiv Detail & Related papers (2021-01-19T18:31:35Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z) - Trustworthy AI Inference Systems: An Industry Research View [58.000323504158054]
We provide an industry research view for approaching the design, deployment, and operation of trustworthy AI inference systems.
We highlight opportunities and challenges in AI systems using trusted execution environments.
We outline areas of further development that require the global collective attention of industry, academia, and government researchers.
arXiv Detail & Related papers (2020-08-10T23:05:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.