A Survey of Unikernel Security: Insights and Trends from a Quantitative Analysis
- URL: http://arxiv.org/abs/2406.01872v1
- Date: Tue, 4 Jun 2024 00:51:12 GMT
- Title: A Survey of Unikernel Security: Insights and Trends from a Quantitative Analysis
- Authors: Alex Wollman, John Hastings
- Abstract summary: This research presents a quantitative methodology using TF-IDF to analyze the focus of security discussions within unikernel research literature.
Memory Protection Extensions and Data Execution Prevention were the least frequently occurring topics, while SGX was the most frequent topic.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unikernels, an evolution of LibOSs, are emerging as a virtualization technology to rival those currently used by cloud providers. Unikernels combine the user and kernel space into one "uni"fied memory space and omit functionality that is not necessary for their application to run, thus drastically reducing the required resources. The removed functionality, however, is far-reaching and includes components that have become common security technologies such as Address Space Layout Randomization (ASLR), Data Execution Prevention (DEP), and Non-executable bits (NX bits). This raises questions about the real-world security of unikernels. This research presents a quantitative methodology using TF-IDF to analyze the focus of security discussions within unikernel research literature. Based on a corpus of 33 unikernel-related papers spanning 2013-2023, our analysis found that Memory Protection Extensions and Data Execution Prevention were the least frequently occurring topics, while SGX was the most frequent topic. The findings quantify priorities and assumptions in unikernel security research, bringing to light potential risks from underexplored attack surfaces. The quantitative approach is broadly applicable for revealing trends and gaps in niche security domains.
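To make the methodology concrete, the following is a minimal sketch (not the authors' code) of how security-related terms in a corpus of paper texts could be ranked by TF-IDF, in the spirit of the survey's quantitative analysis. The directory layout and the topic keyword list are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: rank assumed security topics across a corpus of papers by TF-IDF weight.
# Assumes one plain-text file per paper under ./corpus/ (hypothetical layout).
from pathlib import Path

from sklearn.feature_extraction.text import TfidfVectorizer

docs = [p.read_text(encoding="utf-8") for p in sorted(Path("corpus").glob("*.txt"))]

# Illustrative topics of interest; not the paper's exact keyword list.
topics = ["aslr", "dep", "nx", "sgx", "mpx", "smep", "smap"]

vectorizer = TfidfVectorizer(lowercase=True, vocabulary=topics)
tfidf = vectorizer.fit_transform(docs)  # shape: (n_papers, n_topics)

# Sum each topic's TF-IDF weight over all papers, then print topics by prominence.
totals = tfidf.sum(axis=0).A1
for term, score in sorted(zip(topics, totals), key=lambda t: -t[1]):
    print(f"{term:>5s}  {score:.3f}")
```

Under this kind of analysis, a topic such as SGX accumulating the largest aggregate weight while MPX and DEP accumulate the smallest would mirror the trend the survey reports.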
Related papers
- PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data.
The transmitted model updates can potentially leak sensitive user information, and the lack of central control over the local training process leaves the global model susceptible to malicious manipulation of model updates.
We develop a general framework PriRoAgg, utilizing Lagrange coded computing and distributed zero-knowledge proof, to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
arXiv Detail & Related papers (2024-07-12T03:18:08Z)
- Poisoning Prevention in Federated Learning and Differential Privacy via Stateful Proofs of Execution
Federated Learning (FL) and Local Differential Privacy (LDP) have attracted much attention over the past few years.
They share the common limitation of being vulnerable to poisoning attacks.
We propose a system-level approach to remedy this issue based on a novel security notion of Proofs of Stateful Execution.
arXiv Detail & Related papers (2024-04-10T04:18:26Z)
- Topological safeguard for evasion attack interpreting the neural networks' behavior
In this work, a novel detector of evasion attacks is developed.
It focuses on the information carried by the neuron activations that the model produces when an input sample is injected.
For this purpose, substantial data preprocessing is required to feed all of this information into the detector.
arXiv Detail & Related papers (2024-02-12T08:39:40Z)
- Privacy-Preserving Distributed Learning for Residential Short-Term Load Forecasting
Power system load data can inadvertently reveal the daily routines of residential users, posing a risk to their property security.
We introduce a Markovian Switching-based distributed training framework, the convergence of which is substantiated through rigorous theoretical analysis.
Case studies employing real-world power system load data validate the efficacy of our proposed algorithm.
arXiv Detail & Related papers (2024-02-02T16:39:08Z)
- What Can Self-Admitted Technical Debt Tell Us About Security? A Mixed-Methods Study
Self-Admitted Technical Debt (SATD) can be deemed a dreadful source of information on potentially exploitable vulnerabilities and security flaws.
This work investigates the security implications of SATD from a technical and developer-centred perspective.
arXiv Detail & Related papers (2024-01-23T13:48:49Z)
- Log Barriers for Safe Black-box Optimization with Application to Safe Reinforcement Learning
We introduce a general approach for solving high-dimensional non-linear optimization problems in which maintaining safety during learning is crucial.
Our approach, called LBSGD, is based on applying a logarithmic barrier approximation with a carefully chosen step size.
We demonstrate the effectiveness of our approach on minimizing constraint violations in policy optimization tasks in safe reinforcement learning.
arXiv Detail & Related papers (2022-07-21T11:14:47Z)
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
- Towards a Privacy-preserving Deep Learning-based Network Intrusion Detection in Data Distribution Services
Data Distribution Service (DDS) is an innovative approach to communication in ICS/IoT infrastructure and robotics.
Traditional intrusion detection systems (IDS) do not detect anomalies in the publish/subscribe communication model.
This report presents experimental work on simulation and on the application of deep learning for detecting such anomalies.
arXiv Detail & Related papers (2021-06-12T12:53:38Z)
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model's robustness against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- Data Mining with Big Data in Intrusion Detection Systems: A Systematic Literature Review
Cloud computing has become a powerful and indispensable technology for complex, high-performance, and scalable computation.
The rapid rate and volume of data creation has begun to pose significant challenges for data management and security.
The design and deployment of intrusion detection systems (IDS) in the big data setting has, therefore, become a topic of importance.
arXiv Detail & Related papers (2020-05-23T20:57:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.