Detection of Compromised Functions in a Serverless Cloud Environment
- URL: http://arxiv.org/abs/2408.02641v1
- Date: Mon, 5 Aug 2024 17:14:35 GMT
- Title: Detection of Compromised Functions in a Serverless Cloud Environment
- Authors: Danielle Lavi, Oleg Brodt, Dudu Mimran, Yuval Elovici, Asaf Shabtai
- Abstract summary: Serverless computing is an emerging cloud paradigm with serverless functions at its core.
Existing security solutions do not apply to all serverless architectures.
We present an extendable serverless security threat detection model.
- Score: 24.312198733476063
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Serverless computing is an emerging cloud paradigm with serverless functions at its core. While serverless environments enable software developers to focus on developing applications without the need to actively manage the underlying runtime infrastructure, they open the door to a wide variety of security threats that can be challenging to mitigate with existing methods. Existing security solutions do not apply to all serverless architectures, since they require significant modifications to the serverless infrastructure or rely on third-party services for the collection of more detailed data. In this paper, we present an extendable serverless security threat detection model that leverages cloud providers' native monitoring tools to detect anomalous behavior in serverless applications. Our model aims to detect compromised serverless functions by identifying post-exploitation abnormal behavior related to different types of attacks on serverless functions, and therefore, it is a last line of defense. Our approach is not tied to any specific serverless application, is agnostic to the type of threats, and is adaptable through model adjustments. To evaluate our model's performance, we developed a serverless cybersecurity testbed in an AWS cloud environment, which includes two different serverless applications and simulates a variety of attack scenarios that cover the main security threats faced by serverless functions. Our evaluation demonstrates our model's ability to detect all implemented attacks while maintaining a negligible false alarm rate.
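The model described above relies only on the cloud provider's native monitoring data (e.g., CloudWatch in the AWS testbed), so no changes to the serverless infrastructure or to the functions themselves are required. As a rough, minimal sketch of that idea only (the paper does not publish this code, and its actual detection model is more sophisticated), the Python snippet below pulls a standard AWS/Lambda CloudWatch metric for a single function and flags recent deviations from a historical baseline with a simple z-score rule; the function name, metric choice, window sizes, and threshold are illustrative assumptions.

```python
# Minimal sketch: flag anomalous Lambda behavior using only provider-native
# CloudWatch metrics. Function name, metric, windows, and threshold are
# illustrative assumptions, not the paper's actual detection model.
import statistics
from datetime import datetime, timedelta, timezone

import boto3

FUNCTION_NAME = "orders-service"   # hypothetical Lambda function name (assumption)
METRIC = "Duration"                # one of the standard AWS/Lambda CloudWatch metrics
Z_THRESHOLD = 3.0                  # simple stand-in for the paper's learned detector

cloudwatch = boto3.client("cloudwatch")


def fetch_averages(start_hours_ago: int, end_hours_ago: int) -> list[float]:
    """Return 5-minute averages of METRIC between the two offsets from now."""
    now = datetime.now(timezone.utc)
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/Lambda",
        MetricName=METRIC,
        Dimensions=[{"Name": "FunctionName", "Value": FUNCTION_NAME}],
        StartTime=now - timedelta(hours=start_hours_ago),
        EndTime=now - timedelta(hours=end_hours_ago),
        Period=300,
        Statistics=["Average"],
    )
    points = sorted(resp["Datapoints"], key=lambda p: p["Timestamp"])
    return [p["Average"] for p in points]


# Baseline: the 24 hours preceding the last hour; recent window: the last hour.
baseline = fetch_averages(25, 1)
recent = fetch_averages(1, 0)

if len(baseline) >= 2 and recent:
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline) or 1e-9  # avoid division by zero
    for value in recent:
        z = (value - mu) / sigma
        if abs(z) > Z_THRESHOLD:
            print(f"Possible anomaly in {FUNCTION_NAME}: {METRIC}={value:.1f} ms (z={z:.1f})")
```

The fixed per-metric z-score rule above is only a placeholder for the paper's anomaly detection model, which is threat-agnostic and adaptable through model adjustments.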
Related papers
- MASKDROID: Robust Android Malware Detection with Masked Graph Representations [56.09270390096083]
We propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware.
We introduce a masking mechanism into the Graph Neural Network-based framework, forcing MASKDROID to recover the whole input graph.
This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks.
arXiv Detail & Related papers (2024-09-29T07:22:47Z)
- SETC: A Vulnerability Telemetry Collection Framework [0.0]
This paper introduces the Security Exploit Telemetry Collection (SETC) framework.
SETC generates reproducible vulnerability exploit data at scale for robust defensive security research.
This research enables scalable exploit data generation to drive innovations in threat modeling, detection methods, analysis techniques, and strategies.
arXiv Detail & Related papers (2024-06-10T00:13:35Z)
- Penetration Testing of 5G Core Network Web Technologies [53.89039878885825]
We present the first security assessment of the 5G core from a web security perspective.
We use the STRIDE threat modeling approach to define a complete list of possible threat vectors and associated attacks.
Our analysis shows that all of the 5G cores we examined are vulnerable to at least two of our identified attack vectors.
arXiv Detail & Related papers (2024-03-04T09:27:11Z)
- Leveraging AI Planning For Detecting Cloud Security Vulnerabilities [15.503757553097387]
Cloud computing services provide scalable and cost-effective solutions for data storage, processing, and collaboration.
Access control misconfigurations are often the primary driver for cloud attacks.
We develop a PDDL model for detecting security vulnerabilities that can, for example, lead to widespread attacks such as ransomware.
arXiv Detail & Related papers (2024-02-16T03:28:02Z)
- A Study on the Security Requirements Analysis to build a Zero Trust-based Remote Work Environment [2.1961544533969257]
This paper proposes detailed security requirements based on the Zero Trust model and conducts security analyses of various cloud services accordingly.
Based on the security analysis, we identify potential threats to cloud services under the Zero Trust model and propose countermeasures.
arXiv Detail & Related papers (2024-01-08T05:50:20Z)
- FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z)
- When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
arXiv Detail & Related papers (2023-06-09T14:33:26Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
- CodeLMSec Benchmark: Systematically Evaluating and Finding Security Vulnerabilities in Black-Box Code Language Models [58.27254444280376]
Large language models (LLMs) for automatic code generation have achieved breakthroughs in several programming tasks.
Training data for these models is usually collected from the Internet (e.g., from open-source repositories) and is likely to contain faults and security vulnerabilities.
This unsanitized training data can cause the language models to learn these vulnerabilities and propagate them during the code generation procedure.
arXiv Detail & Related papers (2023-02-08T11:54:07Z)
- Preserving Privacy and Security in Federated Learning [21.241705771577116]
We develop a principled framework that provides privacy guarantees for users and detects poisoning attacks launched by them.
Our framework enables the central server to identify poisoned model updates without violating the privacy guarantees of secure aggregation.
arXiv Detail & Related papers (2022-02-07T18:40:38Z)
- Analyzing Machine Learning Approaches for Online Malware Detection in Cloud [0.0]
We present online malware detection based on process-level performance metrics and analyze the effectiveness of different machine learning models.
Our analysis concludes that neural network models most accurately detect malware based on the process-level features of virtual machines in the cloud.
arXiv Detail & Related papers (2021-05-19T17:28:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.