Leveraging AI Planning For Detecting Cloud Security Vulnerabilities
- URL: http://arxiv.org/abs/2402.10985v1
- Date: Fri, 16 Feb 2024 03:28:02 GMT
- Title: Leveraging AI Planning For Detecting Cloud Security Vulnerabilities
- Authors: Mikhail Kazdagli, Mohit Tiwari, Akshat Kumar
- Abstract summary: Cloud computing services provide scalable and cost-effective solutions for data storage, processing, and collaboration.
Access control misconfigurations are often the primary driver for cloud attacks.
We develop a PDDL model for detecting security vulnerabilities which can, for example, lead to widespread attacks such as ransomware.
- Score: 17.424669782627497
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cloud computing services provide scalable and cost-effective solutions for
data storage, processing, and collaboration. Alongside their growing
popularity, concerns related to their security vulnerabilities leading to data
breaches and sophisticated attacks such as ransomware are growing. To address
these, first, we propose a generic framework to express relations between
different cloud objects, such as users, datastores, and security roles, to model
access control policies in cloud systems. Access control misconfigurations are
often the primary driver for cloud attacks. Second, we develop a PDDL model for
detecting security vulnerabilities which can, for example, lead to widespread
attacks such as ransomware and sensitive data exfiltration, among others. A
planner can then generate attacks to identify such vulnerabilities in the cloud.
Finally, we test our approach on 14 real Amazon AWS cloud configurations of
different commercial organizations. Our system can identify a broad range of
security vulnerabilities, which state-of-the-art industry tools cannot detect.
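The abstract's idea of letting a planner search for attack paths over user/role/datastore relations can be illustrated with a toy sketch. The entity names, edges, and the breadth-first search below are all hypothetical stand-ins chosen for illustration; the paper's actual PDDL model and planner are far richer than this.

```python
from collections import deque

# Toy access-control model (hypothetical names): edges encode which
# principals can assume which roles and which roles can reach which
# datastores, loosely mirroring the paper's user/role/datastore relations.
EDGES = {
    "external_user": ["role_dev"],          # a misconfigured trust policy
    "role_dev":      ["role_admin"],        # over-permissive role assumption
    "role_admin":    ["datastore_backups"], # write access to backups
}

def find_attack_path(start, target):
    """Breadth-first search standing in for the AI planner: returns a
    shortest chain of privilege escalations from start to target, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target:
            return path
        for nxt in EDGES.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_attack_path("external_user", "datastore_backups"))
# -> ['external_user', 'role_dev', 'role_admin', 'datastore_backups']
```

A non-None result is the analogue of the planner finding an "attack": a concrete chain of misconfigurations that lets an outside principal reach a sensitive datastore, e.g. a ransomware-style path to backups.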
Related papers
- How to integrate cloud service, data analytic and machine learning technique to reduce cyber risks associated with the modern cloud based infrastructure [0.0]
Combination of cloud technology, machine learning, and data visualization techniques allows hybrid enterprise networks to hold massive volumes of data.
Traditional security technologies are unable to cope with the rapid data explosion in cloud platforms.
Machine learning powered security solutions and data visualization techniques are playing instrumental roles in detecting security threats and data breaches, and in automatically finding software vulnerabilities.
arXiv Detail & Related papers (2024-05-19T16:10:03Z)
- CloudFort: Enhancing Robustness of 3D Point Cloud Classification Against Backdoor Attacks via Spatial Partitioning and Ensemble Prediction [4.481857838188627]
We propose CloudFort, a novel defense mechanism designed to enhance the robustness of 3D point cloud classifiers against backdoor attacks.
Our results show that CloudFort significantly enhances the security of 3D point cloud classification models without compromising their accuracy on benign samples.
arXiv Detail & Related papers (2024-04-22T09:55:50Z)
- Emergent (In)Security of Multi-Cloud Environments [3.3819025097691537]
A majority of IT organizations have workloads spread across multiple cloud service providers, and their multi-cloud environments continue to grow.
The increase in the number of attack vectors creates a challenge of how to prioritize mitigations and countermeasures.
We conducted an analysis of multi-cloud threat vectors enabling calculation and prioritization for the identified mitigations and countermeasures.
arXiv Detail & Related papers (2023-11-02T14:02:33Z)
- Security Challenges for Cloud or Fog Computing-Based AI Applications [0.0]
Securing the underlying Cloud or Fog services is essential.
Because the requirements for AI applications can also be different, we differentiate according to whether they are used in the Cloud or in a Fog Computing network.
We conclude by outlining specific information security requirements for AI applications.
arXiv Detail & Related papers (2023-10-30T11:32:50Z)
- Exploring Security Practices in Infrastructure as Code: An Empirical Study [54.669404064111795]
Cloud computing has become popular thanks to the widespread use of Infrastructure as Code (IaC) tools.
The scripting process does not automatically prevent practitioners from introducing misconfigurations, vulnerabilities, or privacy risks.
Ensuring security relies on practitioners' understanding and adoption of explicit policies, guidelines, or best practices.
arXiv Detail & Related papers (2023-08-07T23:43:32Z)
- FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
- Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection [64.67495502772866]
Large Language Models (LLMs) are increasingly being integrated into various applications.
We show how attackers can override original instructions and employed controls using Prompt Injection attacks.
We derive a comprehensive taxonomy from a computer security perspective to systematically investigate impacts and vulnerabilities.
arXiv Detail & Related papers (2023-02-23T17:14:38Z)
- Analyzing Machine Learning Approaches for Online Malware Detection in Cloud [0.0]
We present online malware detection based on process level performance metrics and analyze the effectiveness of different machine learning models.
Our analysis concludes that neural network models most accurately detect malware based on the process-level features of virtual machines in the cloud.
arXiv Detail & Related papers (2021-05-19T17:28:12Z)
- Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses [150.64470864162556]
This work systematically categorizes and discusses a wide range of dataset vulnerabilities and exploits.
In addition to describing various poisoning and backdoor threat models and the relationships among them, we develop their unified taxonomy.
arXiv Detail & Related papers (2020-12-18T22:38:47Z)
- A Privacy-Preserving Distributed Architecture for Deep-Learning-as-a-Service [68.84245063902908]
This paper introduces a novel distributed architecture for deep-learning-as-a-service.
It is able to preserve the user sensitive data while providing Cloud-based machine and deep learning services.
arXiv Detail & Related papers (2020-03-30T15:12:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.