Leveraging AI Planning For Detecting Cloud Security Vulnerabilities
- URL: http://arxiv.org/abs/2402.10985v2
- Date: Fri, 26 Jul 2024 01:37:38 GMT
- Title: Leveraging AI Planning For Detecting Cloud Security Vulnerabilities
- Authors: Mikhail Kazdagli, Mohit Tiwari, Akshat Kumar
- Abstract summary: Cloud computing services provide scalable and cost-effective solutions for data storage, processing, and collaboration.
Access control misconfigurations are often the primary driver for cloud attacks.
We develop a PDDL model for detecting security vulnerabilities that can, for example, lead to widespread attacks such as ransomware.
- Score: 15.503757553097387
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cloud computing services provide scalable and cost-effective solutions for data storage, processing, and collaboration. Alongside their growing popularity, concerns related to their security vulnerabilities leading to data breaches and sophisticated attacks such as ransomware are growing. To address these concerns, we first propose a generic framework to express relations between different cloud objects, such as users, datastores, and security roles, in order to model access control policies in cloud systems. Access control misconfigurations are often the primary driver for cloud attacks. Second, we develop a PDDL model for detecting security vulnerabilities which can, for example, lead to widespread attacks such as ransomware and sensitive data exfiltration, among others. A planner can then generate attacks to identify such vulnerabilities in the cloud. Finally, we test our approach on 14 real Amazon AWS cloud configurations of different commercial organizations. Our system can identify a broad range of security vulnerabilities, which state-of-the-art industry tools cannot detect.
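To make the modeling idea concrete, the following is a minimal, hypothetical sketch of how cloud objects and access-control relations could be rendered as a PDDL domain and problem for an off-the-shelf planner to search for attack paths. The predicates, action, and object names (can-assume, grants-read, pii-bucket, etc.) are illustrative assumptions, not the paper's actual encoding.

```python
# Minimal sketch: encode a toy cloud configuration as PDDL so that a classical
# planner can search for an attack path (e.g., exfiltration of a sensitive bucket).
# All predicates, the action, and the object names are illustrative, not the paper's model.

DOMAIN = """
(define (domain cloud-access)
  (:requirements :strips :typing)
  (:types user role datastore)
  (:predicates
    (can-assume ?u - user ?r - role)          ; user may assume a role
    (grants-read ?r - role ?d - datastore)    ; role grants read on a datastore
    (sensitive ?d - datastore)                ; datastore holds sensitive data
    (compromised ?u - user)                   ; attacker controls this user
    (exfiltrated ?d - datastore))             ; goal condition
  (:action assume-and-read
    :parameters (?u - user ?r - role ?d - datastore)
    :precondition (and (compromised ?u) (can-assume ?u ?r) (grants-read ?r ?d))
    :effect (exfiltrated ?d)))
"""

def problem_from_config(users, roles, datastores, can_assume, grants_read,
                        sensitive, compromised):
    """Render a toy cloud configuration as a PDDL problem string."""
    objs = (" ".join(users) + " - user " + " ".join(roles) + " - role " +
            " ".join(datastores) + " - datastore")
    init = []
    init += [f"(can-assume {u} {r})" for u, r in can_assume]
    init += [f"(grants-read {r} {d})" for r, d in grants_read]
    init += [f"(sensitive {d})" for d in sensitive]
    init += [f"(compromised {u})" for u in compromised]
    goal = " ".join(f"(exfiltrated {d})" for d in sensitive)
    return (f"(define (problem leak) (:domain cloud-access)\n"
            f"  (:objects {objs})\n"
            f"  (:init {' '.join(init)})\n"
            f"  (:goal (and {goal})))")

if __name__ == "__main__":
    pddl = problem_from_config(
        users=["alice", "intern"], roles=["admin", "analyst"],
        datastores=["pii-bucket"],
        can_assume=[("intern", "analyst")],
        grants_read=[("analyst", "pii-bucket")],   # overly permissive role
        sensitive=["pii-bucket"], compromised=["intern"])
    print(DOMAIN)
    print(pddl)
```

Feeding both strings to any STRIPS-capable planner (e.g., Fast Downward) would, in this toy configuration, return a plan such as (assume-and-read intern analyst pii-bucket), i.e., an attack path that exposes the overly permissive role.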
Related papers
- Countering Autonomous Cyber Threats [40.00865970939829]
Foundation Models present dual-use concerns broadly and within the cyber domain specifically.
Recent research has shown the potential for these advanced models to inform or independently execute offensive cyberspace operations.
This work evaluates several state-of-the-art FMs on their ability to compromise machines in an isolated network and investigates defensive mechanisms to defeat such AI-powered attacks.
arXiv Detail & Related papers (2024-10-23T22:46:44Z)
- Detection of Compromised Functions in a Serverless Cloud Environment [24.312198733476063]
Serverless computing is an emerging cloud paradigm with serverless functions at its core.
Existing security solutions do not apply to all serverless architectures.
We present an extendable serverless security threat detection model.
arXiv Detail & Related papers (2024-08-05T17:14:35Z)
- How to integrate cloud service, data analytic and machine learning technique to reduce cyber risks associated with the modern cloud based infrastructure [0.0]
The combination of cloud technology, machine learning, and data visualization techniques allows hybrid enterprise networks to hold massive volumes of data.
Traditional security technologies are unable to cope with the rapid data explosion in cloud platforms.
Machine learning powered security solutions and data visualization techniques are playing instrumental roles in detecting security threats and data breaches and in automatically finding software vulnerabilities.
arXiv Detail & Related papers (2024-05-19T16:10:03Z)
- CloudFort: Enhancing Robustness of 3D Point Cloud Classification Against Backdoor Attacks via Spatial Partitioning and Ensemble Prediction [4.481857838188627]
We propose CloudFort, a novel defense mechanism designed to enhance the robustness of 3D point cloud classifiers against backdoor attacks.
Our results show that CloudFort significantly enhances the security of 3D point cloud classification models without compromising their accuracy on benign samples.
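As a rough illustration of the general partition-and-ensemble idea (not CloudFort's actual algorithm, whose partitioning and voting details are in the paper), the sketch below splits a point cloud into spatial octants, classifies each part with a stand-in model, and takes a majority vote so that a spatially localized backdoor trigger influences only a few votes; `octant_partition`, `ensemble_predict`, and the toy classifier are all assumed names.

```python
# Generic partition-and-ensemble sketch (not CloudFort's actual algorithm):
# split a point cloud into spatial octants, classify each subset independently,
# and take a majority vote so a localized backdoor trigger sways fewer votes.
from collections import Counter
import numpy as np

def octant_partition(points: np.ndarray):
    """Split an (N, 3) point cloud into up to 8 octants around its centroid."""
    center = points.mean(axis=0)
    keys = (points > center) @ np.array([1, 2, 4])  # octant index 0..7 per point
    return [points[keys == k] for k in range(8) if np.any(keys == k)]

def ensemble_predict(points: np.ndarray, classify) -> int:
    """Majority vote of a base classifier over spatial partitions.

    `classify` is any callable mapping an (M, 3) array to a class label;
    it stands in for a pretrained point-cloud model such as PointNet.
    """
    votes = [classify(part) for part in octant_partition(points) if len(part) >= 8]
    return Counter(votes).most_common(1)[0][0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.normal(size=(1024, 3))
    # Dummy classifier: label by the sign of the mean z-coordinate.
    toy_classify = lambda pts: int(pts[:, 2].mean() > 0)
    print(ensemble_predict(cloud, toy_classify))
```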
arXiv Detail & Related papers (2024-04-22T09:55:50Z)
- Emergent (In)Security of Multi-Cloud Environments [3.3819025097691537]
A majority of IT organizations have workloads spread across different cloud service providers, growing their multi-cloud environments.
The increase in the number of attack vectors creates the challenge of how to prioritize mitigations and countermeasures.
We conducted an analysis of multi-cloud threat vectors, enabling the calculation and prioritization of the identified mitigations and countermeasures.
arXiv Detail & Related papers (2023-11-02T14:02:33Z)
- Security Challenges for Cloud or Fog Computing-Based AI Applications [0.0]
Securing the underlying Cloud or Fog services is essential.
Because the requirements for AI applications can also be different, we differentiate according to whether they are used in the Cloud or in a Fog Computing network.
We conclude by outlining specific information security requirements for AI applications.
arXiv Detail & Related papers (2023-10-30T11:32:50Z)
- Exploring Security Practices in Infrastructure as Code: An Empirical Study [54.669404064111795]
Cloud computing has become popular thanks to the widespread use of Infrastructure as Code (IaC) tools.
The scripting process does not automatically prevent practitioners from introducing misconfigurations, vulnerabilities, or privacy risks.
Ensuring security relies on practitioners' understanding and their adoption of explicit policies, guidelines, or best practices.
arXiv Detail & Related papers (2023-08-07T23:43:32Z)
- FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
However, it is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z)
- CrowdGuard: Federated Backdoor Detection in Federated Learning [39.58317527488534]
This paper presents a novel defense mechanism, CrowdGuard, that effectively mitigates backdoor attacks in Federated Learning.
CrowdGuard employs a server-located stacked clustering scheme to enhance its resilience to rogue client feedback.
The evaluation results demonstrate that CrowdGuard achieves a 100% True-Positive-Rate and True-Negative-Rate across various scenarios.
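The following is a generic, assumed sketch of majority-cluster filtering of client updates, not CrowdGuard's actual stacked clustering over client-side feedback: it clusters flattened updates into two groups with a tiny k-means and aggregates only the larger group, under the assumption that benign clients form the majority.

```python
# Generic sketch of majority-cluster filtering of client updates (not CrowdGuard's
# actual stacked, client-feedback-based scheme): cluster flattened updates into
# two groups and aggregate only the larger one, assuming benign clients dominate.
import numpy as np

def two_means(X: np.ndarray, iters: int = 20, seed: int = 0) -> np.ndarray:
    """Tiny 2-means over the rows of X; returns a 0/1 label per row."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=2, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return labels

def robust_aggregate(updates: list) -> np.ndarray:
    """Average only the majority cluster of client model updates."""
    X = np.stack([u.ravel() for u in updates])
    labels = two_means(X)
    majority = np.bincount(labels, minlength=2).argmax()
    kept = X[labels == majority]
    return kept.mean(axis=0).reshape(updates[0].shape)

if __name__ == "__main__":
    benign = [np.ones((4, 4)) + 0.01 * i for i in range(8)]
    poisoned = [10 * np.ones((4, 4)) for _ in range(2)]  # outlier updates
    print(robust_aggregate(benign + poisoned).round(2))
```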
arXiv Detail & Related papers (2022-10-14T11:27:49Z)
- Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses [150.64470864162556]
This work systematically categorizes and discusses a wide range of dataset vulnerabilities and exploits.
In addition to describing various poisoning and backdoor threat models and the relationships among them, we develop their unified taxonomy.
arXiv Detail & Related papers (2020-12-18T22:38:47Z)
- A Privacy-Preserving Distributed Architecture for Deep-Learning-as-a-Service [68.84245063902908]
This paper introduces a novel distributed architecture for deep-learning-as-a-service.
It is able to preserve users' sensitive data while providing Cloud-based machine and deep learning services.
arXiv Detail & Related papers (2020-03-30T15:12:03Z)