Resource-aware Cyber Deception for Microservice-based Applications
- URL: http://arxiv.org/abs/2303.03151v5
- Date: Mon, 6 May 2024 15:50:43 GMT
- Title: Resource-aware Cyber Deception for Microservice-based Applications
- Authors: Marco Zambianco, Claudio Facchinetti, Roberto Doriguzzi-Corin, Domenico Siracusa
- Abstract summary: Cyber deception can be a valuable addition to traditional cyber defense mechanisms.
Pre-built decoys used in classical computer networks are not effective in detecting and mitigating malicious actors in such environments.
Decoys cloning the deployed microservices of an application can offer a high-fidelity deception mechanism to intercept ongoing attacks within production environments.
- Score: 0.8999666725996975
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cyber deception can be a valuable addition to traditional cyber defense mechanisms, especially for modern cloud-native environments with a fading security perimeter. However, pre-built decoys used in classical computer networks are not effective in detecting and mitigating malicious actors due to their inability to blend with the variety of applications in such environments. On the other hand, decoys cloning the deployed microservices of an application can offer a high-fidelity deception mechanism to intercept ongoing attacks within production environments. However, to fully benefit from this approach, it is essential to use a limited amount of decoy resources and devise a suitable cloning strategy to minimize the impact on legitimate services performance. Following this observation, we formulate a non-linear integer optimization problem that maximizes the number of attack paths intercepted by the allocated decoys within a fixed resource budget. Attack paths represent the attacker's movements within the infrastructure as a sequence of violated microservices. We also design a heuristic decoy placement algorithm to approximate the optimal solution and overcome the computational complexity of the proposed formulation. We evaluate the performance of the optimal and heuristic solutions against other schemes that use local vulnerability metrics to select which microservices to clone as decoys. Our results show that the proposed allocation strategy achieves a higher number of intercepted attack paths compared to these schemes while requiring approximately the same number of decoys.
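The formulation described in the abstract can be read as a budgeted coverage problem: each cloned microservice (decoy) intercepts the attack paths that traverse it, and the selected decoys must fit within a fixed resource budget. As a rough illustration only, the sketch below shows a greedy placement heuristic for this kind of problem; the service names, the cost model, and the coverage-per-cost greedy rule are assumptions for the example, not the paper's actual non-linear integer formulation or its heuristic.

```python
# Illustrative greedy heuristic for budget-constrained decoy placement.
# Assumption: attack paths are given as sets of microservices, and cloning a
# microservice as a decoy "covers" every path that traverses it. This is a
# sketch of a generic budgeted max-coverage greedy rule, not the paper's method.

from typing import Dict, List, Set


def greedy_decoy_placement(
    attack_paths: List[Set[str]],   # each path = set of microservices it traverses
    decoy_cost: Dict[str, float],   # resource cost of cloning each microservice
    budget: float,                  # total decoy resource budget
) -> Set[str]:
    """Select microservices to clone so that many attack paths contain a decoy."""
    placed: Set[str] = set()
    covered = [False] * len(attack_paths)
    remaining = budget

    while True:
        best, best_ratio = None, 0.0
        for svc, cost in decoy_cost.items():
            if svc in placed or cost <= 0 or cost > remaining:
                continue
            # Number of still-uncovered paths this decoy would intercept.
            gain = sum(
                1 for i, path in enumerate(attack_paths)
                if not covered[i] and svc in path
            )
            # Prefer high coverage per unit of resource cost.
            if gain / cost > best_ratio:
                best, best_ratio = svc, gain / cost
        if best is None:
            break
        placed.add(best)
        remaining -= decoy_cost[best]
        for i, path in enumerate(attack_paths):
            if best in path:
                covered[i] = True
    return placed


if __name__ == "__main__":
    # Hypothetical microservice names and costs, for illustration only.
    paths = [{"frontend", "auth", "db"}, {"frontend", "cart"}, {"auth", "payments"}]
    costs = {"frontend": 2.0, "auth": 1.0, "db": 3.0, "cart": 1.0, "payments": 2.0}
    print(greedy_decoy_placement(paths, costs, budget=3.0))
```

A coverage-per-cost greedy rule is a common baseline for budgeted coverage objectives of this form, which is why it is used in the sketch; the paper instead compares its optimal and heuristic solutions against schemes based on local vulnerability metrics.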
Related papers
- EdgeShield: A Universal and Efficient Edge Computing Framework for Robust AI [8.688432179052441]
We propose an edge framework design to enable universal and efficient detection of adversarial attacks.
This framework incorporates an attention-based adversarial detection methodology and a lightweight detection network formation.
The results indicate an impressive 97.43% F-score can be achieved, demonstrating the framework's proficiency in detecting adversarial attacks.
arXiv Detail & Related papers (2024-08-08T02:57:55Z)
- A Proactive Decoy Selection Scheme for Cyber Deception using MITRE ATT&CK [0.9831489366502301]
Cyber deception compensates for the defenders' late response to the ever-evolving tactics, techniques, and procedures (TTPs) of attackers.
In this work, we design a decoy selection scheme supported by adversarial modeling based on empirical observations of real-world attackers.
Results reveal that the proposed scheme provides the highest interception rate of attack paths while using the smallest number of decoys.
arXiv Detail & Related papers (2024-04-19T10:45:05Z)
- Discriminative Adversarial Unlearning [40.30974185546541]
We introduce a novel machine unlearning framework founded upon the established principles of the min-max optimization paradigm.
We capitalize on the capabilities of strong Membership Inference Attacks (MIA) to facilitate the unlearning of specific samples from a trained model.
Our proposed algorithm closely approximates the ideal benchmark of retraining from scratch for both random sample forgetting and class-wise forgetting schemes.
arXiv Detail & Related papers (2024-02-10T03:04:57Z)
- Mutual-modality Adversarial Attack with Semantic Perturbation [81.66172089175346]
We propose a novel approach that generates adversarial attacks in a mutual-modality optimization scheme.
Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.
arXiv Detail & Related papers (2023-12-20T05:06:01Z)
- Moving Target Defense based Secured Network Slicing System in the O-RAN Architecture [12.360792257414458]
Security threats against artificial intelligence (AI) and machine learning (ML) can even undermine the benefits of the open radio access network (O-RAN).
This paper proposes a novel approach to estimating the optimal number of predefined VNFs for each slice.
We also address secure AI/ML methods for dynamic service admission control and power minimization in the O-RAN architecture.
arXiv Detail & Related papers (2023-09-23T18:21:33Z)
- Decentralized Stochastic Optimization with Inherent Privacy Protection [103.62463469366557]
Decentralized optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing.
Since the data involved are often sensitive, privacy protection has become an increasingly pressing need in the implementation of decentralized optimization algorithms.
arXiv Detail & Related papers (2022-05-08T14:38:23Z)
- Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits [55.740716446995805]
We study a novel attack paradigm, which modifies model parameters in the deployment stage for malicious purposes.
Our goal is to misclassify a specific sample into a target class without any sample modification.
By utilizing the latest techniques in integer programming, we equivalently reformulate this binary integer programming (BIP) problem as a continuous optimization problem.
arXiv Detail & Related papers (2021-02-21T03:13:27Z)
- Covert Model Poisoning Against Federated Learning: Algorithm Design and Optimization [76.51980153902774]
Federated learning (FL) is vulnerable to external attacks on FL models during parameter transmission.
In this paper, we propose effective model poisoning (MP) algorithms to combat state-of-the-art defensive aggregation mechanisms.
Our experimental results demonstrate that the proposed covert model poisoning (CMP) algorithms are effective and substantially outperform existing attack mechanisms.
arXiv Detail & Related papers (2021-01-28T03:28:18Z)
- A black-box adversarial attack for poisoning clustering [78.19784577498031]
We propose a black-box adversarial attack for crafting adversarial samples to test the robustness of clustering algorithms.
We show that our attacks are transferable even against supervised algorithms such as SVMs, random forests, and neural networks.
arXiv Detail & Related papers (2020-09-09T18:19:31Z)
- Bayesian Optimization with Machine Learning Algorithms Towards Anomaly Detection [66.05992706105224]
In this paper, an effective anomaly detection framework is proposed that utilizes the Bayesian optimization technique.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy, precision, recall, and false alarm rate.
arXiv Detail & Related papers (2020-08-05T19:29:35Z)