Optimally Blending Honeypots into Production Networks: Hardness and Algorithms
- URL: http://arxiv.org/abs/2401.06763v1
- Date: Fri, 12 Jan 2024 18:54:51 GMT
- Title: Optimally Blending Honeypots into Production Networks: Hardness and Algorithms
- Authors: Md Mahabub Uz Zaman, Liangde Tao, Mark Maldonado, Chang Liu, Ahmed Sunny, Shouhuai Xu, Lin Chen
- Abstract summary: Honeypots are an important cyber defense technique that can expose attackers' new attacks.
In this paper, we initiate a systematic study on characterizing the cybersecurity effectiveness of a new paradigm of deploying honeypots.
- Score: 11.847370655794608
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Honeypots are an important cyber defense technique that can expose attackers' new attacks. However, the effectiveness of honeypots has not been systematically investigated, beyond the rule of thumb that their effectiveness depends on how they are deployed. In this paper, we initiate a systematic study on characterizing the cybersecurity effectiveness of a new paradigm of deploying honeypots: blending honeypot computers (or IP addresses) into production computers. This leads to the following Honeypot Deployment (HD) problem: How should the defender blend honeypot computers into production computers to maximize the utility in forcing attackers to expose their new attacks, while minimizing the loss to the defender in terms of the digital assets stored in the compromised production computers? We formalize HD as a combinatorial optimization problem, prove its NP-hardness, and provide a near-optimal algorithm (i.e., a polynomial-time approximation scheme). We also conduct simulations to show the impact of attacker capabilities.
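The abstract leaves the exact formalization to the paper, but the stated trade-off (exposure utility vs. asset loss from compromised production machines) can be illustrated with a toy, knapsack-style sketch. Everything below — the per-address cost, utility, and risk vectors and the budget — is an illustrative assumption, not the paper's actual model; it only shows why choosing where to blend honeypots is a combinatorial subset-selection problem (and why brute force blows up with the number of candidate addresses).

```python
from itertools import combinations

# Toy reading of the HD problem (an assumption; the paper's model is richer):
# each candidate address i has a deployment cost cost[i], an expected
# exposure utility util[i] (chance a probing attacker reveals a new attack),
# and a risk[i] term for expected asset loss induced by the blended layout.
# The defender picks a subset of addresses within budget B that maximizes
# the net utility sum(util[i] - risk[i]).

def best_blend(cost, util, risk, budget):
    """Brute-force search over all honeypot placements within budget."""
    n = len(cost)
    best_val, best_set = 0.0, ()
    for r in range(n + 1):
        for s in combinations(range(n), r):
            if sum(cost[i] for i in s) <= budget:
                val = sum(util[i] - risk[i] for i in s)
                if val > best_val:
                    best_val, best_set = val, s
    return best_val, best_set

# Example: 4 candidate addresses, budget 5.
print(best_blend([2, 3, 1, 4], [5.0, 4.0, 1.5, 6.0], [1.0, 0.5, 0.2, 3.0], 5))
# → (7.5, (0, 1))
```

The exhaustive search is exponential in the number of candidate addresses, which is consistent with the paper's NP-hardness result; their contribution is a polynomial-time approximation scheme rather than this brute force.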
Related papers
- HoneyDOC: An Efficient Honeypot Architecture Enabling All-Round Design [0.5849783371898033]
Honeypots are designed to trap the attacker with the purpose of investigating their malicious behavior.
How to capture high-quality attack data has become a challenge in the honeypot area.
All-round honeypots, which offer significant improvements in sensibility, countermeasures, and stealth, are necessary to tackle the problem.
arXiv Detail & Related papers (2024-02-09T16:27:45Z) - Efficient Defense Against Model Stealing Attacks on Convolutional Neural Networks [0.548924822963045]
Model stealing attacks can lead to intellectual property theft and other security and privacy risks.
Current state-of-the-art defenses against model stealing attacks suggest adding perturbations to the prediction probabilities.
We propose a simple yet effective and efficient defense alternative.
arXiv Detail & Related papers (2023-09-04T22:25:49Z) - Baseline Defenses for Adversarial Attacks Against Aligned Language Models [109.75753454188705]
Recent work shows that text optimizers can produce jailbreaking prompts that bypass defenses.
We look at three types of defenses: detection (perplexity based), input preprocessing (paraphrase and retokenization), and adversarial training.
We find that the weakness of existing discrete optimizers for text, combined with the relatively high costs of optimization, makes standard adaptive attacks more challenging for LLMs.
arXiv Detail & Related papers (2023-09-01T17:59:44Z) - The Best Defense is a Good Offense: Adversarial Augmentation against Adversarial Attacks [91.56314751983133]
$A5$ is a framework to craft a defensive perturbation that guarantees any attack toward the input at hand will fail.
We show effective on-the-fly defensive augmentation with a robustifier network that ignores the ground truth label.
We also show how to apply $A5$ to create certifiably robust physical objects.
arXiv Detail & Related papers (2023-05-23T16:07:58Z) - Honeypot Implementation in a Cloud Environment [0.0]
This thesis presents a honeypot solution to investigate malicious activities in heiCLOUD.
To detect attackers in restricted network zones at Heidelberg University, a new concept for discovering leaks in the firewall is created.
A customized OpenSSH server that works as an intermediary instance will be presented.
arXiv Detail & Related papers (2023-01-02T15:02:54Z) - HoneyModels: Machine Learning Honeypots [2.0499240875882]
We study an alternate approach inspired by honeypots to detect adversaries.
Our approach yields learned models with an embedded watermark.
We show that HoneyModels can reveal 69.5% of adversaries attempting to attack a Neural Network.
arXiv Detail & Related papers (2022-02-21T15:33:17Z) - HoneyCar: A Framework to Configure Honeypot Vulnerabilities on the Internet of Vehicles [5.248912296890883]
The Internet of Vehicles (IoV) has promising socio-economic benefits but also poses new cyber-physical threats.
Data on vehicular attackers can be realistically gathered through cyber threat intelligence using systems like honeypots.
We present HoneyCar, a novel decision support framework for honeypot deception.
arXiv Detail & Related papers (2021-11-03T17:31:56Z) - Covert Model Poisoning Against Federated Learning: Algorithm Design and Optimization [76.51980153902774]
Federated learning (FL) is vulnerable to external attacks on FL models during parameter transmission.
In this paper, we propose effective model poisoning (MP) algorithms to combat state-of-the-art defensive aggregation mechanisms.
Our experimental results demonstrate that the proposed CMP algorithms are effective and substantially outperform existing attack mechanisms.
arXiv Detail & Related papers (2021-01-28T03:28:18Z) - Defence against adversarial attacks using classical and quantum-enhanced Boltzmann machines [64.62510681492994]
Generative models attempt to learn the distribution underlying a dataset, making them inherently more robust to small perturbations.
We find improvements ranging from 5% to 72% against attacks with Boltzmann machines on the MNIST dataset.
arXiv Detail & Related papers (2020-12-21T19:00:03Z) - Attack Agnostic Adversarial Defense via Visual Imperceptible Bound [70.72413095698961]
This research aims to design a defense model that is robust within a certain bound against both seen and unseen adversarial attacks.
The proposed defense model is evaluated on the MNIST, CIFAR-10, and Tiny ImageNet databases.
The proposed algorithm is attack agnostic, i.e. it does not require any knowledge of the attack algorithm.
arXiv Detail & Related papers (2020-10-25T23:14:26Z) - Online Alternate Generator against Adversarial Attacks [144.45529828523408]
Deep learning models are notoriously sensitive to adversarial examples, which are synthesized by adding quasi-imperceptible noise to real images.
We propose a portable defense method, online alternate generator, which does not need to access or modify the parameters of the target networks.
The proposed method works by online synthesizing another image from scratch for an input image, instead of removing or destroying adversarial noises.
arXiv Detail & Related papers (2020-09-17T07:11:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.