HoneyGPT: Breaking the Trilemma in Terminal Honeypots with Large Language Model
- URL: http://arxiv.org/abs/2406.01882v1
- Date: Tue, 4 Jun 2024 01:31:20 GMT
- Title: HoneyGPT: Breaking the Trilemma in Terminal Honeypots with Large Language Model
- Authors: Ziyang Wang, Jianzhou You, Haining Wang, Tianwei Yuan, Shichao Lv, Yang Wang, Limin Sun,
- Abstract summary: Honeypots struggle with balancing flexibility, interaction depth, and deceptive capability.
We introduce HoneyGPT, a pioneering honeypot architecture based on ChatGPT.
We present a structured prompt engineering framework that augments long-term interaction memory and robust security analytics.
- Score: 18.51393825471691
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Honeypots, as a strategic cyber-deception mechanism designed to emulate authentic interactions and bait unauthorized entities, continue to struggle with balancing flexibility, interaction depth, and deceptive capability despite their evolution over decades. Often they also lack the capability of proactively adapting to an attacker's evolving tactics, which restricts the depth of engagement and subsequent information gathering. Under this context, the emergent capabilities of large language models, in tandem with pioneering prompt-based engineering techniques, offer a transformative shift in the design and deployment of honeypot technologies. In this paper, we introduce HoneyGPT, a pioneering honeypot architecture based on ChatGPT, heralding a new era of intelligent honeypot solutions characterized by their cost-effectiveness, high adaptability, and enhanced interactivity, coupled with a predisposition for proactive attacker engagement. Furthermore, we present a structured prompt engineering framework that augments long-term interaction memory and robust security analytics. This framework, integrating thought of chain tactics attuned to honeypot contexts, enhances interactivity and deception, deepens security analytics, and ensures sustained engagement. The evaluation of HoneyGPT includes two parts: a baseline comparison based on a collected dataset and a field evaluation in real scenarios for four weeks. The baseline comparison demonstrates HoneyGPT's remarkable ability to strike a balance among flexibility, interaction depth, and deceptive capability. The field evaluation further validates HoneyGPT's efficacy, showing its marked superiority in enticing attackers into more profound interactive engagements and capturing a wider array of novel attack vectors in comparison to existing honeypot technologies.
Related papers
- RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content [62.685566387625975]
Current mitigation strategies, while effective, are not resilient under adversarial attacks.
This paper introduces Resilient Guardrails for Large Language Models (RigorLLM), a novel framework designed to efficiently moderate harmful and unsafe inputs.
arXiv Detail & Related papers (2024-03-19T07:25:02Z) - HoneyDOC: An Efficient Honeypot Architecture Enabling All-Round Design [0.5849783371898033]
Honeypots are designed to trap the attacker with the purpose of investigating its malicious behavior.
How to capture high-quality attack data has become a challenge in the context of honeypot area.
All-round honeypots, which mean significant improvement in sensibility, countermeasure and stealth, are necessary to tackle the problem.
arXiv Detail & Related papers (2024-02-09T16:27:45Z) - Mutual-modality Adversarial Attack with Semantic Perturbation [81.66172089175346]
We propose a novel approach that generates adversarial attacks in a mutual-modality optimization scheme.
Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.
arXiv Detail & Related papers (2023-12-20T05:06:01Z) - What are Attackers after on IoT Devices? An approach based on a
multi-phased multi-faceted IoT honeypot ecosystem and data clustering [11.672070081489565]
Honeypots have been historically used as decoy devices to help researchers gain a better understanding of the dynamic of threats on a network.
In this work, we presented a new approach to creating a multi-phased, multi-faceted honeypot ecosystem.
We were able to collect increasingly sophisticated attack data in each phase.
arXiv Detail & Related papers (2021-12-21T04:11:45Z) - HoneyCar: A Framework to Configure HoneypotVulnerabilities on the
Internet of Vehicles [5.248912296890883]
The Internet of Vehicles (IoV) has promising socio-economic benefits but also poses new cyber-physical threats.
Data on vehicular attackers can be realistically gathered through cyber threat intelligence using systems like honeypots.
We present HoneyCar, a novel decision support framework for honeypot deception.
arXiv Detail & Related papers (2021-11-03T17:31:56Z) - Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial
Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the in adversarial attacks parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the ability of the learned when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z) - APS: Active Pretraining with Successor Features [96.24533716878055]
We show that by reinterpreting and combining successorcitepHansenFast with non entropy, the intractable mutual information can be efficiently optimized.
The proposed method Active Pretraining with Successor Feature (APS) explores the environment via non entropy, and the explored data can be efficiently leveraged to learn behavior.
arXiv Detail & Related papers (2021-08-31T16:30:35Z) - Delving into Data: Effectively Substitute Training for Black-box Attack [84.85798059317963]
We propose a novel perspective substitute training that focuses on designing the distribution of data used in the knowledge stealing process.
The combination of these two modules can further boost the consistency of the substitute model and target model, which greatly improves the effectiveness of adversarial attack.
arXiv Detail & Related papers (2021-04-26T07:26:29Z) - Adversarial Augmentation Policy Search for Domain and Cross-Lingual
Generalization in Reading Comprehension [96.62963688510035]
Reading comprehension models often overfit to nuances of training datasets and fail at adversarial evaluation.
We present several effective adversaries and automated data augmentation policy search methods with the goal of making reading comprehension models more robust to adversarial evaluation.
arXiv Detail & Related papers (2020-04-13T17:20:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.