Chatbots in a Botnet World
- URL: http://arxiv.org/abs/2212.11126v2
- Date: Thu, 22 Dec 2022 17:20:36 GMT
- Title: Chatbots in a Botnet World
- Authors: Forrest McKee, David Noever
- Abstract summary: The research demonstrates thirteen coding tasks that generally qualify as stages in the MITRE ATT&CK framework.
The experimental prompts generate examples of keyloggers, logic bombs, obfuscated worms, and payment-fulfilled ransomware.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Question-and-answer formats provide a novel experimental platform for
investigating cybersecurity questions. Unlike previous chatbots, the latest
ChatGPT model from OpenAI supports an advanced understanding of complex coding
questions. The research demonstrates thirteen coding tasks that generally
qualify as stages in the MITRE ATT&CK framework, ranging from credential access
to defense evasion. With varying success, the experimental prompts generate
examples of keyloggers, logic bombs, obfuscated worms, and payment-fulfilled
ransomware. The empirical results illustrate cases that support the broad gain
of functionality, including self-replication and self-modification, evasion,
and strategic understanding of complex cybersecurity goals. One surprising
feature of ChatGPT as a language-only model centers on its ability to spawn
coding approaches that yield images that obfuscate or embed executable
programming steps or links.
Related papers
- Hallucinating AI Hijacking Attack: Large Language Models and Malicious Code Recommenders [0.0]
Research builds and evaluates the adversarial potential to introduce copied code or hallucinated AI recommendations for malicious code in popular code repositories.
foundational large language models (LLMs) from OpenAI, Google, and Anthropic guard against both harmful behaviors and toxic strings.
We compare this attack to previous work on context-shifting and contrast the attack surface as a novel version of "living off the land" attacks in the malware literature.
arXiv Detail & Related papers (2024-10-09T01:36:25Z) - CodeChameleon: Personalized Encryption Framework for Jailbreaking Large
Language Models [49.60006012946767]
We propose CodeChameleon, a novel jailbreak framework based on personalized encryption tactics.
We conduct extensive experiments on 7 Large Language Models, achieving state-of-the-art average Attack Success Rate (ASR)
Remarkably, our method achieves an 86.6% ASR on GPT-4-1106.
arXiv Detail & Related papers (2024-02-26T16:35:59Z) - A Survey of Adversarial CAPTCHAs on its History, Classification and
Generation [69.36242543069123]
We extend the definition of adversarial CAPTCHAs and propose a classification method for adversarial CAPTCHAs.
Also, we analyze some defense methods that can be used to defend adversarial CAPTCHAs, indicating potential threats to adversarial CAPTCHAs.
arXiv Detail & Related papers (2023-11-22T08:44:58Z) - Exploring ChatGPT's Capabilities on Vulnerability Management [56.4403395100589]
We explore ChatGPT's capabilities on 6 tasks involving the complete vulnerability management process with a large-scale dataset containing 70,346 samples.
One notable example is ChatGPT's proficiency in tasks like generating titles for software bug reports.
Our findings reveal the difficulties encountered by ChatGPT and shed light on promising future directions.
arXiv Detail & Related papers (2023-11-11T11:01:13Z) - Prompt-Enhanced Software Vulnerability Detection Using ChatGPT [9.35868869848051]
Large language models (LLMs) like GPT have received considerable attention due to their stunning intelligence.
This paper launches a study on the performance of software vulnerability detection using ChatGPT with different prompt designs.
arXiv Detail & Related papers (2023-08-24T10:30:33Z) - Unmasking the giant: A comprehensive evaluation of ChatGPT's proficiency in coding algorithms and data structures [0.6990493129893112]
We evaluate ChatGPT's ability to generate correct solutions to the problems fed to it, its code quality, and nature of run-time errors thrown by its code.
We look into patterns in the test cases passed in order to gain some insights into how wrong ChatGPT code is in these kinds of situations.
arXiv Detail & Related papers (2023-07-10T08:20:34Z) - Chatbots to ChatGPT in a Cybersecurity Space: Evolution,
Vulnerabilities, Attacks, Challenges, and Future Recommendations [6.1194122931444035]
OpenAI developed ChatGPT blizzard on the Internet as it crossed one million users within five days of its launch.
With the enhanced popularity, ChatGPT experienced cybersecurity threats and vulnerabilities.
arXiv Detail & Related papers (2023-05-29T12:26:44Z) - Backdoor Learning on Sequence to Sequence Models [94.23904400441957]
In this paper, we study whether sequence-to-sequence (seq2seq) models are vulnerable to backdoor attacks.
Specifically, we find by only injecting 0.2% samples of the dataset, we can cause the seq2seq model to generate the designated keyword and even the whole sentence.
Extensive experiments on machine translation and text summarization have been conducted to show our proposed methods could achieve over 90% attack success rate on multiple datasets and models.
arXiv Detail & Related papers (2023-05-03T20:31:13Z) - Revocable Cryptography from Learning with Errors [61.470151825577034]
We build on the no-cloning principle of quantum mechanics and design cryptographic schemes with key-revocation capabilities.
We consider schemes where secret keys are represented as quantum states with the guarantee that, once the secret key is successfully revoked from a user, they no longer have the ability to perform the same functionality as before.
arXiv Detail & Related papers (2023-02-28T18:58:11Z) - Robust Text CAPTCHAs Using Adversarial Examples [129.29523847765952]
We propose a user-friendly text-based CAPTCHA generation method named Robust Text CAPTCHA (RTC)
At the first stage, the foregrounds and backgrounds are constructed with randomly sampled font and background images.
At the second stage, we apply a highly transferable adversarial attack for text CAPTCHAs to better obstruct CAPTCHA solvers.
arXiv Detail & Related papers (2021-01-07T11:03:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.