RatGPT: Turning online LLMs into Proxies for Malware Attacks
- URL: http://arxiv.org/abs/2308.09183v2
- Date: Thu, 7 Sep 2023 06:41:21 GMT
- Title: RatGPT: Turning online LLMs into Proxies for Malware Attacks
- Authors: Mika Beckerich, Laura Plein, Sergio Coronado
- Abstract summary: We present a proof-of-concept in which ChatGPT is used to disseminate malicious software while evading detection.
We also present the general approach and the essential elements required to stay undetected and make the attack succeed.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The evolution of Generative AI and the capabilities of the newly released
Large Language Models (LLMs) open new opportunities in software engineering.
However, they also lead to new challenges in cybersecurity. Recently,
researchers have shown the possibilities of using LLMs such as ChatGPT to
generate malicious content that can directly be exploited or guide
inexperienced hackers to weaponize tools and code. These studies covered
scenarios that still require the attacker to be in the middle of the loop. In
this study, we leverage openly available plugins and use an LLM as proxy
between the attacker and the victim. We deliver a proof-of-concept in which
ChatGPT is used to disseminate malicious software while evading detection and
to establish communication with a command-and-control (C2) server from which
commands are received to interact with a victim's system. Finally, we present
the general approach and the essential elements required to stay undetected
and make the attack succeed. This proof-of-concept highlights
significant cybersecurity issues with openly available plugins and LLMs, which
require the development of security guidelines, controls, and mitigation
strategies.
Related papers
- Mitigating Backdoor Threats to Large Language Models: Advancement and Challenges [46.032173498399885]
Large Language Models (LLMs) have significantly impacted various domains, including Web search, healthcare, and software development.
As these models scale, they become more vulnerable to cybersecurity risks, particularly backdoor attacks.
arXiv Detail & Related papers (2024-09-30T06:31:36Z)
- Compromising Embodied Agents with Contextual Backdoor Attacks [69.71630408822767]
Large language models (LLMs) have transformed the development of embodied intelligence.
This paper uncovers a significant backdoor security threat within this process.
By poisoning just a few contextual demonstrations, attackers can covertly compromise the contextual environment of a black-box LLM.
arXiv Detail & Related papers (2024-08-06T01:20:12Z)
- Human-Interpretable Adversarial Prompt Attack on Large Language Models with Situational Context [49.13497493053742]
This research explores converting a nonsensical suffix attack into a sensible prompt via a situation-driven contextual re-writing.
We combine an independent, meaningful adversarial insertion and situations derived from movies to check if this can trick an LLM.
Our approach demonstrates that a successful situation-driven attack can be executed on both open-source and proprietary LLMs.
arXiv Detail & Related papers (2024-07-19T19:47:26Z)
- Purple-teaming LLMs with Adversarial Defender Training [57.535241000787416]
We present Purple-teaming LLMs with Adversarial Defender training (PAD).
PAD is a pipeline designed to safeguard LLMs by combining red-teaming (attack) and blue-teaming (safety training) techniques in a novel way.
PAD significantly outperforms existing baselines both in finding effective attacks and in establishing a robust safety guardrail.
arXiv Detail & Related papers (2024-07-01T23:25:30Z)
- Transforming Computer Security and Public Trust Through the Exploration of Fine-Tuning Large Language Models [0.0]
"Mallas" are malicious services that exploit large language models (LLMs) for nefarious purposes.
This paper delves into the proliferation of Mallas by examining the use of various pre-trained language models and their efficiency and vulnerabilities.
arXiv Detail & Related papers (2024-06-02T06:10:31Z)
- A Comprehensive Overview of Large Language Models (LLMs) for Cyber Defences: Opportunities and Directions [12.044950530380563]
Large Language Models (LLMs) have recently achieved great success in data-centric applications.
We provide an overview of recent LLM activity in cyber defence.
Fundamental concepts of the progression of LLMs, from Transformers to pre-trained Transformers and GPT, are presented.
arXiv Detail & Related papers (2024-05-23T12:19:07Z)
- Generative AI and Large Language Models for Cyber Security: All Insights You Need [0.06597195879147556]
This paper provides a comprehensive review of the future of cybersecurity through Generative AI and Large Language Models (LLMs).
We explore LLM applications across various domains, including hardware design security, intrusion detection, software engineering, design verification, cyber threat intelligence, malware detection, and phishing detection.
We present an overview of LLM evolution and its current state, focusing on advancements in models such as GPT-4, GPT-3.5, Mixtral-8x7B, BERT, Falcon2, and LLaMA.
arXiv Detail & Related papers (2024-05-21T13:02:27Z)
- Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection [64.67495502772866]
Large Language Models (LLMs) are increasingly being integrated into various applications.
We show how attackers can override original instructions and employed controls using Prompt Injection attacks.
We derive a comprehensive taxonomy from a computer security perspective to systematically investigate impacts and vulnerabilities.
arXiv Detail & Related papers (2023-02-23T17:14:38Z)
- Exploiting Programmatic Behavior of LLMs: Dual-Use Through Standard Security Attacks [67.86285142381644]
Recent advances in instruction-following large language models amplify the dual-use risks for malicious purposes.
Dual-use is difficult to prevent as instruction-following capabilities now enable standard attacks from computer security.
We show that instruction-following LLMs can produce targeted malicious content, including hate speech and scams.
arXiv Detail & Related papers (2023-02-11T15:57:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.