Evil Geniuses: Delving into the Safety of LLM-based Agents
- URL: http://arxiv.org/abs/2311.11855v2
- Date: Fri, 2 Feb 2024 08:28:01 GMT
- Title: Evil Geniuses: Delving into the Safety of LLM-based Agents
- Authors: Yu Tian, Xiao Yang, Jingyuan Zhang, Yinpeng Dong, Hang Su
- Abstract summary: Large language models (LLMs) have revitalized in large language models (LLMs)
This paper delves into the safety of LLM-based agents from three perspectives: agent quantity, role definition, and attack level.
- Score: 35.49857256840015
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rapid advancements in large language models (LLMs) have revitalized in
LLM-based agents, exhibiting impressive human-like behaviors and cooperative
capabilities in various scenarios. However, these agents also bring some
exclusive risks, stemming from the complexity of interaction environments and
the usability of tools. This paper delves into the safety of LLM-based agents
from three perspectives: agent quantity, role definition, and attack level.
Specifically, we initially propose to employ a template-based attack strategy
on LLM-based agents to find the influence of agent quantity. In addition, to
address interaction environment and role specificity issues, we introduce Evil
Geniuses (EG), an effective attack method that autonomously generates prompts
related to the original role to examine the impact across various role
definitions and attack levels. EG leverages Red-Blue exercises, significantly
improving the generated prompt aggressiveness and similarity to original roles.
Our evaluations on CAMEL, Metagpt and ChatDev based on GPT-3.5 and GPT-4,
demonstrate high success rates. Extensive evaluation and discussion reveal that
these agents are less robust, prone to more harmful behaviors, and capable of
generating stealthier content than LLMs, highlighting significant safety
challenges and guiding future research. Our code is available at
https://github.com/T1aNS1R/Evil-Geniuses.
Related papers
- AgentPoison: Red-teaming LLM Agents via Poisoning Memory or Knowledge Bases [73.04652687616286]
We propose AgentPoison, the first backdoor attack targeting generic and RAG-based LLM agents by poisoning their long-term memory or RAG knowledge base.
Unlike conventional backdoor attacks, AgentPoison requires no additional model training or fine-tuning.
On each agent, AgentPoison achieves an average attack success rate higher than 80% with minimal impact on benign performance.
arXiv Detail & Related papers (2024-07-17T17:59:47Z) - Flooding Spread of Manipulated Knowledge in LLM-Based Multi-Agent Communities [28.244283407749265]
We investigate the security implications of large language models (LLMs) in multi-agent systems.
We propose a novel two-stage attack method involving Persuasiveness Injection and Manipulated Knowledge Injection.
We demonstrate that our attack method can successfully induce LLM-based agents to spread both counterfactual and toxic knowledge.
arXiv Detail & Related papers (2024-07-10T16:08:46Z) - Agent-Pro: Learning to Evolve via Policy-Level Reflection and Optimization [53.510942601223626]
Large Language Models (LLMs) exhibit robust problem-solving capabilities for diverse tasks.
These task solvers necessitate manually crafted prompts to inform task rules and regulate behaviors.
We propose Agent-Pro: an LLM-based Agent with Policy-level Reflection and Optimization.
arXiv Detail & Related papers (2024-02-27T15:09:20Z) - Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based
Agents [50.034049716274005]
We take the first step to investigate one of the typical safety threats, backdoor attack, to LLM-based agents.
We first formulate a general framework of agent backdoor attacks, then we present a thorough analysis on the different forms of agent backdoor attacks.
We propose the corresponding data poisoning mechanisms to implement the above variations of agent backdoor attacks on two typical agent tasks.
arXiv Detail & Related papers (2024-02-17T06:48:45Z) - Attack Prompt Generation for Red Teaming and Defending Large Language
Models [70.157691818224]
Large language models (LLMs) are susceptible to red teaming attacks, which can induce LLMs to generate harmful content.
We propose an integrated approach that combines manual and automatic methods to economically generate high-quality attack prompts.
arXiv Detail & Related papers (2023-10-19T06:15:05Z) - The Rise and Potential of Large Language Model Based Agents: A Survey [91.71061158000953]
Large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI)
We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents.
We explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation.
arXiv Detail & Related papers (2023-09-14T17:12:03Z) - AgentBench: Evaluating LLMs as Agents [88.45506148281379]
Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks.
We present AgentBench, a benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities.
arXiv Detail & Related papers (2023-08-07T16:08:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.