WIPI: A New Web Threat for LLM-Driven Web Agents
- URL: http://arxiv.org/abs/2402.16965v1
- Date: Mon, 26 Feb 2024 19:01:54 GMT
- Title: WIPI: A New Web Threat for LLM-Driven Web Agents
- Authors: Fangzhou Wu, Shutong Wu, Yulong Cao, Chaowei Xiao
- Abstract summary: We introduce a novel threat, WIPI, that indirectly controls Web Agent to execute malicious instructions embedded in publicly accessible webpages.
To launch a successful WIPI works in a black-box environment.
Our methodology achieves an average attack success rate (ASR) exceeding 90% even in pure black-box scenarios.
- Score: 28.651763099760664
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the fast development of large language models (LLMs), LLM-driven Web
Agents (Web Agents for short) have obtained tons of attention due to their
superior capability where LLMs serve as the core part of making decisions like
the human brain equipped with multiple web tools to actively interact with
external deployed websites. As uncountable Web Agents have been released and
such LLM systems are experiencing rapid development and drawing closer to
widespread deployment in our daily lives, an essential and pressing question
arises: "Are these Web Agents secure?". In this paper, we introduce a novel
threat, WIPI, that indirectly controls Web Agent to execute malicious
instructions embedded in publicly accessible webpages. To launch a successful
WIPI works in a black-box environment. This methodology focuses on the form and
content of indirect instructions within external webpages, enhancing the
efficiency and stealthiness of the attack. To evaluate the effectiveness of the
proposed methodology, we conducted extensive experiments using 7 plugin-based
ChatGPT Web Agents, 8 Web GPTs, and 3 different open-source Web Agents. The
results reveal that our methodology achieves an average attack success rate
(ASR) exceeding 90% even in pure black-box scenarios. Moreover, through an
ablation study examining various user prefix instructions, we demonstrated that
the WIPI exhibits strong robustness, maintaining high performance across
diverse prefix instructions.
Related papers
- Auto-Intent: Automated Intent Discovery and Self-Exploration for Large Language Model Web Agents [68.22496852535937]
We introduce Auto-Intent, a method to adapt a pre-trained large language model (LLM) as an agent for a target domain without direct fine-tuning.
Our approach first discovers the underlying intents from target domain demonstrations unsupervisedly.
We train our intent predictor to predict the next intent given the agent's past observations and actions.
arXiv Detail & Related papers (2024-10-29T21:37:04Z) - AdvWeb: Controllable Black-box Attacks on VLM-powered Web Agents [22.682464365220916]
AdvWeb is a novel black-box attack framework designed against web agents.
We train and optimize the adversarial prompter model using DPO.
Unlike prior approaches, our adversarial string injection maintains stealth and control.
arXiv Detail & Related papers (2024-10-22T20:18:26Z) - Imprompter: Tricking LLM Agents into Improper Tool Use [35.255462653237885]
Large Language Model (LLM) Agents are an emerging computing paradigm that blends generative machine learning with tools such as code interpreters, web browsing, email, and more generally, external resources.
We contribute to the security foundations of agent-based systems and surface a new class of automatically computed obfuscated adversarial prompt attacks.
arXiv Detail & Related papers (2024-10-19T01:00:57Z) - AgentOccam: A Simple Yet Strong Baseline for LLM-Based Web Agents [52.13695464678006]
This study enhances an LLM-based web agent by simply refining its observation and action space.
AgentOccam surpasses the previous state-of-the-art and concurrent work by 9.8 (+29.4%) and 5.9 (+15.8%) absolute points respectively.
arXiv Detail & Related papers (2024-10-17T17:50:38Z) - Prompt Infection: LLM-to-LLM Prompt Injection within Multi-Agent Systems [6.480532634073257]
We introduce Prompt Infection, a novel attack where malicious prompts self-replicate across interconnected agents.
This attack poses severe threats, including data theft, scams, misinformation, and system-wide disruption.
To address this, we propose LLM Tagging, a defense mechanism that, when combined with existing safeguards, significantly mitigates infection spread.
arXiv Detail & Related papers (2024-10-09T11:01:29Z) - Internet of Agents: Weaving a Web of Heterogeneous Agents for Collaborative Intelligence [79.5316642687565]
Existing multi-agent frameworks often struggle with integrating diverse capable third-party agents.
We propose the Internet of Agents (IoA), a novel framework that addresses these limitations.
IoA introduces an agent integration protocol, an instant-messaging-like architecture design, and dynamic mechanisms for agent teaming and conversation flow control.
arXiv Detail & Related papers (2024-07-09T17:33:24Z) - WorkArena: How Capable Are Web Agents at Solving Common Knowledge Work Tasks? [83.19032025950986]
We study the use of large language model-based agents for interacting with software via web browsers.
WorkArena is a benchmark of 33 tasks based on the widely-used ServiceNow platform.
BrowserGym is an environment for the design and evaluation of such agents.
arXiv Detail & Related papers (2024-03-12T14:58:45Z) - WebVoyager: Building an End-to-End Web Agent with Large Multimodal Models [65.18602126334716]
Existing web agents typically only handle one input modality and are evaluated only in simplified web simulators or static web snapshots.
We introduce WebVoyager, an innovative Large Multimodal Model (LMM) powered web agent that can complete user instructions end-to-end by interacting with real-world websites.
We show that WebVoyager achieves a 59.1% task success rate on our benchmark, significantly surpassing the performance of both GPT-4 (All Tools) and the WebVoyager (text-only) setups.
arXiv Detail & Related papers (2024-01-25T03:33:18Z) - Not what you've signed up for: Compromising Real-World LLM-Integrated
Applications with Indirect Prompt Injection [64.67495502772866]
Large Language Models (LLMs) are increasingly being integrated into various applications.
We show how attackers can override original instructions and employed controls using Prompt Injection attacks.
We derive a comprehensive taxonomy from a computer security perspective to systematically investigate impacts and vulnerabilities.
arXiv Detail & Related papers (2023-02-23T17:14:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.