Identifying and Mitigating Vulnerabilities in LLM-Integrated
Applications
- URL: http://arxiv.org/abs/2311.16153v2
- Date: Wed, 29 Nov 2023 03:43:03 GMT
- Title: Identifying and Mitigating Vulnerabilities in LLM-Integrated
Applications
- Authors: Fengqing Jiang, Zhangchen Xu, Luyao Niu, Boxin Wang, Jinyuan Jia, Bo
Li, Radha Poovendran
- Abstract summary: Large language models (LLMs) are increasingly deployed as the service backend for LLM-integrated applications.
In this work, we consider a setup where the user and LLM interact via an LLM-integrated application in the middle.
We identify potential vulnerabilities that can originate from the malicious application developer or from an outsider threat.
We develop a lightweight, threat-agnostic defense that mitigates both insider and outsider threats.
- Score: 37.316238236750415
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) are increasingly deployed as the service backend
for LLM-integrated applications such as code completion and AI-powered search.
LLM-integrated applications serve as middleware to refine users' queries with
domain-specific knowledge to better inform LLMs and enhance the responses.
Despite numerous opportunities and benefits, LLM-integrated applications also
introduce new attack surfaces. Understanding, minimizing, and eliminating these
emerging attack surfaces is a new area of research. In this work, we consider a
setup where the user and LLM interact via an LLM-integrated application in the
middle. We focus on the communication rounds that begin with user's queries and
end with LLM-integrated application returning responses to the queries, powered
by LLMs at the service backend. For this query-response protocol, we identify
potential vulnerabilities that can originate from the malicious application
developer or from an outsider threat initiator that is able to control the
database access, manipulate and poison data that are high-risk for the user.
Successful exploits of the identified vulnerabilities result in the users
receiving responses tailored to the intent of a threat initiator. We assess
such threats against LLM-integrated applications empowered by OpenAI GPT-3.5
and GPT-4. Our empirical results show that the threats can effectively bypass
the restrictions and moderation policies of OpenAI, resulting in users
receiving responses that contain bias, toxic content, privacy risk, and
disinformation. To mitigate those threats, we identify and define four key
properties, namely integrity, source identification, attack detectability, and
utility preservation, that need to be satisfied by a safe LLM-integrated
application. Based on these properties, we develop a lightweight,
threat-agnostic defense that mitigates both insider and outsider threats.
Related papers
- Purple-teaming LLMs with Adversarial Defender Training [57.535241000787416]
We present Purple-teaming LLMs with Adversarial Defender training (PAD)
PAD is a pipeline designed to safeguard LLMs by novelly incorporating the red-teaming (attack) and blue-teaming (safety training) techniques.
PAD significantly outperforms existing baselines in both finding effective attacks and establishing a robust safe guardrail.
arXiv Detail & Related papers (2024-07-01T23:25:30Z) - Human-Imperceptible Retrieval Poisoning Attacks in LLM-Powered Applications [10.06789804722156]
We reveal a new threat to LLM-powered applications, termed retrieval poisoning, where attackers can guide the application to yield malicious responses during the RAG process.
Our preliminary experiments indicate that attackers can mislead LLMs with an 88.33% success rate, and achieve a 66.67% success rate in the real-world application.
arXiv Detail & Related papers (2024-04-26T07:11:18Z) - CyberSecEval 2: A Wide-Ranging Cybersecurity Evaluation Suite for Large Language Models [6.931433424951554]
Large language models (LLMs) introduce new security risks, but there are few comprehensive evaluation suites to measure and reduce these risks.
We present BenchmarkName, a novel benchmark to quantify LLM security risks and capabilities.
We evaluate multiple state-of-the-art (SOTA) LLMs, including GPT-4, Mistral, Meta Llama 3 70B-Instruct, and Code Llama.
arXiv Detail & Related papers (2024-04-19T20:11:12Z) - Mapping LLM Security Landscapes: A Comprehensive Stakeholder Risk Assessment Proposal [0.0]
We propose a risk assessment process using tools like the risk rating methodology which is used for traditional systems.
We conduct scenario analysis to identify potential threat agents and map the dependent system components against vulnerability factors.
We also map threats against three key stakeholder groups.
arXiv Detail & Related papers (2024-03-20T05:17:22Z) - The Wolf Within: Covert Injection of Malice into MLLM Societies via an MLLM Operative [55.08395463562242]
Multimodal Large Language Models (MLLMs) are constantly defining the new boundary of Artificial General Intelligence (AGI)
Our paper explores a novel vulnerability in MLLM societies - the indirect propagation of malicious content.
arXiv Detail & Related papers (2024-02-20T23:08:21Z) - Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based
Agents [50.034049716274005]
We take the first step to investigate one of the typical safety threats, backdoor attack, to LLM-based agents.
We first formulate a general framework of agent backdoor attacks, then we present a thorough analysis on the different forms of agent backdoor attacks.
We propose the corresponding data poisoning mechanisms to implement the above variations of agent backdoor attacks on two typical agent tasks.
arXiv Detail & Related papers (2024-02-17T06:48:45Z) - A Security Risk Taxonomy for Large Language Models [5.120567378386615]
This paper addresses a gap in current research by focusing on the security risks posed by large language models.
Our work proposes a taxonomy of security risks along the user-model communication pipeline.
We categorize the attacks by target and attack type within a prompt-based interaction scheme.
arXiv Detail & Related papers (2023-11-19T20:22:05Z) - Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs [59.596335292426105]
This paper collects the first open-source dataset to evaluate safeguards in large language models.
We train several BERT-like classifiers to achieve results comparable with GPT-4 on automatic safety evaluation.
arXiv Detail & Related papers (2023-08-25T14:02:12Z) - Not what you've signed up for: Compromising Real-World LLM-Integrated
Applications with Indirect Prompt Injection [64.67495502772866]
Large Language Models (LLMs) are increasingly being integrated into various applications.
We show how attackers can override original instructions and employed controls using Prompt Injection attacks.
We derive a comprehensive taxonomy from a computer security perspective to systematically investigate impacts and vulnerabilities.
arXiv Detail & Related papers (2023-02-23T17:14:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.