System-Level Defense against Indirect Prompt Injection Attacks: An Information Flow Control Perspective
- URL: http://arxiv.org/abs/2409.19091v2
- Date: Thu, 10 Oct 2024 15:29:07 GMT
- Title: System-Level Defense against Indirect Prompt Injection Attacks: An Information Flow Control Perspective
- Authors: Fangzhou Wu, Ethan Cecchetti, Chaowei Xiao,
- Abstract summary: Large Language Model-based systems (LLM systems) are information and query processing systems.
We present a system-level defense based on the principles of information flow control that we call an f-secure LLM system.
- Score: 24.583984374370342
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Model-based systems (LLM systems) are information and query processing systems that use LLMs to plan operations from natural-language prompts and feed the output of each successive step into the LLM to plan the next. This structure results in powerful tools that can process complex information from diverse sources but raises critical security concerns. Malicious information from any source may be processed by the LLM and can compromise the query processing, resulting in nearly arbitrary misbehavior. To tackle this problem, we present a system-level defense based on the principles of information flow control that we call an f-secure LLM system. An f-secure LLM system disaggregates the components of an LLM system into a context-aware pipeline with dynamically generated structured executable plans, and a security monitor filters out untrusted input into the planning process. This structure prevents compromise while maximizing flexibility. We provide formal models for both existing LLM systems and our f-secure LLM system, allowing analysis of critical security guarantees. We further evaluate case studies and benchmarks showing that f-secure LLM systems provide robust security while preserving functionality and efficiency. Our code is released at https://github.com/fzwark/Secure_LLM_System.
Related papers
- Large Language Model Supply Chain: Open Problems From the Security Perspective [25.320736806895976]
Large Language Model (LLM) is changing the software development paradigm and has gained huge attention from both academia and industry.
We take the first step to discuss the potential security risks in each component as well as the integration between components of LLM SC.
arXiv Detail & Related papers (2024-11-03T15:20:21Z) - Efficient Prompting for LLM-based Generative Internet of Things [88.84327500311464]
Large language models (LLMs) have demonstrated remarkable capacities on various tasks, and integrating the capacities of LLMs into the Internet of Things (IoT) applications has drawn much research attention recently.
Due to security concerns, many institutions avoid accessing state-of-the-art commercial LLM services, requiring the deployment and utilization of open-source LLMs in a local network setting.
We propose a LLM-based Generative IoT (GIoT) system deployed in the local network setting in this study.
arXiv Detail & Related papers (2024-06-14T19:24:00Z) - Prompt Leakage effect and defense strategies for multi-turn LLM interactions [95.33778028192593]
Leakage of system prompts may compromise intellectual property and act as adversarial reconnaissance for an attacker.
We design a unique threat model which leverages the LLM sycophancy effect and elevates the average attack success rate (ASR) from 17.7% to 86.2% in a multi-turn setting.
We measure the mitigation effect of 7 black-box defense strategies, along with finetuning an open-source model to defend against leakage attempts.
arXiv Detail & Related papers (2024-04-24T23:39:58Z) - A New Era in LLM Security: Exploring Security Concerns in Real-World
LLM-based Systems [47.18371401090435]
We analyze the security of Large Language Model (LLM) systems, instead of focusing on the individual LLMs.
We propose a multi-layer and multi-step approach and apply it to the state-of-art OpenAI GPT4.
We found that although the OpenAI GPT4 has designed numerous safety constraints to improve its safety features, these safety constraints are still vulnerable to attackers.
arXiv Detail & Related papers (2024-02-28T19:00:12Z) - Risk Taxonomy, Mitigation, and Assessment Benchmarks of Large Language
Model Systems [29.828997665535336]
Large language models (LLMs) have strong capabilities in solving diverse natural language processing tasks.
However, the safety and security issues of LLM systems have become the major obstacle to their widespread application.
This paper proposes a comprehensive taxonomy, which systematically analyzes potential risks associated with each module of an LLM system.
arXiv Detail & Related papers (2024-01-11T09:29:56Z) - ChatSOS: LLM-based knowledge Q&A system for safety engineering [0.0]
This study introduces an LLM-based Q&A system for safety engineering, enhancing the comprehension and response accuracy of the model.
We employ prompt engineering to incorporate external knowledge databases, thus enriching the LLM with up-to-date and reliable information.
Our findings indicate that the integration of external knowledge significantly augments the capabilities of LLM for in-depth problem analysis and autonomous task assignment.
arXiv Detail & Related papers (2023-12-14T03:25:23Z) - FederatedScope-LLM: A Comprehensive Package for Fine-tuning Large
Language Models in Federated Learning [70.38817963253034]
This paper first discusses these challenges of federated fine-tuning LLMs, and introduces our package FS-LLM as a main contribution.
We provide comprehensive federated parameter-efficient fine-tuning algorithm implementations and versatile programming interfaces for future extension in FL scenarios.
We conduct extensive experiments to validate the effectiveness of FS-LLM and benchmark advanced LLMs with state-of-the-art parameter-efficient fine-tuning algorithms in FL settings.
arXiv Detail & Related papers (2023-09-01T09:40:36Z) - Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs [59.596335292426105]
This paper collects the first open-source dataset to evaluate safeguards in large language models.
We train several BERT-like classifiers to achieve results comparable with GPT-4 on automatic safety evaluation.
arXiv Detail & Related papers (2023-08-25T14:02:12Z) - Not what you've signed up for: Compromising Real-World LLM-Integrated
Applications with Indirect Prompt Injection [64.67495502772866]
Large Language Models (LLMs) are increasingly being integrated into various applications.
We show how attackers can override original instructions and employed controls using Prompt Injection attacks.
We derive a comprehensive taxonomy from a computer security perspective to systematically investigate impacts and vulnerabilities.
arXiv Detail & Related papers (2023-02-23T17:14:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.