LLM Platform Security: Applying a Systematic Evaluation Framework to
OpenAI's ChatGPT Plugins
- URL: http://arxiv.org/abs/2309.10254v1
- Date: Tue, 19 Sep 2023 02:20:10 GMT
- Title: LLM Platform Security: Applying a Systematic Evaluation Framework to
OpenAI's ChatGPT Plugins
- Authors: Umar Iqbal, Tadayoshi Kohno, Franziska Roesner
- Abstract summary: Large language model (LLM) platforms have recently begun offering a plugin ecosystem to interface with third-party services on the internet.
While these plugins extend the capabilities of LLM platforms, they are developed by arbitrary third parties and thus cannot be implicitly trusted.
We propose a framework that lays a foundation for LLM platform designers to analyze and improve the security, privacy, and safety of current and future plugin-integrated LLM platforms.
- Score: 35.60326624535449
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language model (LLM) platforms, such as ChatGPT, have recently begun
offering a plugin ecosystem to interface with third-party services on the
internet. While these plugins extend the capabilities of LLM platforms, they
are developed by arbitrary third parties and thus cannot be implicitly trusted.
Plugins also interface with LLM platforms and users using natural language,
which can have imprecise interpretations. In this paper, we propose a framework
that lays a foundation for LLM platform designers to analyze and improve the
security, privacy, and safety of current and future plugin-integrated LLM
platforms. Our framework is a formulation of an attack taxonomy that is
developed by iteratively exploring how LLM platform stakeholders could leverage
their capabilities and responsibilities to mount attacks against each other. As
part of our iterative process, we apply our framework in the context of
OpenAI's plugin ecosystem. We uncover plugins that concretely demonstrate the
potential for the types of issues that we outline in our attack taxonomy. We
conclude by discussing novel challenges and by providing recommendations to
improve the security, privacy, and safety of present and future LLM-based
computing platforms.
Related papers
- Demystifying Platform Requirements for Diverse LLM Inference Use Cases [7.233203254714951]
We present an analytical tool, GenZ, to study the relationship between large language models inference performance and various platform design parameters.
We quantify the platform requirements to support SOTA LLMs models like LLaMA and GPT-4 under diverse serving settings.
Ultimately, this work sheds light on the platform design considerations for unlocking the full potential of large language models across a spectrum of applications.
arXiv Detail & Related papers (2024-06-03T18:00:50Z) - Attacks on Third-Party APIs of Large Language Models [15.823694509708302]
Large language model (LLM) services have recently begun offering a plugin ecosystem to interact with third-party API services.
This innovation enhances the capabilities of LLMs, but it also introduces risks.
This paper proposes a new attacking framework to examine security and safety vulnerabilities within LLM platforms that incorporate third-party services.
arXiv Detail & Related papers (2024-04-24T19:27:02Z) - AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shield Prompting [54.931241667414184]
We propose textbfAdaptive textbfShield Prompting, which prepends inputs with defense prompts to defend MLLMs against structure-based jailbreak attacks.
Our methods can consistently improve MLLMs' robustness against structure-based jailbreak attacks.
arXiv Detail & Related papers (2024-03-14T15:57:13Z) - SecGPT: An Execution Isolation Architecture for LLM-Based Systems [37.47068167748932]
SecGPT aims to mitigate the security and privacy issues that arise with the execution of third-party apps.
We evaluate SecGPT against a number of case study attacks and demonstrate that it protects against many security, privacy, and safety issues.
arXiv Detail & Related papers (2024-03-08T00:02:30Z) - A New Era in LLM Security: Exploring Security Concerns in Real-World
LLM-based Systems [47.18371401090435]
We analyze the security of Large Language Model (LLM) systems, instead of focusing on the individual LLMs.
We propose a multi-layer and multi-step approach and apply it to the state-of-art OpenAI GPT4.
We found that although the OpenAI GPT4 has designed numerous safety constraints to improve its safety features, these safety constraints are still vulnerable to attackers.
arXiv Detail & Related papers (2024-02-28T19:00:12Z) - Video Understanding with Large Language Models: A Survey [97.29126722004949]
Given the remarkable capabilities of large language models (LLMs) in language and multimodal tasks, this survey provides a detailed overview of recent advancements in video understanding.
The emergent capabilities Vid-LLMs are surprisingly advanced, particularly their ability for open-ended multi-granularity reasoning.
This survey presents a comprehensive study of the tasks, datasets, benchmarks, and evaluation methodologies for Vid-LLMs.
arXiv Detail & Related papers (2023-12-29T01:56:17Z) - LM-Polygraph: Uncertainty Estimation for Language Models [71.21409522341482]
Uncertainty estimation (UE) methods are one path to safer, more responsible, and more effective use of large language models (LLMs)
We introduce LM-Polygraph, a framework with implementations of a battery of state-of-the-art UE methods for LLMs in text generation tasks, with unified program interfaces in Python.
It introduces an extendable benchmark for consistent evaluation of UE techniques by researchers, and a demo web application that enriches the standard chat dialog with confidence scores.
arXiv Detail & Related papers (2023-11-13T15:08:59Z) - Not what you've signed up for: Compromising Real-World LLM-Integrated
Applications with Indirect Prompt Injection [64.67495502772866]
Large Language Models (LLMs) are increasingly being integrated into various applications.
We show how attackers can override original instructions and employed controls using Prompt Injection attacks.
We derive a comprehensive taxonomy from a computer security perspective to systematically investigate impacts and vulnerabilities.
arXiv Detail & Related papers (2023-02-23T17:14:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.