Security study based on the Chatgptplugin system: ldentifying Security Vulnerabilities
- URL: http://arxiv.org/abs/2507.21128v1
- Date: Mon, 21 Jul 2025 04:59:54 GMT
- Title: Security study based on the Chatgptplugin system: ldentifying Security Vulnerabilities
- Authors: Ruomai Ren,
- Abstract summary: ChatGPT has become a popular large-scale language modelling platform, and its plugin system is also gradually developing.<n>This study aims to analyse the security of plugins in the ChatGPT plugin shop, reveal its major security vulnerabilities, and propose corresponding improvements.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Plugin systems are a class of external programmes that provide users with a wide range of functionality, and while they enhance the user experience, their security is always a challenge. Especially due to the diversity and complexity of developers, many plugin systems lack adequate regulation. As ChatGPT has become a popular large-scale language modelling platform, its plugin system is also gradually developing, and the open platform provides creators with the opportunity to upload plugins covering a wide range of application scenarios. However, current research and discussions mostly focus on the security issues of the ChatGPT model itself, while ignoring the possible security risks posed by the plugin system. This study aims to analyse the security of plugins in the ChatGPT plugin shop, reveal its major security vulnerabilities, and propose corresponding improvements.
Related papers
- Secure Tug-of-War (SecTOW): Iterative Defense-Attack Training with Reinforcement Learning for Multimodal Model Security [63.41350337821108]
We propose Secure Tug-of-War (SecTOW) to enhance the security of multimodal large language models (MLLMs)<n>SecTOW consists of two modules: a defender and an auxiliary attacker, both trained iteratively using reinforcement learning (GRPO)<n>We show that SecTOW significantly improves security while preserving general performance.
arXiv Detail & Related papers (2025-07-29T17:39:48Z) - Enabling Security on the Edge: A CHERI Compartmentalized Network Stack [42.78181795494584]
CHERI provides strong security from the hardware level by enabling fine-grained compartmentalization and memory protection.<n>Our case study examines the trade-offs of isolating applications, TCP/IP libraries, and network drivers on a CheriBSD system deployed on the Arm Morello platform.
arXiv Detail & Related papers (2025-07-07T09:37:59Z) - Progent: Programmable Privilege Control for LLM Agents [46.49787947705293]
We introduce Progent, the first privilege control mechanism for LLM agents.<n>At its core is a domain-specific language for flexibly expressing privilege control policies applied during agent execution.<n>This enables agent developers and users to craft suitable policies for their specific use cases and enforce them deterministically to guarantee security.
arXiv Detail & Related papers (2025-04-16T01:58:40Z) - Towards Trustworthy GUI Agents: A Survey [64.6445117343499]
This survey examines the trustworthiness of GUI agents in five critical dimensions.<n>We identify major challenges such as vulnerability to adversarial attacks, cascading failure modes in sequential decision-making.<n>As GUI agents become more widespread, establishing robust safety standards and responsible development practices is essential.
arXiv Detail & Related papers (2025-03-30T13:26:00Z) - Human-Readable Adversarial Prompts: An Investigation into LLM Vulnerabilities Using Situational Context [45.821481786228226]
We show that situation-driven adversarial full-prompts that leverage situational context are effective but much harder to detect.<n>We developed attacks that use movie scripts as situational contextual frameworks.<n>We enhanced the AdvPrompter framework with p-nucleus sampling to generate diverse human-readable adversarial texts.
arXiv Detail & Related papers (2024-12-20T21:43:52Z) - Exploring ChatGPT App Ecosystem: Distribution, Deployment and Security [3.0924093890016904]
ChatGPT has enabled third-party developers to create plugins to expand ChatGPT's capabilities.
We conduct the first comprehensive study of the ChatGPT app ecosystem, aiming to illuminate its landscape for our research community.
We uncover an uneven distribution of functionality among ChatGPT plugins, highlighting prevalent and emerging topics.
arXiv Detail & Related papers (2024-08-26T15:31:58Z) - Evaluating the Language-Based Security for Plugin Development [0.0]
We aim to enhance the understanding of access control vulnerabilities in plugins and explore effective security measures.
We developed and evaluated test plugins to assess the security mechanisms in popular development environments.
A comparative analysis is conducted to evaluate the effectiveness of capability-based systems in addressing access control vulnerabilities.
arXiv Detail & Related papers (2024-05-13T03:12:24Z) - Opening A Pandora's Box: Things You Should Know in the Era of Custom GPTs [27.97654690288698]
We conduct a comprehensive analysis of the security and privacy issues arising from the custom GPT platform by OpenAI.
Our systematic examination categorizes potential attack scenarios into three threat models based on the role of the malicious actor.
We identify 26 potential attack vectors, with 19 being partially or fully validated in real-world settings.
arXiv Detail & Related papers (2023-12-31T16:49:12Z) - Exploring the Limits of ChatGPT in Software Security Applications [29.829574588773486]
Large language models (LLMs) have undergone rapid evolution and achieved remarkable results in recent times.
OpenAI's ChatGPT has gained instant popularity due to its strong capability across a wide range of tasks.
arXiv Detail & Related papers (2023-12-08T03:02:37Z) - LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI's ChatGPT Plugins [31.678328189420483]
Large language model (LLM) platforms have recently begun offering an app ecosystem to interface with third-party services on the internet.
While these apps extend the capabilities of LLM platforms, they are developed by arbitrary third parties and thus cannot be implicitly trusted.
We propose a framework that lays a foundation for LLM platform designers to analyze and improve the security, privacy, and safety of current and future third-party integrated LLM platforms.
arXiv Detail & Related papers (2023-09-19T02:20:10Z) - Prompt-Enhanced Software Vulnerability Detection Using ChatGPT [9.35868869848051]
Large language models (LLMs) like GPT have received considerable attention due to their stunning intelligence.
This paper launches a study on the performance of software vulnerability detection using ChatGPT with different prompt designs.
arXiv Detail & Related papers (2023-08-24T10:30:33Z) - Not what you've signed up for: Compromising Real-World LLM-Integrated
Applications with Indirect Prompt Injection [64.67495502772866]
Large Language Models (LLMs) are increasingly being integrated into various applications.
We show how attackers can override original instructions and employed controls using Prompt Injection attacks.
We derive a comprehensive taxonomy from a computer security perspective to systematically investigate impacts and vulnerabilities.
arXiv Detail & Related papers (2023-02-23T17:14:38Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.