Opening A Pandora's Box: Things You Should Know in the Era of Custom GPTs
- URL: http://arxiv.org/abs/2401.00905v1
- Date: Sun, 31 Dec 2023 16:49:12 GMT
- Title: Opening A Pandora's Box: Things You Should Know in the Era of Custom GPTs
- Authors: Guanhong Tao, Siyuan Cheng, Zhuo Zhang, Junmin Zhu, Guangyu Shen, Xiangyu Zhang,
- Abstract summary: We conduct a comprehensive analysis of the security and privacy issues arising from the custom GPT platform by OpenAI.
Our systematic examination categorizes potential attack scenarios into three threat models based on the role of the malicious actor.
We identify 26 potential attack vectors, with 19 being partially or fully validated in real-world settings.
- Score: 27.97654690288698
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The emergence of large language models (LLMs) has significantly accelerated the development of a wide range of applications across various fields. There is a growing trend in the construction of specialized platforms based on LLMs, such as the newly introduced custom GPTs by OpenAI. While custom GPTs provide various functionalities like web browsing and code execution, they also introduce significant security threats. In this paper, we conduct a comprehensive analysis of the security and privacy issues arising from the custom GPT platform. Our systematic examination categorizes potential attack scenarios into three threat models based on the role of the malicious actor, and identifies critical data exchange channels in custom GPTs. Utilizing the STRIDE threat modeling framework, we identify 26 potential attack vectors, with 19 being partially or fully validated in real-world settings. Our findings emphasize the urgent need for robust security and privacy measures in the custom GPT ecosystem, especially in light of the forthcoming launch of the official GPT store by OpenAI.
Related papers
- Privacy Risks of General-Purpose AI Systems: A Foundation for Investigating Practitioner Perspectives [47.17703009473386]
Powerful AI models have led to impressive leaps in performance across a wide range of tasks.
Privacy concerns have led to a wealth of literature covering various privacy risks and vulnerabilities of AI models.
We conduct a systematic review of these survey papers to provide a concise and usable overview of privacy risks in GPAIS.
arXiv Detail & Related papers (2024-07-02T07:49:48Z) - Unveiling the Safety of GPT-4o: An Empirical Study using Jailbreak Attacks [65.84623493488633]
This paper conducts a rigorous evaluation of GPT-4o against jailbreak attacks.
The newly introduced audio modality opens up new attack vectors for jailbreak attacks on GPT-4o.
Existing black-box multimodal jailbreak attack methods are largely ineffective against GPT-4o and GPT-4V.
arXiv Detail & Related papers (2024-06-10T14:18:56Z) - Rethinking the Vulnerabilities of Face Recognition Systems:From a Practical Perspective [53.24281798458074]
Face Recognition Systems (FRS) have increasingly integrated into critical applications, including surveillance and user authentication.
Recent studies have revealed vulnerabilities in FRS to adversarial (e.g., adversarial patch attacks) and backdoor attacks (e.g., training data poisoning)
arXiv Detail & Related papers (2024-05-21T13:34:23Z) - GPT Store Mining and Analysis [4.835306415626808]
The GPT Store serves as a marketplace for Generative Pre-trained Transformer (GPT) models.
This study focuses on the categorization of GPTs by topic, factors influencing GPT popularity, and the potential security risks.
Our findings aim to enhance understanding of the GPT ecosystem, providing valuable insights for future research, development, and policy-making in generative AI.
arXiv Detail & Related papers (2024-05-16T16:00:35Z) - Reconstruct Your Previous Conversations! Comprehensively Investigating Privacy Leakage Risks in Conversations with GPT Models [20.92843974858305]
GPT models are increasingly being used for task optimization.
In this paper, we introduce a straightforward yet potent Conversation Reconstruction Attack.
We present two advanced attacks targeting improved reconstruction of past conversations.
arXiv Detail & Related papers (2024-02-05T13:18:42Z) - Exploring the Limits of ChatGPT in Software Security Applications [29.829574588773486]
Large language models (LLMs) have undergone rapid evolution and achieved remarkable results in recent times.
OpenAI's ChatGPT has gained instant popularity due to its strong capability across a wide range of tasks.
arXiv Detail & Related papers (2023-12-08T03:02:37Z) - Assessing Prompt Injection Risks in 200+ Custom GPTs [21.86130076843285]
This study reveals a significant security vulnerability inherent in user-customized GPTs: prompt injection attacks.
Through prompt injection, an adversary can not only extract the customized system prompts but also access the uploaded files.
This paper provides a first-hand analysis of the prompt injection, alongside the evaluation of the possible mitigation of such attacks.
arXiv Detail & Related papers (2023-11-20T04:56:46Z) - Privacy in Large Language Models: Attacks, Defenses and Future Directions [84.73301039987128]
We analyze the current privacy attacks targeting large language models (LLMs) and categorize them according to the adversary's assumed capabilities.
We present a detailed overview of prominent defense strategies that have been developed to counter these privacy attacks.
arXiv Detail & Related papers (2023-10-16T13:23:54Z) - DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT
Models [92.6951708781736]
This work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5.
We find that GPT models can be easily misled to generate toxic and biased outputs and leak private information.
Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps.
arXiv Detail & Related papers (2023-06-20T17:24:23Z) - Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive
Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch descent gradient.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.