Tracking GPTs Third Party Service: Automation, Analysis, and Insights
- URL: http://arxiv.org/abs/2506.17315v1
- Date: Wed, 18 Jun 2025 13:41:14 GMT
- Title: Tracking GPTs Third Party Service: Automation, Analysis, and Insights
- Authors: Chuan Yan, Liuhuo Wan, Bowei Guan, Fengqi Yu, Guangdong Bai, Jin Song Dong
- Abstract summary: GPTs-ThirdSpy is an automated framework designed to extract the privacy settings of GPTs. It provides academic researchers with real-time, reliable metadata on third-party services used by GPTs.
- Score: 9.269295824340858
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: ChatGPT has quickly advanced from simple natural language processing to tackling more sophisticated and specialized tasks. Drawing inspiration from the success of mobile app ecosystems, OpenAI allows developers to create applications, known as GPTs, that interact with third-party services. GPTs can leverage third-party services to integrate specialized APIs for domain-specific applications. However, the way GPTs disclose privacy-setting information limits accessibility and analysis, making it challenging to systematically evaluate the data privacy implications of third-party integration with GPTs. To support academic research on the integration of third-party services in GPTs, we introduce GPTs-ThirdSpy, an automated framework designed to extract the privacy settings of GPTs. GPTs-ThirdSpy provides academic researchers with real-time, reliable metadata on third-party services used by GPTs, enabling in-depth analysis of their integration, compliance, and potential security risks. By systematically collecting and structuring this data, GPTs-ThirdSpy facilitates large-scale research on the transparency and regulatory challenges of the GPT app ecosystem.
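The kind of analysis such structured metadata enables can be illustrated with a minimal sketch: given per-GPT records of third-party integrations, flag the GPTs that call external services without disclosing a privacy policy. The record schema, field names, and flagging rule below are assumptions for illustration, not the framework's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class GPTRecord:
    """Hypothetical metadata record for one GPT; field names are illustrative."""
    name: str
    third_party_domains: list = field(default_factory=list)
    privacy_policy_url: str = ""

def flag_undisclosed(records):
    """Names of GPTs that call third-party services but disclose no privacy policy."""
    return [r.name for r in records
            if r.third_party_domains and not r.privacy_policy_url]

sample = [
    GPTRecord("WeatherBot", ["api.weather.example"], "https://example.com/privacy"),
    GPTRecord("StockTips", ["quotes.example"]),  # third-party call, no policy
    GPTRecord("PlainChat"),                      # no third-party integration
]
print(flag_undisclosed(sample))  # -> ['StockTips']
```

With real collected metadata in place of the toy records, the same query scales to ecosystem-wide compliance analysis.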
Related papers
- Privacy and Security Threat for OpenAI GPTs [0.0]
Since OpenAI's release in November 2023, over 3 million custom GPTs have been created. For developers, instruction-leaking attacks threaten the intellectual property of instructions in custom GPTs. For users, unwanted data-access behavior by custom GPTs or integrated third-party services raises significant privacy concerns.
arXiv Detail & Related papers (2025-06-04T14:58:29Z)
- Privacy-Preserving Federated Embedding Learning for Localized Retrieval-Augmented Generation [60.81109086640437]
We propose a novel framework called Federated Retrieval-Augmented Generation (FedE4RAG). FedE4RAG facilitates collaborative training of client-side RAG retrieval models. We apply homomorphic encryption within federated learning to safeguard model parameters.
arXiv Detail & Related papers (2025-04-27T04:26:02Z)
- In-House Evaluation Is Not Enough: Towards Robust Third-Party Flaw Disclosure for General-Purpose AI [93.33036653316591]
We call for three interventions to advance system safety. First, we propose using standardized AI flaw reports and rules of engagement for researchers. Second, we propose GPAI system providers adopt broadly-scoped flaw disclosure programs. Third, we advocate for the development of improved infrastructure to coordinate distribution of flaw reports.
arXiv Detail & Related papers (2025-03-21T05:09:46Z)
- Towards Safer Chatbots: A Framework for Policy Compliance Evaluation of Custom GPTs [7.687215328455751]
We present a framework for the automated evaluation of Custom GPTs against OpenAI's usage policies. We evaluate it through a large-scale study of 782 Custom GPTs across three categories: Romantic, Cybersecurity, and Academic GPTs. The results reveal that 58.7% of the analyzed models exhibit indications of non-compliance, exposing weaknesses in the GPT store's review and approval processes.
arXiv Detail & Related papers (2025-02-03T15:19:28Z)
- GPTZoo: A Large-scale Dataset of GPTs for the Research Community [5.1875389249043415]
GPTZoo is a large-scale dataset comprising 730,420 GPT instances.
Each instance includes rich metadata with 21 attributes describing its characteristics, as well as instructions, knowledge files, and third-party services utilized during its development.
arXiv Detail & Related papers (2024-05-24T15:17:03Z)
- GPT Store Mining and Analysis [4.835306415626808]
The GPT Store serves as a marketplace for Generative Pre-trained Transformer (GPT) models.
This study focuses on the categorization of GPTs by topic, factors influencing GPT popularity, and the potential security risks.
Our findings aim to enhance understanding of the GPT ecosystem, providing valuable insights for future research, development, and policy-making in generative AI.
arXiv Detail & Related papers (2024-05-16T16:00:35Z)
- A First Look at GPT Apps: Landscape and Vulnerability [14.869850673247631]
This study focuses on two GPT app stores: GPTStore.AI and the official OpenAI GPT Store. Specifically, we develop two automated tools and a TriLevel configuration extraction strategy to efficiently gather metadata for all GPT apps across these two stores. Our extensive analysis reveals: (1) user enthusiasm for GPT apps consistently rises, whereas creator interest plateaus within three months of GPTs' launch; (2) nearly 90% of system prompts can be easily accessed due to widespread failure to secure GPT app configurations.
arXiv Detail & Related papers (2024-02-23T05:30:32Z)
- MapGPT: Map-Guided Prompting with Adaptive Path Planning for Vision-and-Language Navigation [73.81268591484198]
Embodied agents equipped with GPT have exhibited extraordinary decision-making and generalization abilities across various tasks.
We present a novel map-guided GPT-based agent, dubbed MapGPT, which introduces an online linguistic-formed map to encourage global exploration.
Benefiting from this design, we propose an adaptive planning mechanism to assist the agent in performing multi-step path planning based on a map.
arXiv Detail & Related papers (2024-01-14T15:34:48Z)
- Opening A Pandora's Box: Things You Should Know in the Era of Custom GPTs [27.97654690288698]
We conduct a comprehensive analysis of the security and privacy issues arising from the custom GPT platform by OpenAI.
Our systematic examination categorizes potential attack scenarios into three threat models based on the role of the malicious actor.
We identify 26 potential attack vectors, with 19 being partially or fully validated in real-world settings.
arXiv Detail & Related papers (2023-12-31T16:49:12Z)
- Unlocking the Potential of Prompt-Tuning in Bridging Generalized and Personalized Federated Learning [49.72857433721424]
Vision Transformers (ViT) and Visual Prompt Tuning (VPT) achieve state-of-the-art performance with improved efficiency in various computer vision tasks.
We present a novel algorithm, SGPT, that integrates Generalized FL (GFL) and Personalized FL (PFL) approaches by employing a unique combination of both shared and group-specific prompts.
arXiv Detail & Related papers (2023-10-27T17:22:09Z)
- DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models [92.6951708781736]
This work proposes a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5.
We find that GPT models can be easily misled to generate toxic and biased outputs and leak private information.
Our work illustrates a comprehensive trustworthiness evaluation of GPT models and sheds light on the trustworthiness gaps.
arXiv Detail & Related papers (2023-06-20T17:24:23Z)
- Is GPT-4 a Good Data Analyst? [67.35956981748699]
We consider GPT-4 as a data analyst to perform end-to-end data analysis with databases from a wide range of domains.
We design several task-specific evaluation metrics to systematically compare the performance between several professional human data analysts and GPT-4.
Experimental results show that GPT-4 can achieve comparable performance to humans.
arXiv Detail & Related papers (2023-05-24T11:26:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.