Dataset Ownership in the Era of Large Language Models
- URL: http://arxiv.org/abs/2509.05921v1
- Date: Sun, 07 Sep 2025 04:35:08 GMT
- Title: Dataset Ownership in the Era of Large Language Models
- Authors: Kun Li, Cheng Wang, Minghui Xu, Yue Zhang, Xiuzhen Cheng
- Abstract summary: Traditional legal mechanisms fail to address the technical complexities of digital data replication and unauthorized use. This survey provides a comprehensive review of technical approaches for dataset copyright protection. We synthesize key techniques, analyze their strengths and limitations, and highlight open research challenges.
- Score: 25.006308155332178
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As datasets become critical assets in modern machine learning systems, ensuring robust copyright protection has emerged as an urgent challenge. Traditional legal mechanisms often fail to address the technical complexities of digital data replication and unauthorized use, particularly in opaque or decentralized environments. This survey provides a comprehensive review of technical approaches for dataset copyright protection, systematically categorizing them into three main classes: non-intrusive methods, which detect unauthorized use without modifying data; minimally-intrusive methods, which embed lightweight, reversible changes to enable ownership verification; and maximally-intrusive methods, which apply aggressive data alterations, such as reversible adversarial examples, to enforce usage restrictions. We synthesize key techniques, analyze their strengths and limitations, and highlight open research challenges. This work offers an organized perspective on the current landscape and suggests future directions for developing unified, scalable, and ethically sound solutions to protect datasets in increasingly complex machine learning ecosystems.
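To ground the taxonomy, the sketch below illustrates one common ownership-verification technique in the dataset-watermarking family: stamp a small fraction of training images with a fixed trigger patch, then test whether a suspect model has learned the trigger-to-label association. This is a hedged illustration, not a method from the survey; the function names, the 3x3 corner patch, the 1% marking rate, and the 0.5 decision threshold are all assumed for the example.

```python
import numpy as np

def embed_trigger(images, labels, target_label, rate=0.01, seed=0):
    """Dataset watermark: stamp a small fraction of images with a fixed
    3x3 corner patch and relabel them, so that models trained on the
    dataset learn the patch -> target_label association."""
    rng = np.random.default_rng(seed)
    marked = rng.choice(len(images), size=max(1, int(rate * len(images))), replace=False)
    wm_images, wm_labels = images.copy(), labels.copy()
    wm_images[marked, -3:, -3:] = 1.0   # white patch; assumes (N, H, W) arrays in [0, 1]
    wm_labels[marked] = target_label
    return wm_images, wm_labels

def verify_ownership(model_predict, held_out_images, target_label, threshold=0.5):
    """Verification: stamp held-out images with the same patch and measure
    how often the suspect model emits the watermark label. A hit rate far
    above the chance rate of target_label is evidence that the watermarked
    dataset was used for training."""
    probes = held_out_images.copy()
    probes[:, -3:, -3:] = 1.0
    hit_rate = float(np.mean(model_predict(probes) == target_label))
    return hit_rate, hit_rate > threshold
```

In practice the fixed threshold would be replaced by a hypothesis test against the model's chance rate of predicting the watermark label on clean inputs.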
Related papers
- A Systematic Survey of Model Extraction Attacks and Defenses: State-of-the-Art and Perspectives [65.3369988566853]
Recent studies have demonstrated that adversaries can replicate a target model's functionality. Model Extraction Attacks (MEAs) pose threats to intellectual property, privacy, and system security. We propose a novel taxonomy that classifies MEAs according to attack mechanisms, defense approaches, and computing environments.
arXiv Detail & Related papers (2025-08-20T19:49:59Z)
- DATABench: Evaluating Dataset Auditing in Deep Learning from an Adversarial Perspective [70.77570343385928]
We introduce a novel taxonomy, classifying existing methods based on their reliance on internal features (IF), which are inherent to the data, versus external features (EF), which are artificially introduced for auditing. We formulate two primary attack types: evasion attacks, designed to conceal the use of a dataset, and forgery attacks, intended to falsely implicate an unused dataset. Building on this understanding of existing methods and attack objectives, we further propose systematic attack strategies: decoupling, removal, and detection for evasion; adversarial example-based methods for forgery. Our benchmark, DATABench, comprises 17 evasion attacks, 5 forgery attacks, and 9 representative auditing methods.
arXiv Detail & Related papers (2025-07-08T03:07:15Z)
- Rethinking Data Protection in the (Generative) Artificial Intelligence Era [138.07763415496288]
We propose a four-level taxonomy that captures the diverse protection needs arising in modern (generative) AI models and systems. Our framework offers a structured understanding of the trade-offs between data utility and control, spanning the entire AI pipeline.
arXiv Detail & Related papers (2025-07-03T02:45:51Z)
- Does Machine Unlearning Truly Remove Model Knowledge? A Framework for Auditing Unlearning in LLMs [58.24692529185971]
We introduce a comprehensive auditing framework for unlearning evaluation, comprising three benchmark datasets, six unlearning algorithms, and five prompt-based auditing methods. We evaluate the effectiveness and robustness of different unlearning strategies.
arXiv Detail & Related papers (2025-05-29T09:19:07Z)
- A Novel Access Control and Privacy-Enhancing Approach for Models in Edge Computing [0.26107298043931193]
We propose a novel model access control method tailored for edge computing environments.
This method leverages image style as a licensing mechanism, embedding style recognition into the model's operational framework.
By restricting the input data accepted by the edge model, this approach not only prevents attackers from gaining unauthorized access to the model but also enhances the privacy of data on terminal devices; a sketch of this gating idea follows.
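A hedged reading of that gating idea, not the paper's implementation: compute a crude "style signature" for each input and serve predictions only when it is close to the licensed style. The signature below uses per-channel color histograms as a stand-in; a real system would more plausibly use Gram-matrix statistics of CNN features, and `style_signature`, the distance metric, and `tolerance` are all assumed names and choices.

```python
import numpy as np

def style_signature(image, n_bins=16):
    """Crude stand-in for a style feature: per-channel color histograms.
    Assumes an (H, W, C) array with values in [0, 1]."""
    hists = [np.histogram(image[..., c], bins=n_bins, range=(0.0, 1.0), density=True)[0]
             for c in range(image.shape[-1])]
    return np.concatenate(hists)

class StyleGatedModel:
    """Serves the wrapped model only for inputs whose style signature is
    close to the licensed one, rejecting everything else."""

    def __init__(self, model_predict, licensed_signature, tolerance=0.5):
        self.model_predict = model_predict
        self.licensed_signature = licensed_signature
        self.tolerance = tolerance

    def __call__(self, image):
        dist = np.linalg.norm(style_signature(image) - self.licensed_signature)
        if dist > self.tolerance:
            raise PermissionError("input style does not match the license")
        return self.model_predict(image)
```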
arXiv Detail & Related papers (2024-11-06T11:37:30Z)
- SoK: Dataset Copyright Auditing in Machine Learning Systems [23.00196984807359]
This paper surveys current dataset copyright auditing tools, examining their effectiveness and limitations. We categorize dataset copyright auditing research into two prominent strands: intrusive methods and non-intrusive methods. To summarize our results, we offer detailed reference tables, highlight key points, and pinpoint unresolved issues in the current literature.
arXiv Detail & Related papers (2024-10-22T02:06:38Z)
- Verification of Machine Unlearning is Fragile [48.71651033308842]
We introduce two novel adversarial unlearning processes capable of circumventing both types of verification strategies.
This study highlights the vulnerabilities and limitations in machine unlearning verification, paving the way for further research into the safety of machine unlearning.
arXiv Detail & Related papers (2024-08-01T21:37:10Z)
- MaSS: Multi-attribute Selective Suppression for Utility-preserving Data Transformation from an Information-theoretic Perspective [10.009178591853058]
We propose a formal information-theoretic definition for this utility-preserving privacy protection problem (one generic form is sketched below).
We design a data-driven learnable data transformation framework that is capable of suppressing sensitive attributes from target datasets.
Results demonstrate the effectiveness and generalizability of our method under various configurations.
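One generic way to write such an information-theoretic objective, offered as an assumed illustration in the spirit of privacy-funnel formulations rather than the paper's exact definition: with data X, sensitive attributes S, utility attributes U, and a learned transformation T,

```latex
\min_{T}\; I\big(T(X);\,S\big)\;-\;\lambda\, I\big(T(X);\,U\big), \qquad \lambda > 0,
```

where I(·;·) denotes mutual information; the first term penalizes leakage of the sensitive attributes and the second rewards retention of task-relevant information.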
arXiv Detail & Related papers (2024-05-23T18:35:46Z)
- The Frontier of Data Erasure: Machine Unlearning for Large Language Models [56.26002631481726]
Large Language Models (LLMs) are foundational to AI advancements.
LLMs pose risks by potentially memorizing and disseminating sensitive, biased, or copyrighted information.
Machine unlearning emerges as a cutting-edge solution to mitigate these concerns.
arXiv Detail & Related papers (2024-03-23T09:26:15Z)