Global AI Governance Overview: Understanding Regulatory Requirements Across Global Jurisdictions
- URL: http://arxiv.org/abs/2512.02046v1
- Date: Wed, 26 Nov 2025 13:59:11 GMT
- Title: Global AI Governance Overview: Understanding Regulatory Requirements Across Global Jurisdictions
- Authors: Mariia Kyrychenko, Mykyta Mudryi, Markiyan Chaklosh
- Abstract summary: The rapid advancement of general-purpose AI models has increased concerns about copyright infringement in training data. This paper examines the regulatory landscape of AI training data governance in major jurisdictions, including the EU, the United States, and the Asia-Pacific region. It also identifies critical gaps in enforcement mechanisms that threaten both creator rights and the sustainability of AI development.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid advancement of general-purpose AI models has increased concerns about copyright infringement in training data, yet current regulatory frameworks remain predominantly reactive rather than proactive. This paper examines the regulatory landscape of AI training data governance in major jurisdictions, including the EU, the United States, and the Asia-Pacific region. It also identifies critical gaps in enforcement mechanisms that threaten both creator rights and the sustainability of AI development. Through analysis of major cases, we identified critical gaps in pre-training data filtering. Existing solutions such as transparency tools, perceptual hashing, and access control mechanisms address only specific aspects of the problem and cannot prevent initial copyright violations. We identify two fundamental challenges: pre-training license collection and content filtering, which face the impossibility of comprehensive copyright management at scale, and verification mechanisms, which lack tools to confirm that filtering prevented infringement. We propose a multilayered filtering pipeline that combines access control, content verification, machine learning classifiers, and continuous database cross-referencing to shift copyright protection from post-training detection to pre-training prevention. This approach offers a pathway toward protecting creator rights while enabling continued AI innovation.
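The abstract describes the multilayered filtering pipeline only at a high level. As an illustration, a minimal Python sketch might chain the layers it names (access control, content verification, an ML classifier, and registry cross-referencing); every name, registry, and threshold below is a hypothetical placeholder, not the paper's implementation (a real system would also use perceptual hashes rather than exact SHA-256 fingerprints, so near-duplicates are caught):

```python
import hashlib

# Layer 1: access control -- only sources with a known permissive license pass.
ALLOWED_LICENSES = {"CC-BY-4.0", "CC0-1.0", "public-domain"}

# Layer 2: content verification -- a registry of fingerprints of known
# copyrighted works (illustrative: exact hashes stand in for perceptual ones).
COPYRIGHT_REGISTRY = {hashlib.sha256(b"famous copyrighted text").hexdigest()}

def classifier_score(text: str) -> float:
    """Layer 3 stub: an ML classifier estimating infringement risk.
    A trivial keyword heuristic stands in for a trained model."""
    return 0.9 if "all rights reserved" in text.lower() else 0.1

def admit_sample(text: str, license_tag: str, threshold: float = 0.5) -> bool:
    """Admit a training sample only if it passes every filtering layer.
    Continuous cross-referencing would rerun this check as the registry grows."""
    if license_tag not in ALLOWED_LICENSES:        # access control gate
        return False
    fingerprint = hashlib.sha256(text.encode()).hexdigest()
    if fingerprint in COPYRIGHT_REGISTRY:          # content verification
        return False
    if classifier_score(text) >= threshold:        # ML risk classifier
        return False
    return True

print(admit_sample("famous copyrighted text", "CC-BY-4.0"))     # False: registry hit
print(admit_sample("an original permissive text", "CC-BY-4.0")) # True
print(admit_sample("an original text", "proprietary"))          # False: license gate
```

The ordering is a design choice in this sketch: the cheap license gate runs first so that expensive classifier calls are reserved for samples that already cleared the earlier layers.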
Related papers
- Copyright in AI Pre-Training Data Filtering: Regulatory Landscape and Mitigation Strategies [0.0]
The rapid advancement of general-purpose AI models has increased concerns about copyright infringement in training data. This paper examines the regulatory landscape of AI training data governance in major jurisdictions, including the EU, the United States, and the Asia-Pacific region. It also identifies critical gaps in enforcement mechanisms that threaten both creator rights and the sustainability of AI development.
arXiv Detail & Related papers (2025-11-26T14:02:45Z)
- Anti-Regulatory AI: How "AI Safety" is Leveraged Against Regulatory Oversight [0.9883261192383612]
AI companies increasingly develop and deploy privacy-enhancing technologies, bias-constraining measures, evaluation frameworks, and alignment techniques. This paper examines the ulterior function of these technologies as mechanisms of legal influence.
arXiv Detail & Related papers (2025-09-26T19:35:10Z)
- Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance [211.5823259429128]
We propose a comprehensive framework integrating technical and societal dimensions, structured around three interconnected pillars: Intrinsic Security, Derivative Security, and Social Ethics. We identify three core challenges: (1) the generalization gap, where defenses fail against evolving threats; (2) inadequate evaluation protocols that overlook real-world risks; and (3) fragmented regulations leading to inconsistent oversight. Our framework offers actionable guidance for researchers, engineers, and policymakers to develop AI systems that are not only robust and secure but also ethically aligned and publicly trustworthy.
arXiv Detail & Related papers (2025-08-12T09:42:56Z)
- Rethinking Data Protection in the (Generative) Artificial Intelligence Era [138.07763415496288]
We propose a four-level taxonomy that captures the diverse protection needs arising in modern (generative) AI models and systems. Our framework offers a structured understanding of the trade-offs between data utility and control, spanning the entire AI pipeline.
arXiv Detail & Related papers (2025-07-03T02:45:51Z)
- Toward a Global Regime for Compute Governance: Building the Pause Button [0.4952055253916912]
We propose a governance system designed to prevent AI systems from being trained by restricting access to computational resources. We identify three key intervention points -- technical, traceability, and regulatory -- and organize them within a Governance--Enforcement--Verification framework. Technical mechanisms include tamper-proof FLOP caps, model locking, and offline licensing.
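The summary above names tamper-proof FLOP caps as one technical mechanism but does not specify how they work. As a toy illustration only (a hypothetical API, not taken from the paper), a compute provider might meter jobs against a licensed FLOP budget and refuse work past the cap:

```python
class FlopCapError(RuntimeError):
    """Raised when a job would push usage past the licensed FLOP budget."""


class MeteredAccelerator:
    """Illustrative FLOP cap: every training job is metered, and work that
    would exceed the licensed compute budget is rejected before it runs.
    All names and numbers here are hypothetical placeholders."""

    def __init__(self, licensed_flops: float):
        self.licensed_flops = licensed_flops  # cap granted by the license
        self.used_flops = 0.0                 # running meter

    def run(self, job_flops: float) -> None:
        if self.used_flops + job_flops > self.licensed_flops:
            raise FlopCapError("training exceeds licensed FLOP budget")
        self.used_flops += job_flops          # job admitted; meter advances


acc = MeteredAccelerator(licensed_flops=1e25)
acc.run(6e24)   # admitted
acc.run(3e24)   # admitted
try:
    acc.run(2e24)  # would exceed the cap, so it is rejected
except FlopCapError as err:
    print(err)
```

A tamper-proof deployment would move this check into hardware or attested firmware rather than provider-side software, which is precisely the enforcement question the paper raises.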
arXiv Detail & Related papers (2025-06-25T15:18:19Z)
- Watermarking Without Standards Is Not AI Governance [46.71493672772134]
We argue that current implementations risk serving as symbolic compliance rather than delivering effective oversight. We propose a three-layer framework encompassing technical standards, audit infrastructure, and enforcement mechanisms.
arXiv Detail & Related papers (2025-05-27T18:10:04Z)
- In-House Evaluation Is Not Enough: Towards Robust Third-Party Flaw Disclosure for General-Purpose AI [93.33036653316591]
We call for three interventions to advance system safety. First, we propose using standardized AI flaw reports and rules of engagement for researchers. Second, we propose GPAI system providers adopt broadly-scoped flaw disclosure programs. Third, we advocate for the development of improved infrastructure to coordinate distribution of flaw reports.
arXiv Detail & Related papers (2025-03-21T05:09:46Z)
- Governing AI Beyond the Pretraining Frontier [0.0]
This year, jurisdictions worldwide, including the United States, the European Union, the United Kingdom, and China, are set to enact or revise laws governing frontier AI. Yet growing evidence suggests that this "pretraining paradigm" may be hitting a wall, and major AI companies are turning to alternative approaches. This essay seeks to identify these challenges and point to new paths forward for regulation.
arXiv Detail & Related papers (2025-01-27T16:25:03Z)
- Position: Mind the Gap-the Growing Disconnect Between Established Vulnerability Disclosure and AI Security [56.219994752894294]
We argue that adapting existing processes for AI security reporting is doomed to fail due to fundamental shortcomings in addressing the distinctive characteristics of AI systems. Based on our proposal to address these shortcomings, we discuss an approach to AI security reporting and how the new AI paradigm, AI agents, will further reinforce the need for specialized advancements in AI security incident reporting.
arXiv Detail & Related papers (2024-12-19T13:50:26Z)
- Governing Through the Cloud: The Intermediary Role of Compute Providers in AI Regulation [14.704747149179047]
We argue that compute providers should have legal obligations and ethical responsibilities associated with AI development and deployment.
Compute providers can play an essential role in a regulatory ecosystem via four key capacities.
arXiv Detail & Related papers (2024-03-13T13:08:16Z)
- The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seems to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.