Governing AI Beyond the Pretraining Frontier
- URL: http://arxiv.org/abs/2502.15719v1
- Date: Mon, 27 Jan 2025 16:25:03 GMT
- Title: Governing AI Beyond the Pretraining Frontier
- Authors: Nicholas A. Caputo
- Abstract summary: This year, jurisdictions worldwide, including the United States, the European Union, the United Kingdom, and China, are set to enact or revise laws governing frontier AI. Yet growing evidence suggests that this "pretraining paradigm" may be hitting a wall, and major AI companies are turning to alternative approaches. This essay seeks to identify these challenges and point to new paths forward for regulation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This year, jurisdictions worldwide, including the United States, the European Union, the United Kingdom, and China, are set to enact or revise laws governing frontier AI. Their efforts largely rely on the assumption that increasing model scale through pretraining is the path to more advanced AI capabilities. Yet growing evidence suggests that this "pretraining paradigm" may be hitting a wall, and major AI companies are turning to alternative approaches, like inference-time "reasoning," to boost capabilities instead. This paradigm shift presents fundamental challenges for the frontier AI governance frameworks that target pretraining scale as a key bottleneck useful for monitoring, control, and exclusion, threatening to undermine this new legal order as it emerges. This essay seeks to identify these challenges and point to new paths forward for regulation. First, we examine the existing frontier AI regulatory regime and analyze some key traits and vulnerabilities. Second, we introduce the concept of the "pretraining frontier," the capabilities threshold made possible by scaling up pretraining alone, and demonstrate how it could make the regulatory field more diffuse and complex and lead to new forms of competition. Third, we lay out a regulatory approach that focuses on increasing transparency and leveraging new natural technical bottlenecks to effectively oversee changing frontier AI development while minimizing regulatory burdens and protecting fundamental rights. Our analysis provides concrete mechanisms for governing frontier AI systems across diverse technical paradigms, offering policymakers tools for addressing both current and future regulatory challenges in frontier AI.
Related papers
- Frontier AI Risk Management Framework in Practice: A Risk Analysis Technical Report v1.5
This technical report presents an updated and granular assessment of five critical dimensions: cyber offense, persuasion and manipulation, strategic deception, uncontrolled AI R&D, and self-replication. This work reflects our current understanding of AI frontier risks and urges collective action to mitigate these challenges.
arXiv Detail & Related papers (2026-02-16T04:30:06Z) - Global AI Governance Overview: Understanding Regulatory Requirements Across Global Jurisdictions
The rapid advancement of general-purpose AI models has increased concerns about copyright infringement in training data. This paper examines the regulatory landscape of AI training data governance in major jurisdictions, including the EU, the United States, and the Asia-Pacific region. It also identifies critical gaps in enforcement mechanisms that threaten both creator rights and the sustainability of AI development.
arXiv Detail & Related papers (2025-11-26T13:59:11Z) - The California Report on Frontier AI Policy
Continued progress in frontier AI carries the potential for profound advances in scientific discovery, economic productivity, and broader social well-being. As the epicenter of global AI innovation, California has a unique opportunity to continue supporting developments in frontier AI. The report derives policy principles that can inform how California approaches the use, assessment, and governance of frontier AI.
arXiv Detail & Related papers (2025-06-17T23:33:21Z) - Comparing Apples to Oranges: A Taxonomy for Navigating the Global Landscape of AI Regulation
We present a taxonomy to map the global landscape of AI regulation. We apply this framework to five early movers: the European Union's AI Act, the United States' Executive Order 14110, Canada's AI and Data Act, China's Interim Measures for Generative AI Services, and Brazil's AI Bill 2338/2023.
arXiv Detail & Related papers (2025-05-19T19:23:41Z) - Securing the AI Frontier: Urgent Ethical and Regulatory Imperatives for AI-Driven Cybersecurity
This paper critically examines the evolving ethical and regulatory challenges posed by the integration of artificial intelligence in cybersecurity. We trace the historical development of AI regulation, highlighting major milestones from theoretical discussions in the 1940s to the implementation of recent global frameworks such as the European Union AI Act. Ethical concerns such as bias, transparency, accountability, privacy, and human oversight are explored in depth, along with their implications for AI-driven cybersecurity systems.
arXiv Detail & Related papers (2025-01-15T18:17:37Z) - Position: Mind the Gap-the Growing Disconnect Between Established Vulnerability Disclosure and AI Security
We argue that adapting existing processes for AI security reporting is doomed to fail because of fundamental shortcomings in addressing the distinctive characteristics of AI systems. Based on our proposal to address these shortcomings, we discuss an approach to AI security reporting and how the new AI paradigm, AI agents, will further reinforce the need for specialized advancements in AI security incident reporting.
arXiv Detail & Related papers (2024-12-19T13:50:26Z) - Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act). It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence. As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - An FDA for AI? Pitfalls and Plausibility of Approval Regulation for Frontier Artificial Intelligence
We explore the applicability of approval regulation -- that is, regulation of a product that combines experimental minima with government licensure conditioned partially or fully upon that experimentation -- to the regulation of frontier AI.
There are a number of reasons to believe that approval regulation, simplistically applied, would be inapposite for frontier AI risks.
We conclude by highlighting the role of policy learning and experimentation in regulatory development.
arXiv Detail & Related papers (2024-08-01T17:54:57Z) - Open Problems in Technical AI Governance
Technical AI governance refers to technical analysis and tools for supporting the effective governance of AI.
This paper is intended as a resource for technical researchers or research funders looking to contribute to AI governance.
arXiv Detail & Related papers (2024-07-20T21:13:56Z) - Federated Learning Priorities Under the European Union Artificial Intelligence Act
We perform a first-of-its-kind interdisciplinary analysis (legal and ML) of the impact the AI Act may have on Federated Learning.
We explore data governance issues and the concern for privacy.
Most noteworthy are the opportunities to defend against data bias and enhance private and secure computation.
arXiv Detail & Related papers (2024-02-05T19:52:19Z) - The risks of risk-based AI regulation: taking liability seriously
The development and regulation of AI seems to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z) - AI Regulation in Europe: From the AI Act to Future Regulatory Challenges
It argues for a hybrid regulatory strategy that combines elements from both philosophies.
The paper examines the AI Act as a pioneering legislative effort to address the multifaceted challenges posed by AI.
It advocates for immediate action to create protocols for regulated access to high-performance, potentially open-source AI systems.
arXiv Detail & Related papers (2023-10-06T07:52:56Z) - Both eyes open: Vigilant Incentives help Regulatory Markets improve AI Safety
Regulatory Markets for AI is a proposal designed with adaptability in mind.
It involves governments setting outcome-based targets for AI companies to achieve.
We warn that it is alarmingly easy to stumble on incentives which would prevent Regulatory Markets from achieving this goal.
arXiv Detail & Related papers (2023-03-06T14:42:05Z) - Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z) - Regulating Artificial Intelligence: Proposal for a Global Solution
We argue that AI-related challenges cannot be tackled effectively without sincere international coordination.
We propose the establishment of an international AI governance framework organized around a new AI regulatory agency.
arXiv Detail & Related papers (2020-05-22T09:24:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.