Defending Compute Thresholds Against Legal Loopholes
- URL: http://arxiv.org/abs/2502.00003v1
- Date: Fri, 03 Jan 2025 12:07:21 GMT
- Title: Defending Compute Thresholds Against Legal Loopholes
- Authors: Matteo Pistillo, Pablo Villalobos
- Abstract summary: Existing legal frameworks on AI rely on training compute thresholds as a proxy to identify potentially-dangerous AI models. We examine some enhancement techniques that are capable of decreasing training compute usage while preserving, or even increasing, model capabilities. These capability-enhancing and compute-saving techniques could constitute a legal loophole to existing training compute thresholds.
- Score: 0.7234862895932991
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing legal frameworks on AI rely on training compute thresholds as a proxy to identify potentially-dangerous AI models and trigger increased regulatory attention. In the United States, Section 4.2(a) of Executive Order 14110 instructs the Secretary of Commerce to require extensive reporting from developers of AI models above a certain training compute threshold. In the European Union, Article 51 of the AI Act establishes a presumption that AI models above a certain compute threshold have high impact capabilities and hence pose systemic risk, thus subjecting their developers to several obligations including capability evaluations, reporting, and incident monitoring. In this paper, we examine some enhancement techniques that are capable of decreasing training compute usage while preserving, or even increasing, model capabilities. Since training compute thresholds rely on training compute as a metric and trigger for increased regulatory attention, these capability-enhancing and compute-saving techniques could constitute a legal loophole to existing training compute thresholds. In particular, we concentrate on four illustrative techniques (fine-tuning, model reuse, model expansion, and above compute-optimal inference compute) with the goal of furthering the conversation about their implications on training compute thresholds as a legal mechanism and advancing policy recommendations that could address the relevant legal loopholes.
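The threshold mechanism the abstract describes can be illustrated with a back-of-the-envelope calculation. The sketch below is not from the paper: it uses the common ~6·N·D heuristic for transformer training compute (N parameters, D training tokens), the 10^26 FLOP reporting threshold from Executive Order 14110, and the 10^25 FLOP systemic-risk presumption threshold from Article 51 of the EU AI Act; the model and dataset sizes are hypothetical.

```python
# Back-of-the-envelope check of a training run against compute thresholds.
# Heuristic: training FLOP ~ 6 * parameters * training tokens.

EO_14110_THRESHOLD = 1e26   # US reporting threshold (FLOP)
EU_AI_ACT_THRESHOLD = 1e25  # EU systemic-risk presumption threshold (FLOP)

def training_flop(params: float, tokens: float) -> float:
    """Approximate training compute via the 6*N*D heuristic."""
    return 6.0 * params * tokens

def triggers(flop: float) -> list[str]:
    """Which compute thresholds does this training run cross?"""
    hits = []
    if flop >= EU_AI_ACT_THRESHOLD:
        hits.append("EU AI Act Art. 51 presumption")
    if flop >= EO_14110_THRESHOLD:
        hits.append("EO 14110 Sec. 4.2(a) reporting")
    return hits

# Hypothetical 400B-parameter model trained on 15T tokens:
base = training_flop(400e9, 15e12)     # 3.6e25 FLOP
# Fine-tuning the same model on 50B extra tokens adds comparatively little,
# so counted alone it stays far below both thresholds:
finetune = training_flop(400e9, 50e9)  # 1.2e23 FLOP

print(f"base run:  {base:.2e} FLOP -> {triggers(base)}")
print(f"fine-tune: {finetune:.2e} FLOP -> {triggers(finetune)}")
```

The fine-tuning line illustrates the loophole in miniature: if a threshold counts only the compute of the final training stage, a highly capable model can be produced whose last stage falls orders of magnitude below the trigger.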
Related papers
- General Scales Unlock AI Evaluation with Explanatory and Predictive Power [57.7995945974989]
Benchmarking has guided progress in AI, but it has offered limited explanatory and predictive power for general-purpose AI systems.
We introduce general scales for AI evaluation that can explain what common AI benchmarks really measure.
Our fully-automated methodology builds on 18 newly-crafted rubrics that place the demands of each instance on general scales that do not saturate.
arXiv Detail & Related papers (2025-03-09T01:13:56Z)
- Inference Scaling Reshapes AI Governance [0.0]
The shift from scaling up the pre-training compute of AI systems to scaling up their inference compute may have profound effects on AI governance.
The nature of these effects depends crucially on whether this new inference compute will primarily be used during external deployment or as part of a more complex training programme within the lab.
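The trade-off this entry points to can be made concrete with simple arithmetic. The sketch below (hypothetical numbers, not from the paper) shows how a lab with a fixed total FLOP budget can shift compute from pre-training to inference and thereby fall below a training compute threshold such as the EU AI Act's 10^25 FLOP presumption:

```python
# Illustrative only: a fixed total compute budget split between training
# and inference. Shifting FLOP toward inference lowers the training-time
# figure that a training compute threshold actually measures.

THRESHOLD = 1e25  # training compute threshold (FLOP), e.g. EU AI Act Art. 51

def split_budget(total_flop: float, inference_share: float) -> tuple[float, float]:
    """Split a total FLOP budget into (training, inference) portions."""
    inference = total_flop * inference_share
    return total_flop - inference, inference

budget = 3e25  # hypothetical fixed total budget (FLOP)
for share in (0.0, 0.5, 0.8):
    train, infer = split_budget(budget, share)
    flag = "ABOVE" if train >= THRESHOLD else "below"
    print(f"inference share {share:.0%}: training {train:.1e} FLOP ({flag} threshold)")
```

At an 80% inference share the same overall budget yields a training run below the threshold, which is why where the inference compute is spent (external deployment vs. an internal training programme) matters for governance.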
arXiv Detail & Related papers (2025-02-12T22:04:16Z)
- A Comprehensive Framework for Reliable Legal AI: Combining Specialized Expert Systems and Adaptive Refinement [0.0]
The article proposes a novel framework combining expert systems with a knowledge-based architecture to improve the precision and contextual relevance of AI-driven legal services.
This framework utilizes specialized modules, each focusing on specific legal areas, and incorporates structured operational guidelines to enhance decision-making.
The proposed approach demonstrates significant improvements over existing AI models, showcasing enhanced performance in legal tasks and offering a scalable solution to provide more accessible and affordable legal services.
arXiv Detail & Related papers (2024-12-29T14:00:11Z)
- The Dilemma of Uncertainty Estimation for General Purpose AI in the EU AI Act [6.9060054915724]
The AI Act is the European Union-wide regulation of AI systems.
We argue that uncertainty estimation should be a required component for deploying models in the real world.
arXiv Detail & Related papers (2024-08-20T23:59:51Z)
- Training Compute Thresholds: Features and Functions in AI Regulation [0.7234862895932991]
Regulators in the US and EU are using thresholds based on training compute to identify GPAI models that may pose risks of large-scale societal harm.
We argue that training compute currently is the most suitable metric to identify GPAI models that deserve regulatory oversight and further scrutiny.
As GPAI technology and market structures evolve, regulators should update compute thresholds and incorporate complementary metrics into regulatory review processes.
arXiv Detail & Related papers (2024-05-17T14:10:24Z)
- Governing Through the Cloud: The Intermediary Role of Compute Providers in AI Regulation [14.704747149179047]
We argue that compute providers should have legal obligations and ethical responsibilities associated with AI development and deployment.
Compute providers can play an essential role in a regulatory ecosystem via four key capacities.
arXiv Detail & Related papers (2024-03-13T13:08:16Z)
- Computing Power and the Governance of Artificial Intelligence [51.967584623262674]
Governments and companies have started to leverage compute as a means to govern AI.
Compute-based policies and technologies have the potential to assist in these areas, but there is significant variation in their readiness for implementation.
Naive or poorly scoped approaches to compute governance carry significant risks in areas like privacy, economic impacts, and centralization of power.
arXiv Detail & Related papers (2024-02-13T21:10:21Z)
- Mathematical Algorithm Design for Deep Learning under Societal and Judicial Constraints: The Algorithmic Transparency Requirement [65.26723285209853]
We derive a framework to analyze whether a transparent implementation in a computing model is feasible.
Based on previous results, we find that Blum-Shub-Smale Machines have the potential to establish trustworthy solvers for inverse problems.
arXiv Detail & Related papers (2024-01-18T15:32:38Z)
- GAT: Guided Adversarial Training with Pareto-optimal Auxiliary Tasks [73.88590165742721]
We propose a novel adversarial training technique that exploits auxiliary tasks under a limited set of training data.
Our approach extends single-task models into multi-task models during the min-max optimization of adversarial training.
We demonstrate that guided multi-task learning is an actionable and promising avenue to push further the boundaries of model robustness.
arXiv Detail & Related papers (2023-02-06T16:23:24Z)
- Flexible Attention-Based Multi-Policy Fusion for Efficient Deep Reinforcement Learning [78.31888150539258]
Reinforcement learning (RL) agents have long sought to approach the efficiency of human learning.
Prior studies in RL have incorporated external knowledge policies to help agents improve sample efficiency.
We present Knowledge-Grounded RL (KGRL), an RL paradigm fusing multiple knowledge policies and aiming for human-like efficiency and flexibility.
arXiv Detail & Related papers (2022-10-07T17:56:57Z)
- Certification of Iterative Predictions in Bayesian Neural Networks [79.15007746660211]
We compute lower bounds for the probability that trajectories of the BNN model reach a given set of states while avoiding a set of unsafe states.
We use the lower bounds in the context of control and reinforcement learning to provide safety certification for given control policies.
arXiv Detail & Related papers (2021-05-21T05:23:57Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.