Certified Safe: A Schematic for Approval Regulation of Frontier AI
- URL: http://arxiv.org/abs/2408.06210v1
- Date: Mon, 12 Aug 2024 15:01:03 GMT
- Title: Certified Safe: A Schematic for Approval Regulation of Frontier AI
- Authors: Cole Salvador
- Abstract summary: An approval regulation scheme is one in which a firm cannot legally market, or in some cases develop, a product without explicit approval from a regulator.
This report proposes an approval regulation schematic for only the largest AI projects in which scrutiny begins before training and continues through to post-deployment monitoring.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Recent and unremitting capability advances have been accompanied by calls for comprehensive, rather than patchwork, regulation of frontier artificial intelligence (AI). Approval regulation is emerging as a promising candidate. An approval regulation scheme is one in which a firm cannot legally market, or in some cases develop, a product without explicit approval from a regulator on the basis of experiments performed upon the product that demonstrate its safety. This approach is used successfully by the FDA and FAA. Further, its application to frontier AI has been publicly supported by many prominent stakeholders. This report proposes an approval regulation schematic for only the largest AI projects in which scrutiny begins before training and continues through to post-deployment monitoring. The centerpieces of the schematic are two major approval gates, the first requiring approval for large-scale training and the second for deployment. Five main challenges make implementation difficult: noncompliance through unsanctioned deployment, specification of deployment readiness requirements, reliable model experimentation, filtering out safe models before the process, and minimizing regulatory overhead. This report makes a number of crucial recommendations to increase the feasibility of approval regulation, some of which must be followed urgently if such a regime is to succeed in the near future. Further recommendations, produced by this report's analysis, may improve the effectiveness of any regulatory regime for frontier AI.
Related papers
- Auction-Based Regulation for Artificial Intelligence [28.86995747151915]
We propose an auction-based regulatory mechanism for AI safety.
We provably guarantee that each participating agent's best strategy is to submit a model safer than a prescribed minimum-safety threshold.
Empirical results show that our regulatory auction boosts safety and participation rates by 20% and 15% respectively.
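As a rough illustration of why a safety auction can make exceeding a threshold the best response, the toy simulation below is a minimal sketch under assumed parameters; the licence value V, safety cost c, threshold, and single-winner allocation rule are all hypothetical stand-ins, not the paper's mechanism.

```python
import numpy as np

# Toy safety auction (hypothetical; not the paper's mechanism).
# Each firm picks a safety level s in [0, 1] at cost c * s. Bids below
# the threshold are rejected; the safest remaining bid wins a single
# deployment licence worth V.
V, c, threshold = 10.0, 2.0, 0.5

def expected_payoff(s, rival_bids):
    if s < threshold:
        return 0.0                     # rejected outright
    win_prob = np.mean(s > rival_bids) # beat the (random) rival bids
    return win_prob * V - c * s        # licence value minus safety cost

rng = np.random.default_rng(1)
rivals = rng.uniform(threshold, 1.0, size=10_000)  # rivals bid above threshold
grid = np.linspace(0.0, 1.0, 101)
best = max(grid, key=lambda s: expected_payoff(s, rivals))
print(best)  # lands strictly above the 0.5 threshold in this toy setting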
arXiv Detail & Related papers (2024-10-02T17:57:02Z)
- Conformal Generative Modeling with Improved Sample Efficiency through Sequential Greedy Filtering [55.15192437680943]
Generative models lack rigorous statistical guarantees for their outputs.
We propose a sequential conformal prediction method producing prediction sets that satisfy a rigorous statistical guarantee.
This guarantee states that with high probability, the prediction sets contain at least one admissible (or valid) example.
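The guarantee referenced here is a conformal coverage statement. The following is a minimal split-conformal sketch of that kind of guarantee (not the paper's sequential greedy filtering procedure), assuming a held-out calibration set of admissibility scores and a hypothetical score_fn.

```python
import numpy as np

def calibrate_threshold(cal_scores, alpha=0.1):
    # cal_scores[i]: admissibility score of a known-admissible output
    # for held-out calibration prompt i. Returns t such that a fresh
    # admissible output scores >= t with probability >= 1 - alpha
    # (assuming exchangeability of calibration and test prompts).
    s = np.sort(np.asarray(cal_scores))
    k = int(np.floor(alpha * (len(s) + 1)))
    return -np.inf if k < 1 else s[k - 1]

def prediction_set(candidates, score_fn, t):
    # Keep every generated candidate clearing the threshold; with
    # probability >= 1 - alpha the set contains an admissible example.
    return [c for c in candidates if score_fn(c) >= t]

# Hypothetical usage, with raw scores standing in for generated outputs.
rng = np.random.default_rng(0)
t = calibrate_threshold(rng.uniform(0.3, 1.0, size=500), alpha=0.1)
kept = prediction_set(list(rng.uniform(0.0, 1.0, size=20)), lambda c: c, t)
```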
arXiv Detail & Related papers (2024-10-02T15:26:52Z)
- An FDA for AI? Pitfalls and Plausibility of Approval Regulation for Frontier Artificial Intelligence [0.0]
We explore the applicability of approval regulation -- that is, regulation of a product that combines experimental minima with government licensure conditioned partially or fully upon that experimentation -- to the regulation of frontier AI.
There are a number of reasons to believe that approval regulation, simplistically applied, would be inapposite for frontier AI risks.
We conclude by highlighting the role of policy learning and experimentation in regulatory development.
arXiv Detail & Related papers (2024-08-01T17:54:57Z)
- From Principles to Rules: A Regulatory Approach for Frontier AI [2.1764247401772705]
Regulators may require frontier AI developers to adopt safety measures.
The requirements could be formulated as high-level principles or specific rules.
These regulatory approaches, known as 'principle-based' and 'rule-based' regulation, have complementary strengths and weaknesses.
arXiv Detail & Related papers (2024-07-10T01:45:15Z)
- Rethinking Legal Compliance Automation: Opportunities with Large Language Models [2.9088208525097365]
We argue that the examination of (textual) legal artifacts should, first, employ a broader context than individual sentences.
We present a compliance analysis approach designed to address these limitations.
arXiv Detail & Related papers (2024-04-22T17:10:27Z)
- A Safe Harbor for AI Evaluation and Red Teaming [124.89885800509505]
Some researchers fear that conducting such research or releasing their findings will result in account suspensions or legal reprisal.
We propose that major AI developers commit to providing a legal and technical safe harbor.
We believe these commitments are a necessary step towards more inclusive and unimpeded community efforts to tackle the risks of generative AI.
arXiv Detail & Related papers (2024-03-07T20:55:08Z)
- The Decisive Power of Indecision: Low-Variance Risk-Limiting Audits and Election Contestation via Marginal Mark Recording [51.82772358241505]
Risk-limiting audits (RLAs) are techniques for verifying the outcomes of large elections.
We define new families of audits that improve efficiency and offer advances in statistical power.
New audits are enabled by revisiting the standard notion of a cast-vote record so that it can declare multiple possible mark interpretations.
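For context, the classical ballot-polling RLA that such audits build on can be sketched in a few lines. The following is the textbook BRAVO test for a two-candidate contest, a simplified illustration rather than the paper's new audit families.

```python
import random

def bravo_rla(ballots, reported_winner_share, alpha=0.05, max_draws=10_000):
    # Textbook BRAVO ballot-polling audit: sample ballots (with
    # replacement, for simplicity) and accumulate a likelihood ratio
    # against the null hypothesis that the race is actually tied.
    assert reported_winner_share > 0.5
    t = 1.0
    for _ in range(max_draws):
        ballot = random.choice(ballots)
        if ballot == "winner":
            t *= reported_winner_share / 0.5
        else:
            t *= (1.0 - reported_winner_share) / 0.5
        if t >= 1.0 / alpha:
            return "certify"          # risk limit alpha satisfied
    return "full hand count"          # audit did not confirm the outcome

# Hypothetical contest: the reported winner holds 55% of cast ballots.
ballots = ["winner"] * 5500 + ["loser"] * 4500
print(bravo_rla(ballots, reported_winner_share=0.55))
```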
arXiv Detail & Related papers (2024-02-09T16:23:54Z)
- The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seems to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z)
- Validation-Driven Development [54.50263643323]
This paper introduces a validation-driven development (VDD) process that prioritizes validating requirements in formal development.
The effectiveness of the VDD process is demonstrated through a case study in the aviation industry.
arXiv Detail & Related papers (2023-08-11T09:15:26Z)
- Frontier AI Regulation: Managing Emerging Risks to Public Safety [15.85618115026625]
"Frontier AI" models could possess dangerous capabilities sufficient to pose severe risks to public safety.
Industry self-regulation is an important first step.
We propose an initial set of safety standards.
arXiv Detail & Related papers (2023-07-06T17:03:25Z)
- Both eyes open: Vigilant Incentives help Regulatory Markets improve AI Safety [69.59465535312815]
Regulatory Markets for AI is a proposal designed with adaptability in mind.
It involves governments setting outcome-based targets for AI companies to achieve.
We warn that it is alarmingly easy to stumble on incentive structures that would prevent Regulatory Markets from achieving this goal.
arXiv Detail & Related papers (2023-03-06T14:42:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.