Regulating ChatGPT and other Large Generative AI Models
- URL: http://arxiv.org/abs/2302.02337v8
- Date: Fri, 12 May 2023 11:35:23 GMT
- Title: Regulating ChatGPT and other Large Generative AI Models
- Authors: Philipp Hacker, Andreas Engel, Marco Mauer
- Abstract summary: Large generative AI models (LGAIMs) are rapidly transforming the way we communicate, illustrate, and create.
This paper will situate these new generative models in the current debate on trustworthy AI regulation.
It suggests a novel terminology to capture the AI value chain in LGAIM settings.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Large generative AI models (LGAIMs), such as ChatGPT, GPT-4 or Stable
Diffusion, are rapidly transforming the way we communicate, illustrate, and
create. However, AI regulation, in the EU and beyond, has primarily focused on
conventional AI models, not LGAIMs. This paper will situate these new
generative models in the current debate on trustworthy AI regulation, and ask
how the law can be tailored to their capabilities. After laying technical
foundations, the legal part of the paper proceeds in four steps, covering (1)
direct regulation, (2) data protection, (3) content moderation, and (4) policy
proposals. It suggests a novel terminology to capture the AI value chain in
LGAIM settings by differentiating between LGAIM developers, deployers,
professional and non-professional users, as well as recipients of LGAIM output.
We tailor regulatory duties to these different actors along the value chain and
suggest strategies to ensure that LGAIMs are trustworthy and deployed for the
benefit of society at large. Rules in the AI Act and other direct regulation
must match the specificities of pre-trained models. The paper argues for three
layers of obligations concerning LGAIMs (minimum standards for all LGAIMs;
high-risk obligations for high-risk use cases; collaborations along the AI
value chain). In general, regulation should focus on concrete high-risk
applications, and not the pre-trained model itself, and should include (i)
obligations regarding transparency and (ii) risk management. Non-discrimination
provisions (iii) may, however, apply to LGAIM developers. Lastly, (iv) the core
of the DSA content moderation rules should be expanded to cover LGAIMs. This
includes notice and action mechanisms, and trusted flaggers. In all areas,
regulators and lawmakers need to act fast to keep pace with the dynamics of
ChatGPT et al.
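To make the proposed value-chain terminology and the three obligation layers concrete, here is a minimal Python sketch. Every identifier in it (the Actor roles, the duty labels, the duties() mapping) is an illustrative assumption distilled from the abstract, not a definition taken from the paper or from the AI Act.

```python
from enum import Enum, auto

# Hypothetical encoding of the abstract's value-chain terminology. Role and
# duty names are illustrative labels, not definitions from the paper.
class Actor(Enum):
    DEVELOPER = auto()              # pre-trains and releases the LGAIM
    DEPLOYER = auto()               # integrates it into a product or service
    PROFESSIONAL_USER = auto()      # uses the deployed system commercially
    NON_PROFESSIONAL_USER = auto()  # private, non-commercial use
    RECIPIENT = auto()              # is exposed to LGAIM output

# The three proposed layers of obligations, again as illustrative labels.
MINIMUM_STANDARDS = {"transparency", "risk_management"}  # for all LGAIMs
HIGH_RISK_OBLIGATIONS = {"conformity_assessment", "human_oversight"}
VALUE_CHAIN_DUTIES = {"information_sharing", "cooperation"}

def duties(actor: Actor, high_risk_use_case: bool) -> set[str]:
    """Rough mapping of obligations to one actor along the value chain."""
    if actor is Actor.RECIPIENT:
        return set()  # recipients are protected by the rules, not bound by them
    d = MINIMUM_STANDARDS | VALUE_CHAIN_DUTIES
    # High-risk duties attach to the concrete use case, not to the
    # pre-trained model itself, following the abstract's argument.
    if high_risk_use_case and actor in (Actor.DEPLOYER, Actor.PROFESSIONAL_USER):
        d = d | HIGH_RISK_OBLIGATIONS
    # Non-discrimination provisions may reach back to developers.
    if actor is Actor.DEVELOPER:
        d = d | {"non_discrimination"}
    return d

print(sorted(duties(Actor.DEPLOYER, high_risk_use_case=True)))
```

The only point of the sketch is the structure: minimum standards apply across the board, the high-risk layer triggers on the application rather than the model, and a few provisions (here, non-discrimination) reach back to developers.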
Related papers
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI (the standard ERM objective is sketched after this list).
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Auction-Based Regulation for Artificial Intelligence [28.86995747151915]
We propose an auction-based regulatory mechanism to regulate AI safety.
We provably guarantee that each participating agent's best strategy is to submit a model safer than a prescribed minimum-safety threshold.
Empirical results show that our regulatory auction boosts safety and participation rates by 20% and 15%, respectively (a toy sketch of such a mechanism follows this list).
arXiv Detail & Related papers (2024-10-02T17:57:02Z)
- From Principles to Rules: A Regulatory Approach for Frontier AI [2.1764247401772705]
Regulators may require frontier AI developers to adopt safety measures.
The requirements could be formulated as high-level principles or specific rules.
These regulatory approaches, known as 'principle-based' and 'rule-based' regulation, have complementary strengths and weaknesses.
arXiv Detail & Related papers (2024-07-10T01:45:15Z)
- Securing the Future of GenAI: Policy and Technology [50.586585729683776]
Governments globally are grappling with the challenge of regulating GenAI, balancing innovation against safety.
A workshop co-organized by Google, the University of Wisconsin–Madison, and Stanford University aimed to bridge this gap between GenAI policy and technology.
This paper summarizes the workshop discussions, which addressed questions such as: how can regulation be designed without hindering technological progress?
arXiv Detail & Related papers (2024-05-21T20:30:01Z)
- Taxonomy to Regulation: A (Geo)Political Taxonomy for AI Risks and Regulatory Measures in the EU AI Act [0.0]
This work proposes a taxonomy focusing on (geo)political risks associated with AI.
It identifies 12 risks in total, divided into four categories: (1) Geopolitical Pressures, (2) Malicious Usage, (3) Environmental, Social, and Ethical Risks, and (4) Privacy and Trust Violations.
arXiv Detail & Related papers (2024-04-17T15:32:56Z)
- The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seem to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z)
- Frontier AI Regulation: Managing Emerging Risks to Public Safety [15.85618115026625]
"Frontier AI" models could possess dangerous capabilities sufficient to pose severe risks to public safety.
Industry self-regulation is an important first step.
We propose an initial set of safety standards.
arXiv Detail & Related papers (2023-07-06T17:03:25Z)
- Both eyes open: Vigilant Incentives help Regulatory Markets improve AI Safety [69.59465535312815]
Regulatory Markets for AI is a proposal designed with adaptability in mind.
It involves governments setting outcome-based targets for AI companies to achieve.
We warn that it is alarmingly easy to stumble on incentives which would prevent Regulatory Markets from achieving this goal.
arXiv Detail & Related papers (2023-03-06T14:42:05Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles that AI regulation should take to make the AI Act a success in addressing AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
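Two entries above lend themselves to short technical illustrations, as flagged in the list. First, the empirical-risk-minimization framing: the standard textbook ERM objective is

```latex
\hat{f} \;=\; \arg\min_{f \in \mathcal{F}} \;
  \frac{1}{n} \sum_{i=1}^{n} \ell\bigl(f(x_i),\, y_i\bigr) \;+\; \lambda\, \Omega(f)
```

where the trustworthiness-relevant design choices are the hypothesis class F, the loss function, the regularizer, and the training data itself; the cited guide's contribution is mapping trustworthiness requirements onto those choices.

Second, a toy sketch of the auction idea. The cost model, subsidy, threshold, and grid search below are all assumptions made for illustration; this is not the cited paper's mechanism or its formal guarantee.

```python
# Toy model: a regulator admits only models at or above a minimum-safety
# threshold and awards a fixed subsidy (e.g., a deployment permit) to the
# safest admitted entrant. All numbers are illustrative assumptions.
THRESHOLD = 0.6  # prescribed minimum-safety level
SUBSIDY = 1.0    # value of winning the auction

def cost(safety: float, unit_cost: float) -> float:
    """Convex cost of training a safer model (a modeling assumption)."""
    return unit_cost * safety ** 2

def best_response(unit_cost: float, rival_safety: float) -> float:
    """Grid-search one agent's best safety level against a single rival bid."""
    levels = [i / 100 for i in range(101)]

    def utility(s: float) -> float:
        wins = s >= THRESHOLD and s > rival_safety
        return (SUBSIDY if wins else 0.0) - cost(s, unit_cost)

    return max(levels, key=utility)

for unit_cost in (0.5, 2.0, 3.0):
    s = best_response(unit_cost, rival_safety=0.65)
    print(f"unit_cost={unit_cost}: best response safety = {s:.2f}")
```

In this toy, low- and mid-cost agents bid just above the rival's threshold-exceeding safety level, while the high-cost agent stays out rather than shading safety below the threshold; the cited paper's contribution is proving a guarantee of this qualitative shape for its actual mechanism.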
This list is automatically generated from the titles and abstracts of the papers on this site.