Regulating ChatGPT and other Large Generative AI Models
- URL: http://arxiv.org/abs/2302.02337v8
- Date: Fri, 12 May 2023 11:35:23 GMT
- Title: Regulating ChatGPT and other Large Generative AI Models
- Authors: Philipp Hacker, Andreas Engel, Marco Mauer
- Abstract summary: Large generative AI models (LGAIMs) are rapidly transforming the way we communicate, illustrate, and create.
This paper will situate these new generative models in the current debate on trustworthy AI regulation.
It suggests a novel terminology to capture the AI value chain in LGAIM settings.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Large generative AI models (LGAIMs), such as ChatGPT, GPT-4 or Stable
Diffusion, are rapidly transforming the way we communicate, illustrate, and
create. However, AI regulation, in the EU and beyond, has primarily focused on
conventional AI models, not LGAIMs. This paper will situate these new
generative models in the current debate on trustworthy AI regulation, and ask
how the law can be tailored to their capabilities. After laying technical
foundations, the legal part of the paper proceeds in four steps, covering (1)
direct regulation, (2) data protection, (3) content moderation, and (4) policy
proposals. It suggests a novel terminology to capture the AI value chain in
LGAIM settings by differentiating between LGAIM developers, deployers,
professional and non-professional users, as well as recipients of LGAIM output.
We tailor regulatory duties to these different actors along the value chain and
suggest strategies to ensure that LGAIMs are trustworthy and deployed for the
benefit of society at large. Rules in the AI Act and other direct regulation
must match the specificities of pre-trained models. The paper argues for three
layers of obligations concerning LGAIMs (minimum standards for all LGAIMs;
high-risk obligations for high-risk use cases; collaborations along the AI
value chain). In general, regulation should focus on concrete high-risk
applications, and not the pre-trained model itself, and should include (i)
obligations regarding transparency and (ii) risk management. Non-discrimination
provisions (iii) may, however, apply to LGAIM developers. Lastly, (iv) the core
of the DSA content moderation rules should be expanded to cover LGAIMs. This
includes notice and action mechanisms, and trusted flaggers. In all areas,
regulators and lawmakers need to act fast to keep pace with the dynamics of
ChatGPT et al.
Related papers
- From Principles to Rules: A Regulatory Approach for Frontier AI
Regulators may require frontier AI developers to adopt safety measures.
The requirements could be formulated as high-level principles or specific rules.
These regulatory approaches, known as 'principle-based' and 'rule-based' regulation, have complementary strengths and weaknesses.
arXiv Detail & Related papers (2024-07-10T01:45:15Z)
- Securing the Future of GenAI: Policy and Technology
Governments globally are grappling with the challenge of regulating GenAI, balancing innovation against safety.
A workshop co-organized by Google, the University of Wisconsin-Madison, and Stanford University aimed to bridge this gap between GenAI policy and technology.
This paper summarizes the workshop discussions, which addressed questions such as how regulation can be designed without hindering technological progress.
arXiv Detail & Related papers (2024-05-21T20:30:01Z)
- Taxonomy to Regulation: A (Geo)Political Taxonomy for AI Risks and Regulatory Measures in the EU AI Act
This work proposes a taxonomy focusing on (geo)political risks associated with AI.
It identifies 12 risks in total divided into four categories: (1) Geopolitical Pressures, (2) Malicious Usage, (3) Environmental, Social, and Ethical Risks, and (4) Privacy and Trust Violations.
arXiv Detail & Related papers (2024-04-17T15:32:56Z)
- SoFA: Shielded On-the-fly Alignment via Priority Rule Following
This paper introduces a novel alignment paradigm, priority rule following, which defines rules as the primary control mechanism in each dialog.
We present PriorityDistill, a semi-automated approach for distilling priority following signals from simulations to ensure robust rule integration and adherence.
arXiv Detail & Related papers (2024-02-27T09:52:27Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making
"Responsible AI" emphasizes the critical importance of addressing bias as part of building a responsible corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- The risks of risk-based AI regulation: taking liability seriously
The development and regulation of AI seems to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z)
- Frontier AI Regulation: Managing Emerging Risks to Public Safety
"Frontier AI" models could possess dangerous capabilities sufficient to pose severe risks to public safety.
Industry self-regulation is an important first step.
We propose an initial set of safety standards.
arXiv Detail & Related papers (2023-07-06T17:03:25Z)
- Statutory Professions in AI governance and their consequences for explainable AI
Intentional and accidental harms arising from the use of AI have impacted the health, safety and rights of individuals.
We propose that a statutory profession framework be introduced as a necessary part of the AI regulatory framework.
arXiv Detail & Related papers (2023-06-15T08:51:28Z)
- Both eyes open: Vigilant Incentives help Regulatory Markets improve AI Safety
Regulatory Markets for AI is a proposal designed with adaptability in mind.
It involves governments setting outcome-based targets for AI companies to achieve.
We warn that it is alarmingly easy to stumble on incentives which would prevent Regulatory Markets from achieving this goal.
arXiv Detail & Related papers (2023-03-06T14:42:05Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences arising from its use.