Generative AI regulation can learn from social media regulation
- URL: http://arxiv.org/abs/2412.11335v1
- Date: Sun, 15 Dec 2024 23:00:29 GMT
- Title: Generative AI regulation can learn from social media regulation
- Authors: Ruth Elisabeth Appel
- Abstract summary: I argue that the debates on generative AI regulation can be informed by the debates and evidence on social media regulation.
I compare and contrast the affordances of generative AI and social media to highlight their similarities and differences.
I discuss specific policy recommendations based on the evolution of social media and their regulation.
- Abstract: There is strong agreement that generative AI should be regulated, but strong disagreement on how to approach regulation. While some argue that AI regulation should mostly rely on extensions of existing laws, others argue that entirely new laws and regulations are needed to ensure that generative AI benefits society. In this paper, I argue that the debates on generative AI regulation can be informed by the debates and evidence on social media regulation. For example, AI companies have faced allegations of political bias regarding the images and text their models produce, similar to the allegations social media companies have faced regarding content ranking on their platforms. First, I compare and contrast the affordances of generative AI and social media to highlight their similarities and differences. Then, I discuss specific policy recommendations based on the evolution of social media and their regulation. These recommendations include investments in: efforts to counter bias and perceptions thereof (e.g., via transparency, researcher access, oversight boards, democratic input, research studies), specific areas of regulatory concern (e.g., youth wellbeing, election integrity) and trust and safety, computational social science research, and a more global perspective. Applying lessons learnt from social media regulation to generative AI regulation can save effort and time, and prevent avoidable mistakes.
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Securing the Future of GenAI: Policy and Technology
Governments globally are grappling with the challenge of regulating GenAI, balancing innovation against safety.
A workshop co-organized by Google, the University of Wisconsin-Madison, and Stanford University aimed to bridge the gap between GenAI policy and technology.
This paper summarizes the discussions during the workshop, which addressed questions such as: How can regulation be designed without hindering technological progress?
arXiv Detail & Related papers (2024-05-21T20:30:01Z)
- Regulating Chatbot Output via Inter-Informational Competition
This Article develops a yardstick for reevaluating both AI-related content risks and corresponding regulatory proposals.
It argues that sufficient competition among information outlets in the information marketplace can mitigate, and even resolve, most content risks posed by generative AI technologies.
arXiv Detail & Related papers (2024-03-17T00:11:15Z)
- Regulating AI-Based Remote Biometric Identification. Investigating the Public Demand for Bans, Audits, and Public Database Registrations
The study focuses on the role of trust in AI as well as trust in law enforcement as potential factors that may lead to demands for regulation of AI technology.
We show that perceptions of discrimination lead to demand for stronger regulation, while trust in AI and trust in law enforcement have the opposite effect, reducing demand for a ban on remote biometric identification (RBI) systems.
arXiv Detail & Related papers (2024-01-24T17:22:33Z)
- AI Ethics and Ordoliberalism 2.0: Towards A 'Digital Bill of Rights'
This article analyzes AI ethics from a distinct business ethics perspective, i.e., 'ordoliberalism 2.0'.
It argues that the ongoing discourse on (generative) AI relies too much on corporate self-regulation and voluntary codes of conduct.
The paper suggests merging already existing AI guidelines with an ordoliberal-inspired regulatory and competition policy.
arXiv Detail & Related papers (2023-10-27T10:26:12Z)
- Regulation and NLP (RegNLP): Taming Large Language Models
We argue that NLP research can benefit from proximity to regulatory studies and adjacent fields.
We advocate for the development of a new multidisciplinary research space on regulation and NLP.
arXiv Detail & Related papers (2023-10-09T09:22:40Z)
- Fairness in AI and Its Long-Term Implications on Society
We take a closer look at AI fairness and analyze how a lack of AI fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Both eyes open: Vigilant Incentives help Regulatory Markets improve AI Safety
Regulatory Markets for AI is a proposal designed with adaptability in mind.
It involves governments setting outcome-based targets for AI companies to achieve.
We warn that it is alarmingly easy to stumble on incentives that would prevent Regulatory Markets from improving AI safety.
arXiv Detail & Related papers (2023-03-06T14:42:05Z)
- Law Informs Code: A Legal Informatics Approach to Aligning Artificial Intelligence with Humans
Law-making and legal interpretation form a computational engine that converts opaque human values into legible directives.
"Law Informs Code" is the research agenda capturing complex computational legal processes, and embedding them in AI.
arXiv Detail & Related papers (2022-09-14T00:49:09Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI regulation should play to make the AI Act a success with respect to AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Data, Power and Bias in Artificial Intelligence
Artificial Intelligence has the potential to exacerbate societal bias and set back decades of advances in equal rights and civil liberty.
Data used to train machine learning algorithms may capture social injustices, inequality, or discriminatory attitudes, which can then be learned and perpetuated in society.
This paper reviews ongoing work to ensure data justice, fairness and bias mitigation in AI systems from different domains.
arXiv Detail & Related papers (2020-07-28T16:17:40Z)