Communication Bias in Large Language Models: A Regulatory Perspective
- URL: http://arxiv.org/abs/2509.21075v1
- Date: Thu, 25 Sep 2025 12:25:06 GMT
- Title: Communication Bias in Large Language Models: A Regulatory Perspective
- Authors: Adrian Kuenzler, Stefan Schmid
- Abstract summary: This paper reviews risks of biased outputs and their societal impact, focusing on frameworks like the EU's AI Act and the Digital Services Act. We argue that beyond constant regulation, stronger attention to competition and design governance is needed to ensure fair, trustworthy AI.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) are increasingly central to many applications, raising concerns about bias, fairness, and regulatory compliance. This paper reviews risks of biased outputs and their societal impact, focusing on frameworks like the EU's AI Act and the Digital Services Act. We argue that beyond constant regulation, stronger attention to competition and design governance is needed to ensure fair, trustworthy AI. This is a preprint of the Communications of the ACM article of the same title.
Related papers
- Will Power Return to the Clouds? From Divine Authority to GenAI Authority
Generative AI systems now mediate newsfeeds, search rankings, and creative content for hundreds of millions of users. This article juxtaposes the Galileo Affair, a touchstone of clerical knowledge control, with contemporary Big-Tech content moderation.
arXiv Detail & Related papers (2025-11-27T18:59:44Z)
- Enhancements for Developing a Comprehensive AI Fairness Assessment Standard
This paper proposes an expansion of the TEC Standard to include fairness assessments for images, unstructured text, and generative AI. By incorporating these dimensions, the enhanced framework will promote responsible and trustworthy AI deployment across various sectors.
arXiv Detail & Related papers (2025-04-10T07:24:23Z)
- Media and responsible AI governance: a game-theoretic and LLM analysis
This paper investigates the interplay between AI developers, regulators, users, and the media in fostering trustworthy AI systems. Using evolutionary game theory and large language models (LLMs), we model the strategic interactions among these actors under different regulatory regimes.
arXiv Detail & Related papers (2025-03-12T21:39:38Z)
- Generative AI as Digital Media
Generative AI is frequently portrayed as revolutionary or even apocalyptic. This essay argues that such views are misguided. Instead, generative AI should be understood as an evolutionary step in the broader algorithmic media landscape.
arXiv Detail & Related papers (2025-03-09T08:58:17Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act). It draws on insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence. Applying these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI
We argue that the 'bigger is better' AI paradigm is not only fragile scientifically, but comes with undesirable consequences. First, it is not sustainable, as, despite efficiency improvements, its compute demands increase faster than model performance. Second, it implies focusing on certain problems at the expense of others, leaving aside important applications, e.g. health, education, or the climate.
arXiv Detail & Related papers (2024-09-21T14:43:54Z)
- Regulating Chatbot Output via Inter-Informational Competition
This Article develops a yardstick for reevaluating both AI-related content risks and corresponding regulatory proposals.
It argues that sufficient competition among information outlets in the information marketplace can mitigate and even resolve most content risks posed by generative AI technologies.
arXiv Detail & Related papers (2024-03-17T00:11:15Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- The risks of risk-based AI regulation: taking liability seriously
The development and regulation of AI seem to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z)
- Regulation and NLP (RegNLP): Taming Large Language Models
We argue that NLP research can benefit from proximity to regulatory studies and adjacent fields.
We advocate for the development of a new multidisciplinary research space on regulation and NLP.
arXiv Detail & Related papers (2023-10-09T09:22:40Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.