Local Differences, Global Lessons: Insights from Organisation Policies for International Legislation
- URL: http://arxiv.org/abs/2503.05737v1
- Date: Wed, 19 Feb 2025 15:59:09 GMT
- Title: Local Differences, Global Lessons: Insights from Organisation Policies for International Legislation
- Authors: Lucie-Aimée Kaffee, Pepa Atanasova, Anna Rogers
- Abstract summary: This paper examines AI policies in two domains, news organisations and universities, to understand how bottom-up governance approaches shape AI usage and oversight. We identify key areas of convergence and divergence in how organisations address risks such as bias, privacy, misinformation, and accountability. We argue that lessons from domain-specific AI policies can contribute to more adaptive and effective AI governance at the global level.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid adoption of AI across diverse domains has led to the development of organisational guidelines that vary significantly, even within the same sector. This paper examines AI policies in two domains, news organisations and universities, to understand how bottom-up governance approaches shape AI usage and oversight. By analysing these policies, we identify key areas of convergence and divergence in how organisations address risks such as bias, privacy, misinformation, and accountability. We then explore the implications of these findings for international AI legislation, particularly the EU AI Act, highlighting gaps where practical policy insights could inform regulatory refinements. Our analysis reveals that organisational policies often address issues such as AI literacy, disclosure practices, and environmental impact, areas that are underdeveloped in existing international frameworks. We argue that lessons from domain-specific AI policies can contribute to more adaptive and effective AI governance at the global level. This study provides actionable recommendations for policymakers seeking to bridge the gap between local AI practices and international regulations.
Related papers
- Trustworthiness of Legal Considerations for the Use of LLMs in Education
This paper offers a comparative analysis of AI-related regulatory and ethical frameworks across key global regions. It maps how core trustworthiness principles, such as transparency, fairness, accountability, data privacy, and human oversight, are embedded in regional legislation and AI governance structures. The paper contributes practical guidance for building legally sound, ethically grounded, and culturally sensitive AI systems in education.
arXiv Detail & Related papers (2025-08-05T07:44:33Z)
- Advancing Science- and Evidence-based AI Policy
This paper tackles the problem of how to optimize the relationship between evidence and policy to address the opportunities and challenges of AI. An increasing number of efforts address this problem, often by either (i) contributing research into the risks of AI and their effective mitigation or (ii) advocating for policy to address these risks.
arXiv Detail & Related papers (2025-08-02T23:20:58Z)
- Bottom-Up Perspectives on AI Governance: Insights from User Reviews of AI Products
This study adopts a bottom-up approach to explore how governance-relevant themes are expressed in user discourse. Drawing on over 100,000 user reviews of AI products from G2.com, we apply BERTopic to extract latent themes and identify those most semantically related to AI governance.
arXiv Detail & Related papers (2025-05-30T01:33:21Z)
- Promising Topics for U.S.-China Dialogues on AI Risks and Governance
Despite strategic competition, there exist concrete opportunities for bilateral U.S.-China cooperation in the development of responsible AI. We analyze more than 40 primary AI policy and corporate governance documents from both nations. Our analysis contributes to understanding how different international governance frameworks might be harmonized to promote global responsible AI development.
arXiv Detail & Related papers (2025-05-12T11:56:19Z)
- AI Governance in the GCC States: A Comparative Analysis of National AI Strategies
Gulf Cooperation Council (GCC) states increasingly adopt Artificial Intelligence (AI) to drive economic diversification and enhance services. This paper investigates the evolving AI governance landscape across the six GCC nations: the United Arab Emirates, Saudi Arabia, Qatar, Oman, Bahrain, and Kuwait. Findings highlight a "soft regulation" approach that emphasizes national strategies and ethical principles rather than binding regulations.
arXiv Detail & Related papers (2025-05-04T16:25:52Z)
- In-House Evaluation Is Not Enough: Towards Robust Third-Party Flaw Disclosure for General-Purpose AI
We call for three interventions to advance system safety.
First, we propose using standardized AI flaw reports and rules of engagement for researchers.
Second, we propose that GPAI system providers adopt broadly-scoped flaw disclosure programs.
Third, we advocate for the development of improved infrastructure to coordinate the distribution of flaw reports.
arXiv Detail & Related papers (2025-03-21T05:09:46Z)
- Media and responsible AI governance: a game-theoretic and LLM analysis
This paper investigates the interplay between AI developers, regulators, users, and the media in fostering trustworthy AI systems. Using evolutionary game theory and large language models (LLMs), we model the strategic interactions among these actors under different regulatory regimes.
arXiv Detail & Related papers (2025-03-12T21:39:38Z)
- Between Innovation and Oversight: A Cross-Regional Study of AI Risk Management Frameworks in the EU, U.S., UK, and China
This paper conducts a comparative analysis of AI risk management strategies across the European Union, United States, United Kingdom (UK), and China. The findings show that the EU implements a structured, risk-based framework that prioritizes transparency and conformity assessments. The U.S. relies on decentralized, sector-specific regulations that promote innovation but may lead to fragmented enforcement.
arXiv Detail & Related papers (2025-02-25T18:52:17Z)
- The Fundamental Rights Impact Assessment (FRIA) in the AI Act: Roots, legal obligations and key elements for a model template
This article aims to fill existing gaps in the theoretical and methodological elaboration of the Fundamental Rights Impact Assessment (FRIA). It outlines the main building blocks of a model template for the FRIA, which can serve as a blueprint for other national and international regulatory initiatives to ensure that AI is fully consistent with human rights.
arXiv Detail & Related papers (2024-11-07T11:55:55Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act), using insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence. As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- The potential functions of an international institution for AI safety. Insights from adjacent policy areas and recent trends
The OECD, the G7, the G20, UNESCO, and the Council of Europe have already started developing frameworks for ethical and responsible AI governance.
This chapter reflects on what functions an international AI safety institute could perform.
arXiv Detail & Related papers (2024-08-31T10:04:53Z)
- Operationalizing the Blueprint for an AI Bill of Rights: Recommendations for Practitioners, Researchers, and Policy Makers
Several regulatory frameworks have been introduced by different countries worldwide.
Many of these frameworks emphasize the need for auditing and improving the trustworthiness of AI tools.
Although these regulatory frameworks highlight the necessity of enforcement, practitioners often lack detailed guidance on implementing them.
We provide easy-to-understand summaries of state-of-the-art literature and highlight various gaps that exist between regulatory guidelines and existing AI research.
arXiv Detail & Related papers (2024-07-11T17:28:07Z)
- Responsible AI Considerations in Text Summarization Research: A Review of Current Practices
We focus on text summarization, a common NLP task largely overlooked by the responsible AI community.
We conduct a multi-round qualitative analysis of 333 summarization papers from the ACL Anthology published between 2020 and 2022.
We focus on how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals.
arXiv Detail & Related papers (2023-11-18T15:35:36Z)
- Regulation and NLP (RegNLP): Taming Large Language Models
We argue that NLP research can benefit from proximity to regulatory studies and adjacent fields.
We advocate for the development of a new multidisciplinary research space on regulation and NLP.
arXiv Detail & Related papers (2023-10-09T09:22:40Z)
- Worldwide AI Ethics: a review of 200 guidelines and recommendations for AI governance
This paper conducts a meta-analysis of 200 governance policies and ethical guidelines for AI usage published by public bodies, academic institutions, private companies, and civil society organizations worldwide.
We identify at least 17 resonating principles prevalent in the policies and guidelines of our dataset, released as an open-source database and tool.
We present the limitations of performing a global-scale analysis, paired with a critical analysis of our findings, and identify areas of consensus that should be incorporated into future regulatory efforts.
arXiv Detail & Related papers (2022-06-23T18:03:04Z)
- AI Federalism: Shaping AI Policy within States in Germany
Recent AI governance research has focused heavily on the analysis of strategy papers and ethics guidelines for AI published by national governments and international bodies.
Subnational institutions have also published documents on Artificial Intelligence, yet these have been largely absent from policy analyses.
This is surprising because AI is connected to many policy areas, such as economic or research policy, where the competences are already distributed between the national and subnational level.
arXiv Detail & Related papers (2021-10-28T16:06:07Z)
- Decision Rule Elicitation for Domain Adaptation
Human-in-the-loop machine learning is widely used in artificial intelligence (AI) to elicit labels from experts.
In this work, we allow experts to additionally produce decision rules describing their decision-making.
We show that decision rule elicitation improves domain adaptation of the algorithm and helps to propagate experts' knowledge to the AI model.
arXiv Detail & Related papers (2021-02-23T08:07:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.