Navigating AI Policy Landscapes: Insights into Human Rights Considerations Across IEEE Regions
- URL: http://arxiv.org/abs/2504.19264v1
- Date: Sun, 27 Apr 2025 14:44:42 GMT
- Title: Navigating AI Policy Landscapes: Insights into Human Rights Considerations Across IEEE Regions
- Authors: Angel Mary John, Jerrin Thomas Panachakel, Anusha S. P.
- Abstract summary: This paper explores the integration of human rights considerations into AI regulatory frameworks across different IEEE regions. The U.S. promotes innovation with less restrictive regulations, while Europe exhibits stringent protections for individual rights. China emphasizes state control and societal order in its AI strategies, while Singapore's advisory framework encourages self-regulation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper explores the integration of human rights considerations into AI regulatory frameworks across different IEEE regions - specifically the United States (Regions 1-6), Europe (Region 8), China (part of Region 10), and Singapore (part of Region 10). While all acknowledge the transformative potential of AI and the necessity of ethical guidelines, their regulatory approaches differ significantly. Europe exhibits a rigorous framework with stringent protections for individual rights, while the U.S. promotes innovation with less restrictive regulations. China emphasizes state control and societal order in its AI strategies. In contrast, Singapore's advisory framework encourages self-regulation and aligns closely with international norms. This comparative analysis underlines the need for ongoing global dialogue to harmonize AI regulations that safeguard human rights while promoting technological advancement, reflecting the diverse perspectives and priorities of each region.
Related papers
- Enhancing Trust Through Standards: A Comparative Risk-Impact Framework for Aligning ISO AI Standards with Global Ethical and Regulatory Contexts [0.0]
ISO standards aim to foster responsible development by embedding fairness, transparency, and risk management into AI systems. Their effectiveness varies across diverse regulatory landscapes, from the EU's risk-based AI Act to China's stability-focused measures. This paper introduces a novel Comparative Risk-Impact Assessment Framework to evaluate how well ISO standards address ethical risks.
arXiv Detail & Related papers (2025-04-22T00:44:20Z)
- Towards Adaptive AI Governance: Comparative Insights from the U.S., EU, and Asia [0.0]
This study conducts a comparative analysis of AI trends in the United States (US), the European Union (EU), and Asia. It focuses on three key dimensions: generative AI, ethical oversight, and industrial applications. The US prioritizes market-driven innovation with minimal regulatory constraints, the EU enforces a precautionary risk-based framework emphasizing ethical safeguards, and Asia employs state-guided AI strategies that balance rapid deployment with regulatory oversight.
arXiv Detail & Related papers (2025-04-01T11:05:47Z)
- Ethical Implications of AI in Data Collection: Balancing Innovation with Privacy [0.0]
This article examines the ethical and legal implications of artificial intelligence (AI) driven data collection, focusing on developments from 2023 to 2024. It compares regulatory approaches in the European Union, the United States, and China, highlighting the challenges in creating a globally harmonized framework for AI governance. The article emphasizes the need for adaptive governance and international cooperation to address the global nature of AI development.
arXiv Detail & Related papers (2025-03-17T14:15:59Z)
- Media and responsible AI governance: a game-theoretic and LLM analysis [61.132523071109354]
This paper investigates the interplay between AI developers, regulators, users, and the media in fostering trustworthy AI systems. Using evolutionary game theory and large language models (LLMs), we model the strategic interactions among these actors under different regulatory regimes.
arXiv Detail & Related papers (2025-03-12T21:39:38Z)
- Local Differences, Global Lessons: Insights from Organisation Policies for International Legislation [22.476305606415995]
This paper examines AI policies in two domains, news organisations and universities, to understand how bottom-up governance approaches shape AI usage and oversight.
We identify key areas of convergence and divergence in how organisations address risks such as bias, privacy, misinformation, and accountability.
We argue that lessons from domain-specific AI policies can contribute to more adaptive and effective AI governance at the global level.
arXiv Detail & Related papers (2025-02-19T15:59:09Z)
- Technology as uncharted territory: Contextual integrity and the notion of AI as new ethical ground [55.2480439325792]
I argue that efforts to promote responsible and ethical AI can inadvertently contribute to and seemingly legitimize this disregard for established contextual norms. I question the current narrow prioritization in AI ethics of moral innovation over moral preservation.
arXiv Detail & Related papers (2024-12-06T15:36:13Z)
- The Fundamental Rights Impact Assessment (FRIA) in the AI Act: Roots, legal obligations and key elements for a model template [55.2480439325792]
This article aims to fill existing gaps in the theoretical and methodological elaboration of the Fundamental Rights Impact Assessment (FRIA).
This article outlines the main building blocks of a model template for the FRIA.
It can serve as a blueprint for other national and international regulatory initiatives to ensure that AI is fully consistent with human rights.
arXiv Detail & Related papers (2024-11-07T11:55:55Z)
- Comparative Global AI Regulation: Policy Perspectives from the EU, China, and the US [0.0]
This paper compares three distinct approaches taken by the EU, China and the US.
Within the US, we explore AI regulation at both the federal and state level, with a focus on California's pending Senate Bill 1047.
arXiv Detail & Related papers (2024-10-05T18:08:48Z)
- Securing the Future of GenAI: Policy and Technology [50.586585729683776]
Governments globally are grappling with the challenge of regulating GenAI, balancing innovation against safety.
A workshop co-organized by Google, the University of Wisconsin-Madison, and Stanford University aimed to bridge this gap between GenAI policy and technology.
This paper summarizes the discussions during the workshop, which addressed questions such as: How can regulation be designed without hindering technological progress?
arXiv Detail & Related papers (2024-05-21T20:30:01Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- AI Alignment: A Comprehensive Survey [69.61425542486275]
AI alignment aims to make AI systems behave in line with human intentions and values. We identify four principles as the key objectives of AI alignment: Robustness, Interpretability, Controllability, and Ethicality. We decompose current alignment research into two key components: forward alignment and backward alignment.
arXiv Detail & Related papers (2023-10-30T15:52:15Z)
- Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework [0.9622882291833615]
This paper proposes an alternative contextual, coherent, and commensurable (3C) framework for regulating artificial intelligence (AI).
To ensure contextuality, the framework bifurcates the AI life cycle into two phases: learning and deployment for specific tasks, instead of defining foundation or general-purpose models.
To ensure commensurability, the framework promotes the adoption of international standards for measuring and mitigating risks.
arXiv Detail & Related papers (2023-03-20T15:23:40Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.