Missing Value Chain in Generative AI Governance: China as an example
- URL: http://arxiv.org/abs/2401.02799v1
- Date: Fri, 5 Jan 2024 13:28:25 GMT
- Title: Missing Value Chain in Generative AI Governance: China as an example
- Authors: Yulu Pi
- Abstract summary: China's Provisional Administrative Measures of Generative Artificial Intelligence Services came into effect in August 2023.
The Measures present unclear distinctions regarding the different roles in the value chain of Generative AI.
The lack of clear distinctions and legal status among the different players in the AI value chain can have profound consequences.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We examined the world's first regulation on Generative AI, China's
Provisional Administrative Measures of Generative Artificial Intelligence
Services, which came into effect in August 2023. Our assessment reveals that
the Measures, while recognizing the technical advances of generative AI and
seeking to govern its full life cycle, present unclear distinctions regarding
the different roles in the value chain of Generative AI, including upstream
foundation model providers and downstream deployers. The lack of clear
distinctions and legal status among the different players in the AI value
chain can have profound consequences: it can lead to ambiguity in
accountability, potentially undermining the governance and overall success of
AI services.
Related papers
- The AI Pentad, the CHARME$^{2}$D Model, and an Assessment of Current-State AI Regulation [5.231576332164012]
This paper aims to establish a unifying model for AI regulation from the perspective of core AI components.
We first introduce the AI Pentad, which comprises the five essential components of AI.
We then review AI regulatory enablers, including AI registration and disclosure, AI monitoring, and AI enforcement mechanisms.
arXiv Detail & Related papers (2025-03-08T22:58:41Z) - AI Governance InternationaL Evaluation Index (AGILE Index) [15.589972522113754]
The rapid advancement of Artificial Intelligence (AI) technology is profoundly transforming human society.
Since 2022, the extensive deployment of generative AI, particularly large language models, has marked a new phase in AI governance.
As consensus on international governance continues to form and be put into action, the practical importance of a global assessment of the state of AI governance is becoming increasingly clear.
The inaugural evaluation of the AGILE Index begins with four foundational pillars: the development level of AI, the AI governance environment, the AI governance instruments, and the AI governance effectiveness.
arXiv Detail & Related papers (2025-02-21T10:16:56Z) - Universal AI maximizes Variational Empowerment [0.0]
We build on the existing framework of Self-AIXI -- a universal learning agent that predicts its own actions.
We argue that power-seeking tendencies of universal AI agents can be explained as an instrumental strategy to secure future reward.
Our main contribution is to show how these motivations systematically lead universal AI agents to seek and sustain high-optionality states.
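As a brief formal aside not drawn from this abstract: in the empowerment literature, a state's "optionality" is standardly quantified as the channel capacity between an agent's actions and its future states, and variational empowerment replaces the intractable mutual information with a learned lower bound. A minimal sketch of that standard definition, assuming the paper follows the conventional notation:

    \mathfrak{E}(s) = \max_{p(a_{1:n})} I(a_{1:n};\, s' \mid s)

    I(a;\, s' \mid s) \ge \mathbb{E}_{p(a \mid s)\, p(s' \mid s, a)}
        \big[ \log q_\phi(a \mid s', s) - \log p(a \mid s) \big]

Here q_\phi is a variational posterior over actions given the observed successor state; maximizing the bound drives an agent toward states from which many distinct futures remain reachable, i.e. high-optionality states.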
arXiv Detail & Related papers (2025-02-20T02:58:44Z) - Agentic AI: Autonomy, Accountability, and the Algorithmic Society [0.2209921757303168]
Agentic Artificial Intelligence (AI) can autonomously pursue long-term goals, make decisions, and execute complex, multi-turn tasks.
This transition from advisory roles to proactive execution challenges established legal, economic, and creative frameworks.
We explore challenges in three interrelated domains: creativity and intellectual property, legal and ethical considerations, and competitive effects.
arXiv Detail & Related papers (2025-02-01T03:14:59Z) - Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act),
using insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - AI Governance and Accountability: An Analysis of Anthropic's Claude [0.0]
This paper examines the AI governance landscape, focusing on Anthropic's Claude, a foundational AI model.
We analyze Claude through the lens of the NIST AI Risk Management Framework and the EU AI Act, identifying potential threats and proposing mitigation strategies.
arXiv Detail & Related papers (2024-05-02T23:37:06Z) - Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
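As a purely hypothetical illustration (the actual APIs of Bias On Demand and FairView have not been verified here), a "bias on demand" synthetic-data generator of the kind the thesis describes might look like the sketch below; the function make_biased_dataset, its parameters, and the feature names are all invented for this example:

    import numpy as np

    def make_biased_dataset(n=1000, bias_strength=0.5, seed=0):
        # Hypothetical 'bias on demand' generator: synthesize a dataset in
        # which a protected attribute leaks into the label via a tunable knob.
        rng = np.random.default_rng(seed)
        protected = rng.integers(0, 2, size=n)   # group membership (0 or 1)
        skill = rng.normal(0.0, 1.0, size=n)     # legitimate, task-relevant feature
        # Historical bias: the recorded outcome depends partly on group
        # membership rather than only on the legitimate feature.
        score = skill + bias_strength * (protected - 0.5)
        label = (score > 0).astype(int)
        return np.column_stack([skill, protected]), label

    X, y = make_biased_dataset(bias_strength=0.8)
    # Positive-outcome rates per group diverge as bias_strength grows.
    print(y[X[:, 1] == 0].mean(), y[X[:, 1] == 1].mean())

Sweeping bias_strength from zero upward yields a controlled test bed for the thesis's "understanding bias" and "mitigating bias" pillars, since the injected bias is known exactly.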
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of it can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks, with severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - The European AI Liability Directives -- Critique of a Half-Hearted Approach and Lessons for the Future [0.0]
The European Commission advanced two proposals outlining the European approach to AI liability in September 2022.
The latter does not contain any individual rights for affected persons, and the former lacks specific, substantive rules on AI development and deployment.
Taken together, these acts may well trigger a Brussels Effect in AI regulation, with significant consequences for the US and beyond.
I propose to jump-start sustainable AI regulation via sustainability impact assessments in the AI Act and sustainable design defects in the liability regime.
arXiv Detail & Related papers (2022-11-25T09:08:11Z) - Aligning Artificial Intelligence with Humans through Public Policy [0.0]
This essay outlines research on AI systems that learn structures in policy data that can be leveraged for downstream tasks.
We believe this represents the "comprehension" phase of AI and policy, but leveraging policy as a key source of human values to align AI requires "understanding" policy.
arXiv Detail & Related papers (2022-06-25T21:31:14Z) - Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should play to make the AI Act a success in addressing AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z) - Putting AI Ethics into Practice: The Hourglass Model of Organizational AI Governance [0.0]
We present an AI governance framework, which targets organizations that develop and use AI systems.
The framework is designed to help organizations deploying AI systems translate ethical AI principles into practice.
arXiv Detail & Related papers (2022-06-01T08:55:27Z) - Trustworthy AI: From Principles to Practices [44.67324097900778]
Many current AI systems have been found vulnerable to imperceptible attacks, biased against underrepresented groups, lacking in user privacy protection, and so on.
In this review, we strive to provide AI practitioners a comprehensive guide towards building trustworthy AI systems.
To unify the current fragmented approaches towards trustworthy AI, we propose a systematic approach that considers the entire lifecycle of AI systems.
arXiv Detail & Related papers (2021-10-04T03:20:39Z) - An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)