Openness in AI and downstream governance: A global value chain approach
- URL: http://arxiv.org/abs/2509.10220v1
- Date: Fri, 12 Sep 2025 13:12:09 GMT
- Title: Openness in AI and downstream governance: A global value chain approach
- Authors: Christopher Foster
- Abstract summary: Openness in AI highlights an emerging ecosystem of open AI models, datasets and toolchains. It poses questions as to whether open resources can support technological transfer and the ability for catch-up, even in the face of AI industry power. This work extends previous mapping of AI value chains to build a framework which links foundational AI with downstream value chains.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The rise of AI has been rapid, becoming a leading sector for investment and promising disruptive impacts across the economy. Within the critical analysis of the economic impacts, AI has been aligned to the critical literature on data power and platform capitalism - further concentrating power and value capture amongst a small number of "big tech" leaders. The equally rapid rise of openness in AI (here taken to be claims made by AI firms about openness, "open source" and free provision) signals an interesting development. It highlights an emerging ecosystem of open AI models, datasets and toolchains, involving massive capital investment. It poses questions as to whether open resources can support technological transfer and the ability for catch-up, even in the face of AI industry power. This work seeks to add conceptual clarity to these debates by conceptualising openness in AI as a unique type of interfirm relation and therefore amenable to value chain analysis. This approach then allows consideration of the capitalist dynamics of "outsourcing" of foundational firms in value chains, and consequently the types of governance and control that might emerge downstream as AI is adopted. This work, therefore, extends previous mapping of AI value chains to build a framework which links foundational AI with downstream value chains. Overall, this work extends our understanding of AI as a productive sector. While the work remains critical of the power of leading AI firms, openness in AI may lead to potential spillovers stemming from the intense competition for global technological leadership in AI.
Related papers
- AI+HW 2035: Shaping the Next Decade [135.53570243498987]
Artificial intelligence (AI) and hardware (HW) are advancing at unprecedented rates, yet their trajectories have become inseparably intertwined. This vision paper lays out a 10-year roadmap for AI+HW co-design and co-development, spanning algorithms, architectures, systems, and sustainability. We identify key challenges and opportunities, candidly assess potential obstacles and pitfalls, and propose integrated solutions.
arXiv Detail & Related papers (2026-03-05T14:36:33Z) - Should AI Become an Intergenerational Civil Right? [2.7937298764423573]
We argue that access to AI should not be treated solely as a commercial service, but as a fundamental civil interest requiring explicit protection. We propose recognizing access to AI as an Intergenerational Civil Right, establishing a legal and ethical framework that safeguards present-day inclusion and the rights of future generations.
arXiv Detail & Related papers (2025-12-09T20:22:16Z) - AI-Based Crypto Tokens: The Illusion of Decentralized AI? [0.10878040851637999]
AI-tokens are cryptographic assets designed to power decentralized AI platforms and services. This paper provides a comprehensive review of leading AI-token projects. We assess the extent to which they offer value beyond traditional centralized AI services.
arXiv Detail & Related papers (2025-04-29T13:44:33Z) - Overview of AI and Communication for 6G Network: Fundamentals, Challenges, and Future Research Opportunities [148.601430677814]
This paper presents a comprehensive overview of AI and communication for 6G networks. We first review the driving factors behind incorporating AI into wireless communications, as well as the vision for the convergence of AI and 6G. The discourse then transitions to a detailed exposition of the envisioned integration of AI within 6G networks.
arXiv Detail & Related papers (2024-12-19T05:36:34Z) - Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI [67.58673784790375]
We argue that the 'bigger is better' AI paradigm is not only fragile scientifically, but comes with undesirable consequences. First, it is not sustainable, as, despite efficiency improvements, its compute demands increase faster than model performance. Second, it implies focusing on certain problems at the expense of others, leaving aside important applications, e.g. health, education, or the climate.
arXiv Detail & Related papers (2024-09-21T14:43:54Z) - Near to Mid-term Risks and Opportunities of Open-Source Generative AI [94.06233419171016]
Applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about potential risks and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source Generative AI.
arXiv Detail & Related papers (2024-04-25T21:14:24Z) - Trust, Accountability, and Autonomy in Knowledge Graph-based AI for Self-determination [1.4305544869388402]
Knowledge Graphs (KGs) have emerged as fundamental platforms for powering intelligent decision-making.
The integration of KGs with neuronal learning is currently a topic of active research.
This paper conceptualises the foundational topics and research pillars to support KG-based AI for self-determination.
arXiv Detail & Related papers (2023-10-30T12:51:52Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z) - Is Decentralized AI Safer? [0.0]
Various groups are building open AI systems, investigating their risks, and discussing their ethics.
In this paper, we demonstrate how blockchain technology can facilitate and formalize these efforts.
We argue that decentralizing AI can help mitigate AI risks and ethical concerns, while also introducing new issues that should be considered in future work.
arXiv Detail & Related papers (2022-11-04T01:01:31Z) - AI Governance and Ethics Framework for Sustainable AI and Sustainability [0.0]
There are many emerging AI risks for humanity, such as autonomous weapons, automation-spurred job loss, socio-economic inequality, bias caused by data and algorithms, privacy violations and deepfakes.
Social diversity, equity and inclusion are considered key success factors of AI to mitigate risks, create values and drive social justice.
In our journey towards an AI-enabled sustainable future, we need to address AI ethics and governance as a priority.
arXiv Detail & Related papers (2022-09-28T22:23:10Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.