Should AI Become an Intergenerational Civil Right?
- URL: http://arxiv.org/abs/2512.11892v1
- Date: Tue, 09 Dec 2025 20:22:16 GMT
- Title: Should AI Become an Intergenerational Civil Right?
- Authors: Jon Crowcroft, Rute C. Sofia, Dirk Trossen, Vassilis Tsaoussidis
- Abstract summary: We argue that access to AI should not be treated solely as a commercial service, but as a fundamental civil interest requiring explicit protection. We propose recognizing access to AI as an *Intergenerational Civil Right*, establishing a legal and ethical framework that safeguards present-day inclusion and the rights of future generations.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial Intelligence (AI) is rapidly becoming a foundational layer of social, economic, and cognitive infrastructure. At the same time, the training and large-scale deployment of AI systems rely on finite and unevenly distributed energy, networking, and computational resources. This tension exposes a largely unexamined problem in current AI governance: while expanding access to AI is essential for social inclusion and equal opportunity, unconstrained growth in AI use risks unsustainable resource consumption, whereas restricting access threatens to entrench inequality and undermine basic rights. This paper argues that access to AI outputs, largely derived from publicly produced knowledge, should not be treated solely as a commercial service, but as a fundamental civil interest requiring explicit protection. We show that existing regulatory frameworks largely ignore the coupling between equitable access and resource constraints, leaving critical questions of fairness, sustainability, and long-term societal impact unresolved. To address this gap, we propose recognizing access to AI as an *Intergenerational Civil Right*, establishing a legal and ethical framework that simultaneously safeguards present-day inclusion and the rights of future generations. Beyond normative analysis, we explore how this principle can be technically realized. Drawing on emerging paradigms in IoT--Edge--Cloud computing, decentralized inference, and energy-aware networking, we outline technological trajectories and a strawman architecture for AI Delivery Networks that support equitable access under strict resource constraints. By framing AI as a shared social infrastructure rather than a discretionary market commodity, this work connects governance principles with concrete system design choices, offering a pathway toward AI deployment that is both socially just and environmentally sustainable.
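The abstract's "AI Delivery Networks that support equitable access under strict resource constraints" are not specified further in this listing. As a purely illustrative sketch of the underlying idea (the scheduler, group names, and budget below are all assumptions, not the paper's design), a max-min fair allocator shows one way a delivery network could divide a capped compute budget across users without any group monopolizing it:

```python
# Illustrative sketch only (not from the paper): max-min fair allocation of a
# fixed inference-compute budget across user groups, one plausible scheduling
# policy for equitable access under a hard resource cap.

def max_min_fair(demands, budget):
    """Allocate `budget` units of compute across `demands` using max-min
    fairness (water-filling): no group receives more than it requested,
    and any unused share is redistributed to still-unsatisfied groups."""
    allocation = {user: 0.0 for user in demands}
    remaining = dict(demands)          # unmet demand per group
    budget_left = float(budget)
    while budget_left > 1e-9 and remaining:
        share = budget_left / len(remaining)   # equal split of what is left
        satisfied = []
        for user, need in remaining.items():
            grant = min(share, need)
            allocation[user] += grant
            budget_left -= grant
            if grant >= need - 1e-9:
                satisfied.append(user)         # fully served this round
            else:
                remaining[user] = need - grant
        for user in satisfied:
            del remaining[user]
        if not satisfied:                      # everyone capped at equal share
            break
    return allocation

demands = {"school": 2.0, "hospital": 5.0, "startup": 9.0}
print(max_min_fair(demands, budget=12.0))
# → {'school': 2.0, 'hospital': 5.0, 'startup': 5.0}
```

Small demands are fully met while the largest requester absorbs the shortfall, which is the equity property the abstract gestures at; an actual AI Delivery Network would additionally weigh energy cost and network locality.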
Related papers
- AI+HW 2035: Shaping the Next Decade
Artificial intelligence (AI) and hardware (HW) are advancing at unprecedented rates, yet their trajectories have become inseparably intertwined. This vision paper lays out a 10-year roadmap for AI+HW co-design and co-development, spanning algorithms, architectures, systems, and sustainability. We identify key challenges and opportunities, candidly assess potential obstacles and pitfalls, and propose integrated solutions.
arXiv Detail & Related papers (2026-03-05T14:36:33Z) - The economic alignment problem of artificial intelligence
We argue that developing advanced AI inside a growth-based system is likely to increase social, environmental, and existential risks. We show that post-growth research offers concepts and policies that could substantially reduce AI risks.
arXiv Detail & Related papers (2026-02-25T12:22:46Z) - The AI Pyramid: A Conceptual Framework for Workforce Capability in the Age of AI
Recent evidence shows that generative AI disproportionately affects highly educated, white-collar work. This paper proposes the AI Pyramid, a conceptual framework for organizing human capability in an AI-mediated economy. The framework has implications for organizations, education systems, and governments seeking to align learning, measurement, and policy with the evolving demands of AI-mediated work.
arXiv Detail & Related papers (2026-01-10T09:27:56Z) - Openness in AI and downstream governance: A global value chain approach
Openness in AI highlights an emerging ecosystem of open AI models, datasets, and toolchains. It poses questions as to whether open resources can support technological transfer and the ability to catch up, even in the face of AI industry power. This work extends previous mapping of AI value chains to build a framework which links foundational AI with downstream value chains.
arXiv Detail & Related papers (2025-09-12T13:12:09Z) - Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance
We propose a comprehensive framework integrating technical and societal dimensions, structured around three interconnected pillars: Intrinsic Security, Derivative Security, and Social Ethics. We identify three core challenges: (1) the generalization gap, where defenses fail against evolving threats; (2) inadequate evaluation protocols that overlook real-world risks; and (3) fragmented regulations leading to inconsistent oversight. Our framework offers actionable guidance for researchers, engineers, and policymakers to develop AI systems that are not only robust and secure but also ethically aligned and publicly trustworthy.
arXiv Detail & Related papers (2025-08-12T09:42:56Z) - Resource Rational Contractualism Should Guide AI Alignment
Contractualist alignment proposes grounding decisions in agreements that diverse stakeholders would endorse. We propose Resource-Rational Contractualism (RRC): a framework where AI systems approximate the agreements rational parties would form. An RRC-aligned agent would not only operate efficiently, but also be equipped to dynamically adapt to and interpret the ever-changing human social world.
arXiv Detail & Related papers (2025-06-20T18:57:13Z) - Climate And Resource Awareness is Imperative to Achieving Sustainable AI (and Preventing a Global AI Arms Race)
We argue that reconciling climate and resource awareness is essential to realizing the full potential of sustainable AI. We introduce the Climate and Resource Aware Machine Learning (CARAML) framework to address this conflict.
arXiv Detail & Related papers (2025-02-27T11:54:10Z) - Decentralized Governance of Autonomous AI Agents
ETHOS is a decentralized governance (DeGov) model leveraging Web3 technologies, including blockchain, smart contracts, and decentralized autonomous organizations (DAOs). It establishes a global registry for AI agents, enabling dynamic risk classification, proportional oversight, and automated compliance monitoring. By integrating philosophical principles of rationality, ethical grounding, and goal alignment, ETHOS aims to create a robust research agenda for promoting trust, transparency, and participatory governance.
arXiv Detail & Related papers (2024-12-22T18:01:49Z) - Exploiting the Margin: How Capitalism Fuels AI at the Expense of Minoritized Groups
This paper explores the relationship between capitalism, racial injustice, and artificial intelligence (AI).
It argues that AI acts as a contemporary vehicle for age-old forms of exploitation.
The paper promotes an approach that integrates social justice and equity into the core of technological design and policy.
arXiv Detail & Related papers (2024-03-10T22:40:07Z) - Computing Power and the Governance of Artificial Intelligence
Governments and companies have started to leverage compute as a means to govern AI.
Compute-based policies and technologies have the potential to assist in these areas, but there is significant variation in their readiness for implementation.
Naive or poorly scoped approaches to compute governance carry significant risks in areas like privacy, economic impacts, and centralization of power.
arXiv Detail & Related papers (2024-02-13T21:10:21Z) - Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making
"Responsible AI" emphasizes the critical importance of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - Fairness in AI and Its Long-Term Implications on Society
We take a closer look at AI fairness and analyze how a lack of AI fairness can lead to a deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - AI Governance and Ethics Framework for Sustainable AI and Sustainability [0.0]
There are many emerging AI risks for humanity, such as autonomous weapons, automation-spurred job loss, socio-economic inequality, bias caused by data and algorithms, privacy violations and deepfakes.
Social diversity, equity and inclusion are considered key success factors of AI to mitigate risks, create values and drive social justice.
In our journey towards an AI-enabled sustainable future, we need to address AI ethics and governance as a priority.
arXiv Detail & Related papers (2022-09-28T22:23:10Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help bridge these stakeholder perspectives by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.