Futurity as Infrastructure: A Techno-Philosophical Interpretation of the AI Lifecycle
- URL: http://arxiv.org/abs/2508.15680v1
- Date: Thu, 21 Aug 2025 16:00:13 GMT
- Title: Futurity as Infrastructure: A Techno-Philosophical Interpretation of the AI Lifecycle
- Authors: Mark Cote, Susana Aires
- Abstract summary: This paper argues that a techno-philosophical reading of the EU AI Act provides insight into the long-term dynamics of data in AI systems. We introduce a conceptual tool to frame the AI pipeline, spanning data, training regimes, architectures, feature stores, and transfer learning.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper argues that a techno-philosophical reading of the EU AI Act provides insight into the long-term dynamics of data in AI systems, specifically, how the lifecycle from ingestion to deployment generates recursive value chains that challenge existing frameworks for Responsible AI. We introduce a conceptual tool to frame the AI pipeline, spanning data, training regimes, architectures, feature stores, and transfer learning. Using cross-disciplinary methods, we develop a technically grounded and philosophically coherent analysis of regulatory blind spots. Our central claim is that what remains absent from policymaking is an account of the dynamic of becoming that underpins both the technical operation and economic logic of AI. To address this, we advance a formal reading of AI inspired by Simondonian philosophy of technology, reworking his concept of individuation to model the AI lifecycle, including the pre-individual milieu, individuation, and individuated AI. To translate these ideas, we introduce futurity: the self-reinforcing lifecycle of AI, where more data enhances performance, deepens personalisation, and expands application domains. Futurity highlights the recursively generative, non-rivalrous nature of data, underpinned by infrastructures like feature stores that enable feedback, adaptation, and temporal recursion. Our intervention foregrounds escalating power asymmetries, particularly the tech oligarchy whose infrastructures of capture, training, and deployment concentrate value and decision-making. We argue that effective regulation must address these infrastructural and temporal dynamics, and propose measures including lifecycle audits, temporal traceability, feedback accountability, recursion transparency, and a right to contest recursive reuse.
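The self-reinforcing lifecycle the abstract calls futurity, where deployment outputs are re-ingested as training inputs via a feature store, can be sketched as a toy model. Everything here is illustrative: the class, function, and origin labels are assumptions for exposition, not constructs from the paper, and "training" is reduced to a running average.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class FeatureStore:
    """Minimal stand-in for a feature store: values plus provenance."""
    features: list = field(default_factory=list)
    provenance: list = field(default_factory=list)  # (timestamp, origin) per value

    def ingest(self, value, origin):
        # Tagging each value with a timestamp and origin is a crude form of
        # the 'temporal traceability' the paper proposes for regulation.
        self.features.append(value)
        self.provenance.append((datetime.now(timezone.utc), origin))

def lifecycle_cycle(store, raw_batch):
    """One turn of the recursive lifecycle: ingest, train, deploy, feed back."""
    for x in raw_batch:
        store.ingest(x, origin="ingestion")
    # 'Training' is just an average here; it stands in for model fitting.
    model = sum(store.features) / len(store.features)
    # The deployed output is written back into the store: data is
    # non-rivalrous, so each cycle enlarges the basis for the next one.
    store.ingest(model, origin="deployment-feedback")
    return model

store = FeatureStore()
m1 = lifecycle_cycle(store, [1.0, 2.0, 3.0])
m2 = lifecycle_cycle(store, [4.0])
# After two cycles the store holds raw inputs and derived outputs alike,
# each tagged with its origin, so recursive reuse remains auditable.
```

Note how the second cycle's "model" already depends on the first cycle's output, which is the recursion the proposed lifecycle audits and recursion-transparency measures would need to surface.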
Related papers
- Federated Agentic AI for Wireless Networks: Fundamentals, Approaches, and Applications [60.721304295812445]
Federated learning (FL) has the potential to improve the overall loop of agentic AI. We first summarize the fundamentals of agentic AI and the mainstream FL types, then illustrate how each FL type can strengthen a specific component of the agentic AI loop. We conduct a case study on using federated reinforcement learning (FRL) to improve agentic AI's action decisions in low-altitude wireless networks.
arXiv Detail & Related papers (2026-03-02T11:26:56Z)
- Toward Carbon-Neutral Human AI: Rethinking Data, Computation, and Learning Paradigms for Sustainable Intelligence [2.7946918847372277]
This paper critiques the prevailing reliance on large-scale, static datasets and monolithic training paradigms. We introduce a novel framework, Human AI, which emphasizes incremental learning, carbon-aware optimization, and human-in-the-loop collaboration.
arXiv Detail & Related papers (2025-10-27T17:02:30Z)
- Edge General Intelligence Through World Models and Agentic AI: Fundamentals, Solutions, and Challenges [87.02855999212817]
Edge General Intelligence (EGI) represents a transformative evolution of edge computing, where distributed agents possess the capability to perceive, reason, and act autonomously. World models act as proactive internal simulators that not only predict but also actively imagine future trajectories, reason under uncertainty, and plan multi-step actions with foresight. This survey bridges the gap by offering a comprehensive analysis of how world models can empower agentic artificial intelligence (AI) systems at the edge.
arXiv Detail & Related papers (2025-08-13T07:29:40Z)
- Rethinking Data Protection in the (Generative) Artificial Intelligence Era [115.71019708491386]
We propose a four-level taxonomy that captures the diverse protection needs arising in modern (generative) AI models and systems. Our framework offers a structured understanding of the trade-offs between data utility and control, spanning the entire AI pipeline.
arXiv Detail & Related papers (2025-07-03T02:45:51Z)
- Contextual Memory Intelligence -- A Foundational Paradigm for Human-AI Collaboration and Reflective Generative AI Systems [0.0]
This paper introduces Contextual Memory Intelligence (CMI) as a new paradigm for building intelligent systems. CMI repositions memory as an adaptive infrastructure necessary for longitudinal coherence, explainability, and responsible decision-making. This enhances human-AI collaboration, generative AI design, and institutional resilience.
arXiv Detail & Related papers (2025-05-28T18:59:16Z)
- Responsible Data Stewardship: Generative AI and the Digital Waste Problem [0.0]
Generative AI systems enable the creation of synthetic data at unprecedented levels across text, image, audio, and video modalities. Digital waste refers to stored data that consumes resources without serving a specific (and/or immediate) purpose. This paper introduces digital waste as an ethical imperative within (generative) AI development, positioning environmental sustainability as a core concern for responsible innovation.
arXiv Detail & Related papers (2025-05-27T20:07:22Z)
- The Critical Canvas--How to regain information autonomy in the AI era [11.15944540843097]
The Critical Canvas is an information exploration platform designed to restore balance between algorithmic efficiency and human agency.
The platform transforms overwhelming technical information into actionable insights.
It enables more informed decision-making and effective policy development in the age of AI.
arXiv Detail & Related papers (2024-11-25T08:46:02Z)
- Artificial General Intelligence (AGI)-Native Wireless Systems: A Journey Beyond 6G [58.440115433585824]
Building future wireless systems that support services like digital twins (DTs) is difficult to achieve through advances in conventional technologies like meta-surfaces alone.
While artificial intelligence (AI)-native networks promise to overcome some limitations of wireless technologies, developments still rely on AI tools like neural networks.
This paper revisits the concept of AI-native wireless systems, equipping them with the common sense necessary to transform them into artificial general intelligence (AGI)-native systems.
arXiv Detail & Related papers (2024-04-29T04:51:05Z)
- Causal Reasoning: Charting a Revolutionary Course for Next-Generation AI-Native Wireless Networks [63.246437631458356]
Next-generation wireless networks (e.g., 6G) will be artificial intelligence (AI)-native.
This article introduces a novel framework for building AI-native wireless networks, grounded in the emerging field of causal reasoning.
We highlight several wireless networking challenges that can be addressed by causal discovery and representation.
arXiv Detail & Related papers (2023-09-23T00:05:39Z)
- AI Maintenance: A Robustness Perspective [91.28724422822003]
We highlight robustness challenges across the AI lifecycle and motivate AI maintenance by analogy to car maintenance.
We propose an AI model inspection framework to detect and mitigate robustness risks.
Our proposal for AI maintenance facilitates robustness assessment, status tracking, risk scanning, model hardening, and regulation throughout the AI lifecycle.
arXiv Detail & Related papers (2023-01-08T15:02:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.