Exploring Vulnerability in AI Industry
- URL: http://arxiv.org/abs/2510.23421v1
- Date: Mon, 27 Oct 2025 15:26:40 GMT
- Title: Exploring Vulnerability in AI Industry
- Authors: Claudio Pirrone, Stefano Fricano, Gioacchino Fazio,
- Abstract summary: Foundation Models (FMs) have achieved massive public adoption, fueling a turbulent market shaped by platform economics and intense investment. This paper proposes a synthetic AI Vulnerability Index (AIVI) focusing on the upstream value chain for FM production, prioritizing publicly available data. We model FM output as a function of five inputs: Compute, Data, Talent, Capital, and Energy, hypothesizing that supply vulnerability in any input threatens the industry. Despite limitations and room for improvement, this preliminary index aims to quantify systemic risks in AI's core production engine and implicitly shed light on the risks for the downstream value chain.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The rapid ascent of Foundation Models (FMs), enabled by the Transformer architecture, drives the current AI ecosystem. Characterized by large-scale training and downstream adaptability, FMs (such as the GPT family) have achieved massive public adoption, fueling a turbulent market shaped by platform economics and intense investment. Assessing the vulnerability of this fast-evolving industry is critical yet challenging due to data limitations. This paper proposes a synthetic AI Vulnerability Index (AIVI) focusing on the upstream value chain for FM production, prioritizing publicly available data. We model FM output as a function of five inputs: Compute, Data, Talent, Capital, and Energy, hypothesizing that supply vulnerability in any input threatens the industry. Key vulnerabilities include compute concentration, data scarcity and legal risks, talent bottlenecks, capital intensity and strategic dependencies, as well as escalating energy demands. Acknowledging imperfect input substitutability, we propose a weighted geometric average of aggregate subindexes, normalized using theoretical or empirical benchmarks. Despite limitations and room for improvement, this preliminary index aims to quantify systemic risks in AI's core production engine and implicitly shed light on the risks for the downstream value chain.
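The abstract's aggregation rule (a weighted geometric average of normalized subindexes) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the subindex values and equal weights below are hypothetical placeholders, and the paper's normalization against theoretical or empirical benchmarks is assumed to have already been applied.

```python
import math

def aivi(subindexes, weights):
    """Weighted geometric average of normalized vulnerability subindexes.

    subindexes: dict mapping input name -> normalized score in (0, 1]
    weights:    dict mapping input name -> weight; weights must sum to 1
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    # Geometric (rather than arithmetic) averaging reflects imperfect input
    # substitutability: a collapse in any single input drags the index down.
    return math.prod(s ** weights[k] for k, s in subindexes.items())

# Hypothetical, equally weighted subindex values, for illustration only
subindexes = {"compute": 0.7, "data": 0.5, "talent": 0.6,
              "capital": 0.4, "energy": 0.55}
weights = {k: 0.2 for k in subindexes}
print(round(aivi(subindexes, weights), 4))
```

Because the geometric mean is multiplicative, the illustrative index (about 0.54) sits below the arithmetic mean of the same scores (0.55), penalizing the weak Capital subindex more heavily.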
Related papers
- Improving AI Efficiency in Data Centres by Power Dynamic Response [74.12165648170894]
The steady growth of artificial intelligence (AI) has accelerated in recent years, facilitated by the development of sophisticated models. Ensuring robust and reliable power infrastructures is fundamental to taking advantage of the full potential of AI. However, AI data centres are extremely hungry for power, putting the problem of their power management in the spotlight.
arXiv Detail & Related papers (2025-10-13T08:08:21Z)
- Signature-Informed Transformer for Asset Allocation [9.290367832033063]
Signature-Informed Transformer (SIT) is a framework that learns end-to-end allocation policies by directly optimizing a risk-aware financial objective. Evaluated on daily S&P 100 equity data, SIT decisively outperforms traditional and deep-learning baselines. Results indicate that portfolio-aware objectives and geometry-aware inductive biases are essential for risk-aware capital allocation in machine-learning systems.
arXiv Detail & Related papers (2025-10-03T15:58:21Z) - The Economics of Information Pollution in the Age of AI: A General Equilibrium Approach to Welfare, Measurement, and Policy [4.887749221165767]
The advent of Large Language Models (LLMs) represents a fundamental shock to the economics of information production. By asymmetrically collapsing the marginal cost of generating low-quality, synthetic content while leaving high-quality production costly, AI systematically incentivizes information pollution. This paper develops a general equilibrium framework to analyze this challenge.
arXiv Detail & Related papers (2025-09-17T06:31:17Z) - Openness in AI and downstream governance: A global value chain approach [0.0]
Openness in AI highlights an emerging ecosystem of open AI models, datasets and toolchains. It poses questions as to whether open resources can support technological transfer and the ability to catch up, even in the face of AI industry power. This work extends previous mapping of AI value chains to build a framework which links foundational AI with downstream value chains.
arXiv Detail & Related papers (2025-09-12T13:12:09Z) - Identifying Trustworthiness Challenges in Deep Learning Models for Continental-Scale Water Quality Prediction [69.38041171537573]
Water quality is foundational to environmental sustainability, ecosystem resilience, and public health. Deep learning offers transformative potential for large-scale water quality prediction and scientific insights generation. Their widespread adoption in high-stakes operational decision-making, such as pollution mitigation and equitable resource allocation, is prevented by unresolved trustworthiness challenges.
arXiv Detail & Related papers (2025-03-13T01:50:50Z) - Predicting Liquidity-Aware Bond Yields using Causal GANs and Deep Reinforcement Learning with LLM Evaluation [0.0]
We generate high-fidelity synthetic bond yield data for four major bond categories (AAA, BAA, US10Y,). We employ a fine-tuned Large Language Model (LLM), Qwen2.5-7B, that generates trading signals, risk assessments, and volatility projections. The reinforcement-learning-enhanced synthetic data generation achieves the lowest Mean Absolute Error of 0.103, demonstrating its effectiveness in replicating real-world bond market dynamics.
arXiv Detail & Related papers (2025-02-24T09:46:37Z) - AILuminate: Introducing v1.0 of the AI Risk and Reliability Benchmark from MLCommons [62.374792825813394]
This paper introduces AILuminate v1.0, the first comprehensive industry-standard benchmark for assessing AI-product risk and reliability. The benchmark evaluates an AI system's resistance to prompts designed to elicit dangerous, illegal, or undesirable behavior in 12 hazard categories.
arXiv Detail & Related papers (2025-02-19T05:58:52Z) - Towards Robust Stability Prediction in Smart Grids: GAN-based Approach under Data Constraints and Adversarial Challenges [53.2306792009435]
This paper introduces a novel framework for detecting instability in smart grids using only stable data. It achieves up to 98.1% accuracy in predicting grid stability and 98.9% in detecting adversarial attacks. Implemented on a single-board computer, it enables real-time decision-making with an average response time of under 7 ms.
arXiv Detail & Related papers (2025-01-27T20:48:25Z) - Credit Risk Identification in Supply Chains Using Generative Adversarial Networks [11.125130091872046]
This study explores the application of Generative Adversarial Networks (GANs) to enhance credit risk identification in supply chains. GANs enable the generation of synthetic credit risk scenarios, addressing challenges related to data scarcity and imbalanced datasets. By leveraging GAN-generated data, the model improves predictive accuracy while effectively capturing dynamic and temporal dependencies in supply chain data.
arXiv Detail & Related papers (2025-01-17T18:42:46Z) - Generative AI for Secure and Privacy-Preserving Mobile Crowdsensing [74.58071278710896]
Generative AI has attracted much attention from both academic and industrial fields.
Secure and privacy-preserving mobile crowdsensing (SPPMCS) has been widely applied in data collection and acquisition.
arXiv Detail & Related papers (2024-05-17T04:00:58Z)
- Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.