The Economics of Information Pollution in the Age of AI: A General Equilibrium Approach to Welfare, Measurement, and Policy
- URL: http://arxiv.org/abs/2509.13729v1
- Date: Wed, 17 Sep 2025 06:31:17 GMT
- Title: The Economics of Information Pollution in the Age of AI: A General Equilibrium Approach to Welfare, Measurement, and Policy
- Authors: Yukun Zhang, Tianyang Zhang
- Abstract summary: The advent of Large Language Models (LLMs) represents a fundamental shock to the economics of information production. By asymmetrically collapsing the marginal cost of generating low-quality, synthetic content while leaving high-quality production costly, AI systematically incentivizes information pollution. This paper develops a general equilibrium framework to analyze this challenge.
- Score: 4.887749221165767
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The advent of Large Language Models (LLMs) represents a fundamental shock to the economics of information production. By asymmetrically collapsing the marginal cost of generating low-quality, synthetic content while leaving high-quality production costly, AI systematically incentivizes information pollution. This paper develops a general equilibrium framework to analyze this challenge. We model the strategic interactions among a monopolistic platform, profit-maximizing producers, and utility-maximizing consumers in a three-stage game. The core of our model is a production technology with differential elasticities of substitution ($\sigma_L > 1 > \sigma_H$), which formalizes the insight that AI is a substitute for labor in low-quality production but a complement in high-quality creation. We prove the existence of a unique "Polluted Information Equilibrium" and demonstrate its inefficiency, which is driven by a threefold market failure: a production externality, a platform governance failure, and an information commons externality. Methodologically, we derive a theoretically-grounded Information Pollution Index (IPI) with endogenous welfare weights to measure ecosystem health. From a policy perspective, we show that a first-best outcome requires a portfolio of instruments targeting each failure. Finally, considering the challenges of deep uncertainty, we advocate for an adaptive governance framework where policy instruments are dynamically adjusted based on real-time IPI readings, offering a robust blueprint for regulating information markets in the age of AI.
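The abstract's key technical assumption, differential elasticities of substitution between labor and AI across quality tiers, can be sketched with a standard CES production technology. The functional form below is an illustrative reconstruction rather than the paper's exact specification; the symbols $A_j$, $\alpha_j$, $L_j$, and $K_{AI,j}$ (productivity, share parameter, labor, and AI capital) are assumptions.

```latex
% Illustrative CES technologies for low-quality (j = L) and
% high-quality (j = H) content, consistent with \sigma_L > 1 > \sigma_H:
Q_j = A_j \left[ \alpha_j L_j^{\frac{\sigma_j - 1}{\sigma_j}}
      + (1 - \alpha_j) K_{AI,j}^{\frac{\sigma_j - 1}{\sigma_j}}
      \right]^{\frac{\sigma_j}{\sigma_j - 1}},
      \qquad j \in \{L, H\}.
```

With $\sigma_L > 1$, labor and AI capital are gross substitutes in low-quality production (AI displaces labor), while $\sigma_H < 1$ makes them gross complements in high-quality creation; this asymmetry is the mechanism the abstract identifies as the driver of information pollution.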
Related papers
- Generative AI as a Non-Convex Supply Shock: Market Bifurcation and Welfare Analysis [4.887749221165767]
We show how the GenAI cost shock bifurcates the market into exit, AI, and human segments, generating a hollowed-out "middle class" of output. We conclude that optimal governance must shift from laissez-faire toward congestion management.
arXiv Detail & Related papers (2026-01-18T17:00:40Z) - A Sustainable AI Economy Needs Data Deals That Work for Generators [56.949279542190084]
We argue that the machine learning value chain is structurally unsustainable due to an economic data processing inequality. We analyze 73 public data deals and show that the majority of value accrues to aggregators. We propose an Equitable Data-Value Exchange Framework to enable a minimal market that benefits all participants.
arXiv Detail & Related papers (2026-01-15T01:05:48Z) - Governance of Technological Transition: A Predator-Prey Analysis of AI Capital in China's Economy and Its Policy Implications [7.7994612323406765]
The rapid integration of Artificial Intelligence into China's economy presents a classic governance challenge. This study addresses this policy dilemma by modeling the dynamic interactions between AI capital, physical capital, and labor. Our results reveal a consistent pattern where AI capital acts as the 'prey', stimulating both physical capital accumulation and labor compensation (wage bill). The sensitivity analysis shows that the labor market equilibrium is overwhelmingly driven by AI-related parameters.
arXiv Detail & Related papers (2026-01-07T03:30:46Z) - Exploring Vulnerability in AI Industry [0.0]
Foundation Models (FMs) have achieved massive public adoption, fueling a turbulent market shaped by platform economics and intense investment. This paper proposes a synthetic AI Vulnerability Index (AIVI) focusing on the upstream value chain for FM production, prioritizing publicly available data. We model FM output as a function of five inputs: Compute, Data, Talent, Capital, and Energy, hypothesizing that supply vulnerability in any input threatens the industry. Despite limitations and room for improvement, this preliminary index aims to quantify systemic risks in AI's core production engine and implicitly sheds light on the risks for the downstream value chain.
arXiv Detail & Related papers (2025-10-27T15:26:40Z) - AI Product Value Assessment Model: An Interdisciplinary Integration Based on Information Theory, Economics, and Psychology [5.57756598733474]
This paper develops a multi-dimensional evaluation model that integrates information theory's entropy reduction principle, economics' bounded rationality framework, and psychology's irrational decision theories to quantify AI product value. A non-linear formula captures factor couplings, and validation through 10 commercial cases demonstrates the model's effectiveness in distinguishing successful and failed products.
arXiv Detail & Related papers (2025-08-22T15:51:14Z) - Cost-Optimal Active AI Model Evaluation [71.2069549142394]
Development of generative AI systems requires continual evaluation, data acquisition, and annotation. We develop novel, cost-aware methods for actively balancing the use of a cheap, but often inaccurate, weak rater against a more accurate but expensive strong rater. We derive a family of cost-optimal policies for allocating a given annotation budget between weak and strong raters.
arXiv Detail & Related papers (2025-06-09T17:14:41Z) - Predicting Liquidity-Aware Bond Yields using Causal GANs and Deep Reinforcement Learning with LLM Evaluation [0.0]
We generate high-fidelity synthetic bond yield data for four major bond categories (AAA, BAA, US10Y,). We employ a fine-tuned Large Language Model (LLM), Qwen2.5-7B, that generates trading signals, risk assessments, and volatility projections. The reinforcement learning-enhanced synthetic data generation achieves the lowest Mean Absolute Error of 0.103, demonstrating its effectiveness in replicating real-world bond market dynamics.
arXiv Detail & Related papers (2025-02-24T09:46:37Z) - Modelling of Economic Implications of Bias in AI-Powered Health Emergency Response Systems [0.0]
We analyze how algorithmic bias affects resource allocation, health outcomes, and social welfare.
We propose mitigation strategies, including fairness-constrained optimization, algorithmic adjustments, and policy interventions.
arXiv Detail & Related papers (2024-10-26T17:11:23Z) - Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - Fairness And Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, And Mitigation Strategies [11.323961700172175]
This survey paper offers a succinct, comprehensive overview of fairness and bias in AI.
We review sources of bias, such as data, algorithm, and human decision biases.
We assess the societal impact of biased AI systems, focusing on the perpetuation of inequalities and the reinforcement of harmful stereotypes.
arXiv Detail & Related papers (2023-04-16T03:23:55Z) - Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z) - On the Opportunities and Risks of Foundation Models [256.61956234436553]
We call these models foundation models to underscore their critically central yet incomplete character.
This report provides a thorough account of the opportunities and risks of foundation models.
To tackle these questions, we believe much of the critical research on foundation models will require deep interdisciplinary collaboration.
arXiv Detail & Related papers (2021-08-16T17:50:08Z) - Building a Foundation for Data-Driven, Interpretable, and Robust Policy Design using the AI Economist [67.08543240320756]
We show that the AI Economist framework enables effective, flexible, and interpretable policy design using two-level reinforcement learning and data-driven simulations.
We find that log-linear policies trained using RL significantly improve social welfare, based on both public health and economic outcomes, compared to past outcomes.
arXiv Detail & Related papers (2021-08-06T01:30:41Z) - Solving Heterogeneous General Equilibrium Economic Models with Deep Reinforcement Learning [0.0]
General equilibrium macroeconomic models are a core tool used by policymakers to understand a nation's economy.
We use techniques from reinforcement learning to solve such models in a way that is simple, heterogeneous, and computationally efficient.
arXiv Detail & Related papers (2021-03-31T10:55:10Z) - Supercharging Imbalanced Data Learning With Energy-based Contrastive Representation Transfer [72.5190560787569]
In computer vision, learning from long tailed datasets is a recurring theme, especially for natural image datasets.
Our proposal posits a meta-distributional scenario, where the data generating mechanism is invariant across the label-conditional feature distributions.
This allows us to leverage a causal data inflation procedure to enlarge the representation of minority classes.
arXiv Detail & Related papers (2020-11-25T00:13:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.