Variance reduction in output from generative AI
- URL: http://arxiv.org/abs/2503.01033v1
- Date: Sun, 02 Mar 2025 21:34:10 GMT
- Title: Variance reduction in output from generative AI
- Authors: Yu Xie, Yueqi Xie
- Abstract summary: We demonstrate that generative AI models are inherently prone to the phenomenon of "regression toward the mean". We discuss potential social implications of this phenomenon across three levels (societal, group, and individual) and two dimensions (material and non-material).
- Score: 11.248899695350323
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative AI models, such as ChatGPT, will increasingly replace humans in producing output for a variety of important tasks. While much prior work has mostly focused on the improvement in the average performance of generative AI models relative to humans' performance, much less attention has been paid to the significant reduction of variance in output produced by generative AI models. In this Perspective, we demonstrate that generative AI models are inherently prone to the phenomenon of "regression toward the mean" whereby variance in output tends to shrink relative to that in real-world distributions. We discuss potential social implications of this phenomenon across three levels (societal, group, and individual) and two dimensions (material and non-material). Finally, we discuss interventions to mitigate negative effects, considering the roles of both service providers and users. Overall, this Perspective aims to raise awareness of the importance of output variance in generative AI and to foster collaborative efforts to meet the challenges posed by the reduction of variance in output generated by AI models.
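The variance-shrinkage effect described in the abstract can be illustrated with a toy sketch (the distribution and numbers below are hypothetical, not from the paper): sampling from a temperature-scaled categorical distribution, a common decoding heuristic in generative models, concentrates probability mass on the mode, so the generated samples have smaller variance than the real-world distribution they were fit to.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "real-world" distribution over 5 numeric outcomes.
p = np.array([0.05, 0.15, 0.4, 0.3, 0.1])
values = np.arange(5.0)

def temper(probs, T):
    # Temperature scaling: p_i^(1/T) normalized; T < 1 sharpens toward the mode.
    w = probs ** (1.0 / T)
    return w / w.sum()

def sample_var(probs, n=100_000):
    # Empirical variance of n draws from the given categorical distribution.
    return rng.choice(values, size=n, p=probs).var()

real_var = sample_var(p)              # variance of the source distribution
gen_var = sample_var(temper(p, 0.5))  # variance under low-temperature sampling

print(f"real-world variance: {real_var:.3f}")
print(f"generated variance:  {gen_var:.3f}")
```

With T = 0.5 the tempered distribution puts most of its mass on the modal outcome, and the empirical variance drops to roughly half the source variance: a minimal analogue of "regression toward the mean" in generated output.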
Related papers
- Group Selection as a Safeguard Against AI Substitution [0.28029990367346164]
Reliance on generative AI can reduce cultural variance and diversity, especially in creative work. This reduction in variance has already led to problems in model performance, including model collapse and hallucination. Using an agent-based model and evolutionary game theory, we compare two types of AI use: complement and substitute.
arXiv Detail & Related papers (2026-02-03T13:56:47Z) - Diversity Has Always Been There in Your Visual Autoregressive Models [78.27363151940996]
Visual Autoregressive (VAR) models have recently garnered significant attention for their innovative next-scale prediction paradigm. Despite their efficiency, VAR models often suffer from diversity collapse, analogous to that observed in few-step distilled diffusion models. We introduce Diverse VAR, a simple yet effective approach that restores the generative diversity of VAR models without requiring any additional training.
arXiv Detail & Related papers (2025-11-21T09:24:09Z) - Socio-Economic Model of AI Agents [6.345776306229298]
We study the impact of AI collaboration under resource constraints on aggregate social output. We find that the introduction of AI agents can significantly increase aggregate social output.
arXiv Detail & Related papers (2025-09-27T11:56:48Z) - Meek Models Shall Inherit the Earth [1.9647223141071104]
The past decade has seen incredible scaling of AI systems by a few companies, leading to inequality in AI model performance. This paper argues that, contrary to prevailing intuition, the diminishing returns to compute scaling will lead to a convergence of AI model capabilities.
arXiv Detail & Related papers (2025-07-10T17:10:07Z) - Generative Models in Decision Making: A Survey [63.68746774576147]
Generative models can be incorporated into decision-making systems by generating trajectories that guide agents toward high-reward state-action regions or intermediate sub-goals.
This paper presents a comprehensive review of the application of generative models in decision-making tasks.
arXiv Detail & Related papers (2025-02-24T12:31:28Z) - Generative Models, Humans, Predictive Models: Who Is Worse at High-Stakes Decision Making? [10.225573060836478]
Large generative models (LMs) are already being used for decision making tasks that were previously done by predictive models or humans. We put popular LMs to the test in a high-stakes decision making task: recidivism prediction.
arXiv Detail & Related papers (2024-10-20T19:00:59Z) - "I Am the One and Only, Your Cyber BFF": Understanding the Impact of GenAI Requires Understanding the Impact of Anthropomorphic AI [55.99010491370177]
We argue that we cannot thoroughly map the social impacts of generative AI without mapping the social impacts of anthropomorphic AI.
Anthropomorphic AI systems are increasingly prone to generating outputs that are perceived to be human-like.
arXiv Detail & Related papers (2024-10-11T04:57:41Z) - Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI [67.58673784790375]
We argue that the 'bigger is better' AI paradigm is not only fragile scientifically, but comes with undesirable consequences. First, it is not sustainable, as, despite efficiency improvements, its compute demands increase faster than model performance. Second, it implies focusing on certain problems at the expense of others, leaving aside important applications, e.g. health, education, or the climate.
arXiv Detail & Related papers (2024-09-21T14:43:54Z) - Measuring Human Contribution in AI-Assisted Content Generation [66.06040950325969]
This study raises the research question of measuring human contribution in AI-assisted content generation. By calculating mutual information between human input and AI-assisted output relative to self-information of AI-assisted output, we quantify the proportional information contribution of humans in content generation.
arXiv Detail & Related papers (2024-08-27T05:56:04Z) - When AI Eats Itself: On the Caveats of AI Autophagy [18.641925577551557]
The AI autophagy phenomenon suggests a future where generative AI systems may increasingly consume their own outputs without discernment.
This study examines the existing literature, delving into the consequences of AI autophagy, analyzing the associated risks, and exploring strategies to mitigate its impact.
arXiv Detail & Related papers (2024-05-15T13:50:23Z) - MONAL: Model Autophagy Analysis for Modeling Human-AI Interactions [11.972017738888825]
We propose Model Autophagy Analysis (MONAL) for large models' self-consumption explanation.
MONAL employs two distinct autophagous loops to elucidate the suppression of human-generated information in the exchange between human and AI systems.
We evaluate the capacities of generated models as both creators and disseminators of information.
arXiv Detail & Related papers (2024-02-17T13:02:54Z) - FIMBA: Evaluating the Robustness of AI in Genomics via Feature Importance Adversarial Attacks [0.0]
This paper demonstrates the vulnerability of AI models often utilized in downstream tasks on recognized public genomics datasets.
We undermine model robustness by deploying an attack that focuses on input transformation while mimicking the real data and confusing the model decision-making.
Our empirical findings unequivocally demonstrate a decline in model performance, underscored by diminished accuracy and an upswing in false positives and false negatives.
arXiv Detail & Related papers (2024-01-19T12:04:31Z) - Exploration with Principles for Diverse AI Supervision [88.61687950039662]
Training large transformers using next-token prediction has given rise to groundbreaking advancements in AI.
While this generative AI approach has produced impressive results, it heavily leans on human supervision.
This strong reliance on human oversight poses a significant hurdle to the advancement of AI innovation.
We propose a novel paradigm termed Exploratory AI (EAI) aimed at autonomously generating high-quality training data.
arXiv Detail & Related papers (2023-10-13T07:03:39Z) - Human-AI Interactions and Societal Pitfalls [3.4471935446780355]
When working with generative artificial intelligence (AI), users may see productivity gains, but the AI-generated content may not match their preferences exactly. We show that the interplay between individual-level decisions and AI training may lead to societal challenges.
arXiv Detail & Related papers (2023-09-19T09:09:59Z) - Your Autoregressive Generative Model Can be Better If You Treat It as an Energy-Based One [83.5162421521224]
We propose a unique method termed E-ARM for training autoregressive generative models.
E-ARM takes advantage of a well-designed energy-based learning objective.
We show that E-ARM can be trained efficiently and is capable of alleviating the exposure bias problem.
arXiv Detail & Related papers (2022-06-26T10:58:41Z) - Predictability and Surprise in Large Generative Models [8.055204456718576]
Large-scale pre-training has emerged as a technique for creating capable, general purpose, generative models.
In this paper, we highlight a counterintuitive property of such models and discuss the policy implications of this property.
arXiv Detail & Related papers (2022-02-15T23:21:23Z) - Variational Auto-Encoder Architectures that Excel at Causal Inference [26.731576721694648]
Estimating causal effects from observational data is critical for making many types of decisions.
One approach to address this task is to learn decomposed representations of the underlying factors of data.
In this paper, we take a generative approach that builds on the recent advances in Variational Auto-Encoders.
arXiv Detail & Related papers (2021-11-11T22:37:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.