Entropy Collapse: A Universal Failure Mode of Intelligent Systems
- URL: http://arxiv.org/abs/2512.12381v1
- Date: Sat, 13 Dec 2025 16:12:27 GMT
- Title: Entropy Collapse: A Universal Failure Mode of Intelligent Systems
- Authors: Truong Xuan Khanh, Truong Quynh Hoa
- Abstract summary: We show that intelligent systems undergo a sharp transition from high-entropy adaptive regimes to low-entropy collapsed regimes. We analytically establish critical thresholds, dynamical irreversibility, and attractor structure. This framework unifies diverse phenomena -- model collapse in AI, institutional sclerosis in economics, and genetic bottlenecks in evolution.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Intelligent systems are widely assumed to improve through learning, coordination, and optimization. However, across domains -- from artificial intelligence to economic institutions and biological evolution -- increasing intelligence often precipitates paradoxical degradation: systems become rigid, lose adaptability, and fail unexpectedly. We identify entropy collapse as a universal dynamical failure mode arising when feedback amplification outpaces bounded novelty regeneration. Under minimal domain-agnostic assumptions, we show that intelligent systems undergo a sharp transition from high-entropy adaptive regimes to low-entropy collapsed regimes. Collapse is formalized as convergence toward a stable low-entropy manifold, not a zero-entropy state, implying a contraction of effective adaptive dimensionality rather than loss of activity or scale. We analytically establish critical thresholds, dynamical irreversibility, and attractor structure and demonstrate universality across update mechanisms through minimal simulations. This framework unifies diverse phenomena -- model collapse in AI, institutional sclerosis in economics, and genetic bottlenecks in evolution -- as manifestations of the same underlying process. By reframing collapse as a structural cost of intelligence, our results clarify why late-stage interventions systematically fail and motivate entropy-aware design principles for sustaining long-term adaptability in intelligent systems. Keywords: entropy collapse; intelligent systems; feedback amplification; phase transitions; effective dimensionality; complex systems; model collapse; institutional sclerosis
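The core mechanism in the abstract -- feedback amplification outpacing bounded novelty regeneration -- can be illustrated with a minimal simulation in the spirit of the paper's "minimal simulations." The update rule below (power-law amplification plus uniform mixing) is an assumption chosen for illustration, not the authors' actual model:

```python
import numpy as np

def simulate(alpha, eps, k=100, steps=500, seed=0):
    """Toy entropy-collapse dynamics on a distribution over k states:
    feedback amplification (exponent alpha) vs. bounded novelty
    regeneration (uniform mixing at rate eps)."""
    rng = np.random.default_rng(seed)
    p = rng.dirichlet(np.ones(k))
    for _ in range(steps):
        p = p ** (1.0 + alpha)          # feedback amplifies already-likely states
        p /= p.sum()
        p = (1 - eps) * p + eps / k     # bounded novelty injection
    return float(-np.sum(p * np.log(p)))  # Shannon entropy in nats

# Weak amplification stays in the high-entropy adaptive regime;
# strong amplification collapses to a low- (but nonzero-) entropy attractor.
print(simulate(alpha=0.01, eps=0.05))
print(simulate(alpha=0.50, eps=0.05))
```

Near the uniform state, one update contracts deviations by roughly (1 + alpha)(1 - eps), so the sharp transition sits where this factor crosses 1 -- consistent with the critical-threshold picture, and the mixing floor eps/k keeps the collapsed state at nonzero entropy, matching the abstract's "low-entropy manifold, not a zero-entropy state."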
Related papers
- Evolutionary Systems Thinking -- From Equilibrium Models to Open-Ended Adaptive Dynamics [1.2691047660244335]
Complex change is often described as "evolutionary" in economics, policy, and technology. This paper argues that evolutionary dynamics should be treated as a core systems-thinking problem rather than as a biological metaphor.
arXiv Detail & Related papers (2026-02-17T19:17:50Z) - The Devil Behind Moltbook: Anthropic Safety is Always Vanishing in Self-Evolving AI Societies [57.387081435669835]
Multi-agent systems built from large language models offer a promising paradigm for scalable collective intelligence and self-evolution. We show that an agent society satisfying continuous self-evolution, complete isolation, and safety invariance is impossible. We propose several solution directions to alleviate the identified safety concern.
arXiv Detail & Related papers (2026-02-10T15:18:19Z) - Logic-Driven Semantic Communication for Resilient Multi-Agent Systems [26.964933264412412]
6G networks are accelerating autonomy and intelligence in large-scale, decentralized multi-agent systems. This article proposes a formal definition of MAS resilience grounded in two complementary dimensions. We design an agent architecture and develop decentralized algorithms to achieve both epistemic and action resilience.
arXiv Detail & Related papers (2026-01-11T00:54:09Z) - Random-Matrix-Induced Simplicity Bias in Over-parameterized Variational Quantum Circuits [72.0643009153473]
We show that expressive variational ansätze enter a Haar-like universality class in which both observable expectation values and parameter gradients concentrate exponentially with system size. As a consequence, the hypothesis class induced by such circuits collapses with high probability to a narrow family of near-constant functions. We further show that this collapse is not unavoidable: tensor-structured VQCs, including tensor-network-based and tensor-hypernetwork parameterizations, lie outside the Haar-like universality class.
arXiv Detail & Related papers (2026-01-05T08:04:33Z) - Dynamic Feedback Engines: Layer-Wise Control for Self-Regulating Continual Learning [55.854208296248714]
We propose an entropy-aware continual learning method that employs a dynamic feedback mechanism to regulate each layer based on its entropy. Our approach reduces entropy in high-entropy layers to mitigate underfitting and increases entropy in overly confident layers to alleviate overfitting. Experiments on various datasets demonstrate substantial performance gains over state-of-the-art continual learning baselines.
arXiv Detail & Related papers (2025-12-25T17:27:43Z) - The Red Queen's Trap: Limits of Deep Evolution in High-Frequency Trading [1.9290392443571385]
"Galaxy Empire" is a hybrid framework coupling LSTM/Transformer-based perception with a genetic "Time-is-Life" survival mechanism. We observed a catastrophic divergence between training metrics and live performance. Our findings provide empirical evidence that increasing asymmetry in the absence of information exacerbates systemic fragility.
arXiv Detail & Related papers (2025-12-05T19:30:26Z) - Rethinking Entropy Interventions in RLVR: An Entropy Change Perspective [11.65148836911294]
Entropy collapse is a rapid loss of policy diversity, stemming from the exploration-exploitation imbalance and leading to a lack of generalization. Recent entropy-intervention methods aim to prevent entropy collapse, yet their underlying mechanisms remain unclear. We introduce an entropy-change-aware reweighting scheme, namely Stabilizing Token-level Entropy-changE via Reweighting (STEER).
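As a rough illustration of the entropy-change perspective -- not STEER's actual algorithm, whose details are not given in this summary -- one can compute token-level policy entropy from logits and down-weight tokens whose entropy dropped between updates; `entropy_change_weights` and its `tau` temperature are hypothetical names introduced here:

```python
import numpy as np

def token_entropy(logits):
    # Shannon entropy of the softmax distribution at each token position.
    z = logits - logits.max(axis=-1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

def entropy_change_weights(logits_old, logits_new, tau=1.0):
    # Hypothetical reweighting: tokens whose entropy fell the most get
    # the smallest weights, damping updates that drive entropy collapse.
    dh = token_entropy(logits_new) - token_entropy(logits_old)
    w = np.exp(np.minimum(dh, 0.0) / tau)   # entropy fell (dh < 0) -> w < 1
    return w / w.mean()                     # normalize to mean weight 1
```

For example, if one token's distribution sharpens from uniform to near-deterministic while the others are unchanged, that token receives a below-average weight.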
arXiv Detail & Related papers (2025-10-11T10:17:38Z) - Neural Thermodynamics: Entropic Forces in Deep and Universal Representation Learning [12.77092262246859]
We propose a rigorous entropic-force theory for understanding the learning dynamics of neural networks trained with gradient descent. We show that representation learning is crucially governed by emergent entropic forces arising from singularity and discrete-time updates. These forces systematically break continuous parameter symmetries and preserve discrete ones, leading to a series of gradient balance phenomena.
arXiv Detail & Related papers (2025-05-18T12:25:42Z) - Self-Organizing Graph Reasoning Evolves into a Critical State for Continuous Discovery Through Structural-Semantic Dynamics [0.0]
We show how agentic graph reasoning systems spontaneously evolve toward a critical state that sustains continuous semantic discovery. We identify a subtle yet robust regime in which semantic entropy dominates over structural entropy. Our findings provide practical strategies for engineering intelligent systems with intrinsic capacities for long-term discovery and adaptation.
arXiv Detail & Related papers (2025-03-24T16:30:37Z) - Disentangling the Causes of Plasticity Loss in Neural Networks [55.23250269007988]
We show that loss of plasticity can be decomposed into multiple independent mechanisms.
We show that a combination of layer normalization and weight decay is highly effective at maintaining plasticity in a variety of synthetic nonstationary learning tasks.
arXiv Detail & Related papers (2024-02-29T00:02:33Z) - Incorporating Neuro-Inspired Adaptability for Continual Learning in Artificial Intelligence [59.11038175596807]
Continual learning aims to empower artificial intelligence with strong adaptability to the real world.
Existing advances mainly focus on preserving memory stability to overcome catastrophic forgetting.
We propose a generic approach that appropriately attenuates old memories in parameter distributions to improve learning plasticity.
arXiv Detail & Related papers (2023-08-29T02:43:58Z) - Stabilizing Transformer Training by Preventing Attention Entropy Collapse [56.45313891694746]
We investigate the training dynamics of Transformers by examining the evolution of the attention layers.
We show that σReparam successfully prevents entropy collapse in the attention layers, promoting more stable training.
We conduct experiments with σReparam on image classification, image self-supervised learning, machine translation, speech recognition, and language modeling tasks.
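σReparam controls attention sharpness by reparameterizing weight matrices through their spectral norm, which keeps attention distributions from collapsing into near-deterministic maps. A minimal single-head sketch, assuming a toy scaling factor of 10 to provoke sharp attention; the spectral norm is taken via `np.linalg.norm(..., ord=2)`:

```python
import numpy as np

def attention_entropy(q, k):
    # Mean Shannon entropy of the softmax attention rows; near-zero
    # entropy means each query attends to a single key (collapse).
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)
    a = np.exp(scores)
    a /= a.sum(axis=-1, keepdims=True)
    return float(-(a * np.log(a + 1e-12)).sum(axis=-1).mean())

def sigma_reparam(w, gamma=1.0):
    # sigma-Reparam-style reparameterization: divide a weight matrix by
    # its spectral norm so its largest singular value equals gamma.
    return gamma * w / np.linalg.norm(w, ord=2)

rng = np.random.default_rng(0)
w_q = rng.normal(size=(16, 16)) * 10.0   # large spectral norm -> sharp attention
x = rng.normal(size=(8, 16))
print(attention_entropy(x @ w_q, x))                 # low attention entropy
print(attention_entropy(x @ sigma_reparam(w_q), x))  # higher attention entropy
```

Bounding the spectral norm bounds the attention logits, and hence bounds how far each attention row can drift from the uniform distribution.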
arXiv Detail & Related papers (2023-03-11T03:30:47Z) - Dynamics with autoregressive neural quantum states: application to critical quench dynamics [41.94295877935867]
We present an alternative general scheme that enables one to capture long-time dynamics of quantum systems in a stable fashion.
We apply the scheme to time-dependent quench dynamics by investigating the Kibble-Zurek mechanism in the two-dimensional quantum Ising model.
arXiv Detail & Related papers (2022-09-07T15:50:00Z) - Sensing quantum chaos through the non-unitary geometric phase [62.997667081978825]
We propose a decoherent mechanism for sensing quantum chaos.
The chaotic nature of a many-body quantum system is sensed by studying the implications that the system produces in the long-time dynamics of a probe coupled to it.
arXiv Detail & Related papers (2021-04-13T17:24:08Z) - Action Redundancy in Reinforcement Learning [54.291331971813364]
We show that transition entropy can be decomposed into two terms: model-dependent transition entropy and action redundancy.
Our results suggest that action redundancy is a fundamental problem in reinforcement learning.
arXiv Detail & Related papers (2021-02-22T19:47:26Z) - Entanglement revivals as a probe of scrambling in finite quantum systems [0.0]
While for integrable systems the height of the dip of the entanglement of an interval of fixed length decays as a power law with the total system size, upon breaking integrability a much faster decay is observed, signalling strong scrambling.
arXiv Detail & Related papers (2020-04-18T21:30:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.