The Red Queen's Trap: Limits of Deep Evolution in High-Frequency Trading
- URL: http://arxiv.org/abs/2512.15732v1
- Date: Fri, 05 Dec 2025 19:30:26 GMT
- Title: The Red Queen's Trap: Limits of Deep Evolution in High-Frequency Trading
- Authors: Yijia Chen
- Abstract summary: "Galaxy Empire" is a hybrid framework coupling LSTM/Transformer-based perception with a genetic "Time-is-Life" survival mechanism. We observed a catastrophic divergence between training metrics and live performance. Our findings provide empirical evidence that increasing model complexity in the absence of information asymmetry exacerbates systemic fragility.
- Score: 1.9290392443571385
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The integration of Deep Reinforcement Learning (DRL) and Evolutionary Computation (EC) is frequently hypothesized to be the "Holy Grail" of algorithmic trading, promising systems that adapt autonomously to non-stationary market regimes. This paper presents a rigorous post-mortem analysis of "Galaxy Empire," a hybrid framework coupling LSTM/Transformer-based perception with a genetic "Time-is-Life" survival mechanism. Deploying a population of 500 autonomous agents in a high-frequency cryptocurrency environment, we observed a catastrophic divergence between training metrics (Validation APY $>300\%$) and live performance (Capital Decay $>70\%$). We deconstruct this failure through a multi-disciplinary lens, identifying three critical failure modes: the overfitting of \textit{Aleatoric Uncertainty} in low-entropy time-series, the \textit{Survivor Bias} inherent in evolutionary selection under high variance, and the mathematical impossibility of overcoming microstructure friction without order-flow data. Our findings provide empirical evidence that increasing model complexity in the absence of information asymmetry exacerbates systemic fragility.
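The survivor-bias failure mode identified above lends itself to a minimal illustration (a sketch of the general phenomenon, not code from the paper; the agent count matches the abstract, but the volatility and horizon are arbitrary assumptions): when a population of agents trades pure noise, evolutionary selection of the top performer produces an impressive in-sample return that carries no genuine edge.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_periods = 500, 252

# Each agent's per-period return is pure noise: zero mean, 2% volatility.
returns = rng.normal(0.0, 0.02, size=(n_agents, n_periods))
cum = returns.sum(axis=1)

# Evolutionary selection keeps only the best performer.
best = cum.argmax()
in_sample = cum[best]

# The survivor faces fresh noise out of sample: its apparent edge is gone.
out_sample = rng.normal(0.0, 0.02, size=n_periods).sum()

print(f"best-of-{n_agents} in-sample return:  {in_sample:+.1%}")
print(f"same agent out-of-sample return: {out_sample:+.1%}")
```

Under pure noise the expected out-of-sample return is zero regardless of how strong the selected agent's track record looks, which is why selection under high variance systematically overstates fitness.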
Related papers
- Equivariant Evidential Deep Learning for Interatomic Potentials [55.6997213490859]
Uncertainty quantification is critical for assessing the reliability of machine learning interatomic potentials in molecular dynamics simulations. Existing UQ approaches for MLIPs are often limited by high computational cost or suboptimal performance. We propose \textit{Equivariant Evidential Deep Learning for Interatomic Potentials} ($\text{e}^2$IP), a backbone-agnostic framework that models atomic forces and their uncertainty jointly.
arXiv Detail & Related papers (2026-02-11T02:00:25Z) - The Devil Behind Moltbook: Anthropic Safety is Always Vanishing in Self-Evolving AI Societies [57.387081435669835]
Multi-agent systems built from large language models offer a promising paradigm for scalable collective intelligence and self-evolution. We show that an agent society satisfying continuous self-evolution, complete isolation, and safety invariance is impossible. We propose several solution directions to alleviate the identified safety concern.
arXiv Detail & Related papers (2026-02-10T15:18:19Z) - Multi-Scale Negative Coupled Information Systems (MNCIS): A Unified Spectral Topology Framework for Stability in Turbulence, AI, and Biology [1.4213973379473657]
This work generalizes the Multi-Scale Negative Coupled Information System (MNCIS) framework. Global stability requires an active topological operator -- Adaptive Spectral Negative Coupling (ASNC) -- functioning as a state-dependent high-pass filter. ASNC acts as a global-enstrophy adaptive sub-grid scale (SGS) model, stabilizing the inviscid limit and preserving the Kolmogorov $-5/3$ inertial range without artificial hyper-viscosity. Our results suggest that the MNCIS framework provides a base-independent topological condition for distinguishing viable complex systems from those collapsing into thermal equilibrium.
arXiv Detail & Related papers (2026-01-06T21:11:33Z) - On the Limits of Self-Improving in LLMs and Why AGI, ASI and the Singularity Are Not Near Without Symbolic Model Synthesis [0.01269104766024433]
We formalise self-training in Large Language Models (LLMs) and Generative AI as a discrete-time dynamical system. We derive two fundamental failure modes: (1) Entropy Decay, where finite sampling effects cause a monotonic loss of distributional diversity (mode collapse), and (2) Variance Amplification, where the loss of external grounding causes the model's representation of truth to drift as a random walk.
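The entropy-decay mode summarized above can be illustrated with a toy simulation (a generic sketch under assumed parameters, not the paper's model): a categorical distribution repeatedly re-estimated from finite samples of its own output loses diversity through sampling drift alone, with no external signal required.

```python
import numpy as np

rng = np.random.default_rng(1)

def entropy(p):
    """Shannon entropy in nats, ignoring zero-probability modes."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Start from a uniform distribution over 50 "modes".
p = np.full(50, 1 / 50)
history = [entropy(p)]

# Self-training loop: each generation re-estimates the distribution
# from a finite sample of its own outputs (no external grounding).
for _ in range(200):
    sample = rng.choice(len(p), size=200, p=p)
    counts = np.bincount(sample, minlength=len(p))
    p = counts / counts.sum()
    history.append(entropy(p))

print(f"entropy: {history[0]:.2f} -> {history[-1]:.2f}")
```

This is the same drift mechanism as genetic drift in finite populations: once a mode's empirical frequency hits zero it can never return, so entropy can only trend downward.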
arXiv Detail & Related papers (2026-01-05T19:50:49Z) - Random-Matrix-Induced Simplicity Bias in Over-parameterized Variational Quantum Circuits [72.0643009153473]
We show that expressive variational ansätze enter a Haar-like universality class in which both observable expectation values and parameter gradients concentrate exponentially with system size. As a consequence, the hypothesis class induced by such circuits collapses with high probability to a narrow family of near-constant functions. We further show that this collapse is not unavoidable: tensor-structured VQCs, including tensor-network-based and tensor-hypernetwork parameterizations, lie outside the Haar-like universality class.
arXiv Detail & Related papers (2026-01-05T08:04:33Z) - Entropy Collapse: A Universal Failure Mode of Intelligent Systems [0.0]
We show that intelligent systems undergo a sharp transition from high-entropy adaptive regimes to low-entropy collapsed regimes. We analytically establish critical thresholds, dynamical irreversibility, and attractor structure. This framework unifies diverse phenomena -- model collapse in AI, institutional sclerosis in economics, and genetic bottlenecks in evolution.
arXiv Detail & Related papers (2025-12-13T16:12:27Z) - Explainable Heterogeneous Anomaly Detection in Financial Networks via Adaptive Expert Routing [9.3237091894548]
Existing detectors treat all anomalies uniformly, producing scores without revealing which mechanism is failing. We address this via adaptive graph learning with specialized expert networks that provide built-in interpretability. We achieve 92.3% detection of 13 major events with a 3.8-day lead time, outperforming the best baseline by 30.8 percentage points.
arXiv Detail & Related papers (2025-10-20T01:30:41Z) - Flow based approach for Dynamic Temporal Causal models with non-Gaussian or Heteroscedastic Noises [37.02662517645979]
We introduce FANTOM, a unified framework for causal discovery. It handles non-stationary processes along with non-Gaussian and heteroscedastic noises. It simultaneously infers the number of regimes and their corresponding indices, and learns each regime's Directed Acyclic Graph.
arXiv Detail & Related papers (2025-06-20T15:12:43Z) - Obtaining continuum physics from dynamical simulations of Hamiltonian lattice gauge theories [0.0]
We introduce a new framework for rigorously controlling the impact of approximate time evolution on the continuum limit. We show that the SBTE protocol, which prescribes driving the approximate time-evolution error below the working statistical uncertainty, leads to a simplified renormalization procedure.
arXiv Detail & Related papers (2025-06-19T19:28:21Z) - Uncertainty Quantification for Forward and Inverse Problems of PDEs via Latent Global Evolution [110.99891169486366]
We propose a method that integrates efficient and precise uncertainty quantification into a deep learning-based surrogate model.
Our method endows deep learning-based surrogate models with robust and efficient uncertainty quantification capabilities for both forward and inverse problems.
Our method excels at propagating uncertainty over extended auto-regressive rollouts, making it suitable for scenarios involving long-term predictions.
arXiv Detail & Related papers (2024-02-13T11:22:59Z) - Uncertainty-Aware Deep Attention Recurrent Neural Network for Heterogeneous Time Series Imputation [0.25112747242081457]
Missingness is ubiquitous in multivariate time series and poses an obstacle to reliable downstream analysis.
We propose DEep Attention Recurrent Imputation (DEARI), which jointly estimates missing values and their associated uncertainty.
Experiments show that DEARI surpasses the SOTA in diverse imputation tasks using real-world datasets.
arXiv Detail & Related papers (2024-01-04T13:21:11Z) - Capsa: A Unified Framework for Quantifying Risk in Deep Neural Networks [142.67349734180445]
Existing algorithms that provide risk-awareness to deep neural networks are complex and ad-hoc.
Here we present capsa, a framework for extending models with risk-awareness.
arXiv Detail & Related papers (2023-08-01T02:07:47Z) - Unbalanced Diffusion Schrödinger Bridge [71.31485908125435]
We introduce unbalanced DSBs which model the temporal evolution of marginals with arbitrary finite mass.
This is achieved by deriving the time reversal of differential equations with killing and birth terms.
We present two novel algorithmic schemes that comprise a scalable objective function for training unbalanced DSBs.
arXiv Detail & Related papers (2023-06-15T12:51:56Z) - Quantum Metric Unveils Defect Freezing in Non-Hermitian Systems [1.2289361708127877]
We study the dynamics of an exactly solvable non-Hermitian system, hosting both $\mathcal{PT}$-symmetric and $\mathcal{PT}$-broken modes.
In contrast to Hermitian systems, our study reveals that $\mathcal{PT}$-broken time evolution leads to defect freezing and hence the violation of adiabaticity.
arXiv Detail & Related papers (2023-01-05T19:00:00Z) - The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification.
arXiv Detail & Related papers (2022-05-20T10:10:32Z)
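For context on the Deep Evidential Regression entry above, a minimal sketch of the standard Normal-Inverse-Gamma decomposition it critiques (the closed forms are the commonly used ones; the example parameter values are arbitrary assumptions, and this is not code from the paper):

```python
def evidential_uncertainty(gamma, nu, alpha, beta):
    """Predictive moments under a Normal-Inverse-Gamma evidential head.

    gamma: predicted mean; nu: virtual observation count for the mean;
    alpha, beta: Inverse-Gamma shape/scale for the variance (alpha > 1).
    """
    aleatoric = beta / (alpha - 1)         # E[sigma^2]: irreducible data noise
    epistemic = beta / (nu * (alpha - 1))  # Var[mu]: model uncertainty
    return gamma, aleatoric, epistemic

# Arbitrary example values for illustration.
mean, alea, epi = evidential_uncertainty(gamma=0.0, nu=2.0, alpha=3.0, beta=4.0)
print(mean, alea, epi)  # 0.0 2.0 1.0
```

The critique summarized above is precisely that these closed forms behave as a tunable heuristic rather than a calibrated posterior, which is why the split should be interpreted with care.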
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.