Self-Organization and Spectral Mechanism of Attractor Landscapes in High-Capacity Kernel Hopfield Networks
- URL: http://arxiv.org/abs/2511.13053v4
- Date: Wed, 26 Nov 2025 04:18:43 GMT
- Title: Self-Organization and Spectral Mechanism of Attractor Landscapes in High-Capacity Kernel Hopfield Networks
- Authors: Akira Tamamori
- Abstract summary: Kernel-based learning can dramatically increase the storage capacity of Hopfield networks. We show that optimal performance is achieved by tuning the system to a spectral "Goldilocks zone" between rank collapse and diffusion.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Kernel-based learning methods can dramatically increase the storage capacity of Hopfield networks, yet the dynamical mechanism behind this enhancement remains poorly understood. We address this gap by unifying the geometric analysis of the attractor landscape with the spectral theory of kernel machines. Using a novel metric, "Pinnacle Sharpness," we first uncover a rich phase diagram of attractor stability, identifying a "Ridge of Optimization" where the network achieves maximal robustness under high-load conditions. Phenomenologically, this ridge is characterized by a "Force Antagonism," where a strong driving force is balanced by a collective feedback force. Theoretically, we reveal that this phenomenon arises from a specific reorganization of the weight spectrum, which we term "Spectral Concentration." Unlike a simple rank-1 collapse, our analysis shows that the network on the ridge self-organizes into a critical state: the leading eigenvalue is amplified to maximize global stability (Direct Force), while the trailing eigenvalues are preserved to maintain high memory capacity (Indirect Force). These findings provide a complete physical picture of how high-capacity associative memories are formed, demonstrating that optimal performance is achieved by tuning the system to a spectral "Goldilocks zone" between rank collapse and diffusion.
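The abstract does not specify the paper's exact learning rule or the "Pinnacle Sharpness" metric, but the general setting (a kernel-based associative memory whose weight spectrum can be inspected) can be sketched. The snippet below is an illustrative assumption, not the paper's method: it stores binary patterns with an RBF kernel and pseudo-inverse readout, checks that noisy recall works, and computes the eigenvalues of an effective weight matrix `W = X^T K^{-1} X`, where a "Spectral Concentration" regime would appear as an amplified leading eigenvalue over a preserved trailing bulk.

```python
# Illustrative sketch of a kernel associative memory; the kernel choice,
# readout rule, and all hyperparameters here are assumptions for exposition.
import numpy as np

rng = np.random.default_rng(0)
N, P = 64, 20                                   # neurons, stored patterns
X = rng.choice([-1.0, 1.0], size=(P, N))        # patterns as rows

def rbf_kernel(A, B, gamma=1.0 / 64):
    """Pairwise k(a, b) = exp(-gamma * ||a - b||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf_kernel(X, X)                            # P x P Gram matrix
K_inv = np.linalg.inv(K + 1e-9 * np.eye(P))     # small ridge for stability

def recall(s, steps=10):
    """Iterate the retrieval dynamics s <- sign(X^T K^{-1} k(X, s))."""
    for _ in range(steps):
        kappa = rbf_kernel(X, s[None, :])[:, 0]  # similarity to each pattern
        s = np.sign(X.T @ (K_inv @ kappa))
    return s

# A stored pattern with a few flipped bits should be cleaned up:
probe = X[0].copy()
flip = rng.choice(N, size=4, replace=False)
probe[flip] *= -1
recovered = recall(probe)
overlap = float((recovered * X[0]).mean())
print(overlap)

# Spectrum of the effective weight matrix: with P < N the matrix is
# low-rank, so a dominant leading eigenvalue sits above a trailing tail.
W = X.T @ K_inv @ X
eigs = np.sort(np.linalg.eigvalsh(W))[::-1]
print(eigs[0] > eigs[-1])
```

Note that exact stored patterns are fixed points of this rule by construction, since `K^{-1} k(X, x_mu)` is the indicator vector of pattern `mu`; the interesting question the paper addresses is how the spectrum of such weights organizes at high load.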
Related papers
- State Rank Dynamics in Linear Attention LLMs [37.607046806053035]
State Rank Stratification is characterized by a distinct spectral bifurcation among linear attention heads. Low-rank heads are indispensable for model reasoning, whereas high-rank heads exhibit significant redundancy. We propose Joint Rank-Norm Pruning, a zero-shot strategy that achieves a 38.9% reduction in KV-cache overhead while largely maintaining model accuracy.
arXiv Detail & Related papers (2026-02-02T15:00:42Z) - Random-Matrix-Induced Simplicity Bias in Over-parameterized Variational Quantum Circuits [72.0643009153473]
We show that expressive variational ansätze enter a Haar-like universality class in which both observable expectation values and parameter gradients concentrate exponentially with system size. As a consequence, the hypothesis class induced by such circuits collapses with high probability to a narrow family of near-constant functions. We further show that this collapse is not unavoidable: tensor-structured VQCs, including tensor-network-based and tensor-hypernetwork parameterizations, lie outside the Haar-like universality class.
arXiv Detail & Related papers (2026-01-05T08:04:33Z) - Spectral Concentration at the Edge of Stability: Information Geometry of Kernel Associative Memory [0.0]
We analyze the network dynamics on a statistical manifold, revealing that the Ridge corresponds to the "Edge of Stability." This unifies learning dynamics and capacity via the Minimum Description Length principle, offering a geometric theory of self-organized criticality.
arXiv Detail & Related papers (2025-11-28T11:14:15Z) - Learning by Steering the Neural Dynamics: A Statistical Mechanics Perspective [0.0]
We study how neural dynamics can support fully local, distributed learning. We propose a biologically plausible algorithm for supervised learning with any binary recurrent network.
arXiv Detail & Related papers (2025-10-13T22:28:34Z) - Higher-order Network phenomena of cascading failures in resilient cities [1.6858464664111417]
We introduce a framework that confronts higher-order network theory with empirical evidence from a large-scale, real-world multimodal transport network. Our findings confirm a fundamental duality: network integration enhances static robustness metrics but simultaneously creates the structural pathways for catastrophic cascades. We provide strong evidence that metrics derived from the network's static blueprint, encompassing both conventional low-order centrality and novel higher-order structural analyses, are fundamentally disconnected from, and thus poor predictors of, a system's dynamic functional resilience.
arXiv Detail & Related papers (2025-09-17T08:22:12Z) - A Tensor Network Framework for Lindbladian Spectra and Steady States [0.0]
We introduce a framework to compute not only steady states, but also low-lying excited states with unprecedented precision for large, driven quantum many-body systems. This method unlocks the capability of spectral analysis of generic open quantum many-body systems, suitable also for non-Markovian environments.
arXiv Detail & Related papers (2025-09-09T13:12:16Z) - Power Grid Control with Graph-Based Distributed Reinforcement Learning [60.49805771047161]
This work advances a graph-based distributed reinforcement learning framework for real-time, scalable grid management. A Graph Neural Network (GNN) is employed to encode the network's topological information within the single low-level agent's observation. Experiments on the Grid2Op simulation environment show the effectiveness of the approach.
arXiv Detail & Related papers (2025-09-02T22:17:25Z) - PowerGrow: Feasible Co-Growth of Structures and Dynamics for Power Grid Synthesis [75.14189839277928]
We present PowerGrow, a co-generative framework that significantly reduces computational overhead while maintaining operational validity. Experiments across benchmark settings show that PowerGrow outperforms prior diffusion models in fidelity and diversity. This demonstrates its ability to generate operationally valid and realistic power grid scenarios.
arXiv Detail & Related papers (2025-08-29T01:47:27Z) - Network Sparsity Unlocks the Scaling Potential of Deep Reinforcement Learning [57.3885832382455]
We show that introducing static network sparsity alone can unlock further scaling potential beyond dense counterparts with state-of-the-art architectures. Our analysis reveals that, in contrast to naively scaling up dense DRL networks, such sparse networks achieve both higher parameter efficiency and greater network expressivity.
arXiv Detail & Related papers (2025-06-20T17:54:24Z) - Transformers Learn Faster with Semantic Focus [57.97235825738412]
We study sparse transformers in terms of learnability and generalization. We find that input-dependent sparse attention models appear to converge faster and generalize better than standard attention models.
arXiv Detail & Related papers (2025-06-17T01:19:28Z) - Dissipation-Induced Threshold on Integrability Footprints [1.0159681653887238]
We study how signatures of integrability fade as dissipation strength increases. We estimate the critical dissipation threshold beyond which these clusters disappear. Our results provide a quantitative picture of how noise gradually erases the footprints of integrability in open quantum systems.
arXiv Detail & Related papers (2025-04-14T14:16:37Z) - A Dynamical Systems-Inspired Pruning Strategy for Addressing Oversmoothing in Graph Neural Networks [18.185834696177654]
Oversmoothing in Graph Neural Networks (GNNs) poses a significant challenge as network depth increases. We identify the root causes of oversmoothing and propose DYNAMO-GAT. Our theoretical analysis reveals how DYNAMO-GAT disrupts the convergence to oversmoothed states.
arXiv Detail & Related papers (2024-12-10T07:07:06Z) - Dynamical Mean-Field Theory of Self-Attention Neural Networks [0.0]
Transformer-based models have demonstrated exceptional performance across diverse domains.
Little is known about how they operate or what their expected dynamics are.
We use methods for the study of asymmetric Hopfield networks in nonequilibrium regimes.
arXiv Detail & Related papers (2024-06-11T13:29:34Z) - Hallmarks of Optimization Trajectories in Neural Networks: Directional Exploration and Redundancy [75.15685966213832]
We analyze the rich directional structure of optimization trajectories represented by their pointwise parameters.
We show that training only the scalar batchnorm parameters from partway into training matches the performance of training the entire network.
arXiv Detail & Related papers (2024-03-12T07:32:47Z) - On the Optimization and Generalization of Multi-head Attention [28.33164313549433]
We investigate the potential optimization and generalization advantages of using multiple attention heads.
We derive convergence and generalization guarantees for gradient-descent training of a single-layer multi-head self-attention model.
arXiv Detail & Related papers (2023-10-19T12:18:24Z) - Leveraging Low-Rank and Sparse Recurrent Connectivity for Robust Closed-Loop Control [63.310780486820796]
We show how a parameterization of recurrent connectivity influences robustness in closed-loop settings.
We find that closed-form continuous-time neural networks (CfCs) with fewer parameters can outperform their full-rank, fully-connected counterparts.
arXiv Detail & Related papers (2023-10-05T21:44:18Z) - Spreading of a local excitation in a Quantum Hierarchical Model [62.997667081978825]
We study the dynamics of the quantum Dyson hierarchical model in its paramagnetic phase.
An initial state made by a local excitation of the paramagnetic ground state is considered.
A localization mechanism is found and the excitation remains close to its initial position at arbitrary times.
arXiv Detail & Related papers (2022-07-14T10:05:20Z) - Convex Analysis of the Mean Field Langevin Dynamics [49.66486092259375]
A convergence rate analysis of the mean field Langevin dynamics is presented. The distribution $p_q$ associated with the dynamics allows us to develop a convergence theory parallel to classical results in convex optimization.
arXiv Detail & Related papers (2022-01-25T17:13:56Z) - Enhancement of quantum correlations and geometric phase for a driven bipartite quantum system in a structured environment [77.34726150561087]
We study the role of driving in an initial maximally entangled state evolving under a structured environment.
This knowledge can aid the search for physical setups that best retain quantum properties under dissipative dynamics.
arXiv Detail & Related papers (2021-03-18T21:11:37Z) - On dissipative symplectic integration with applications to gradient-based optimization [77.34726150561087]
We propose a geometric framework in which discretizations can be realized systematically.
We show that a generalization of symplectic to nonconservative and in particular dissipative Hamiltonian systems is able to preserve rates of convergence up to a controlled error.
arXiv Detail & Related papers (2020-04-15T00:36:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.