Learning Minimal Representations of Fermionic Ground States
- URL: http://arxiv.org/abs/2512.11767v1
- Date: Fri, 12 Dec 2025 18:26:05 GMT
- Title: Learning Minimal Representations of Fermionic Ground States
- Authors: Felix Frohnert, Emiel Koridon, Stefano Polla,
- Abstract summary: We introduce an unsupervised machine-learning framework that discovers optimally compressed representations of quantum many-body ground states. We identify minimal latent spaces with a sharp reconstruction quality threshold at $L-1$ latent dimensions, matching the system's intrinsic degrees of freedom.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce an unsupervised machine-learning framework that discovers optimally compressed representations of quantum many-body ground states. Using an autoencoder neural network architecture on data from $L$-site Fermi-Hubbard models, we identify minimal latent spaces with a sharp reconstruction quality threshold at $L-1$ latent dimensions, matching the system's intrinsic degrees of freedom. We demonstrate the use of the trained decoder as a differentiable variational ansatz to minimize energy directly within the latent space. Crucially, this approach circumvents the $N$-representability problem, as the learned manifold implicitly restricts the optimization to physically valid quantum states.
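The two-stage procedure described in the abstract is compact enough to sketch. The following is a minimal illustration, assuming a PyTorch setup; the layer widths, the flattened correlation-matrix input, and the external `energy_fn` (mapping a decoded state representation to an energy expectation value) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

L = 6                      # number of Fermi-Hubbard sites (illustrative)
input_dim = L * L          # e.g. a flattened one-body correlation matrix (assumption)
latent_dim = L - 1         # the compression threshold reported in the abstract

encoder = nn.Sequential(nn.Linear(input_dim, 64), nn.Tanh(),
                        nn.Linear(64, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(),
                        nn.Linear(64, input_dim))

def train_autoencoder(data, epochs=200):
    """Stage 1: unsupervised reconstruction of ground-state data."""
    opt = torch.optim.Adam(
        list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(decoder(encoder(data)), data)
        loss.backward()
        opt.step()

def minimize_energy(energy_fn, steps=500):
    """Stage 2: treat the frozen decoder as a differentiable variational
    ansatz and descend the energy directly in the latent space."""
    z = torch.zeros(latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=1e-2)    # only z is updated; decoder stays fixed
    for _ in range(steps):
        opt.zero_grad()
        energy_fn(decoder(z)).backward()    # gradient flows through the decoder
        opt.step()
    return z.detach()
```

Because the optimization variable is the latent vector `z` and every decoded output lies on the learned manifold, the latent-space search stays within physically valid states, which is how the approach sidesteps explicit $N$-representability constraints.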
Related papers
- Projective Kolmogorov Arnold Neural Networks (P-KANs): Entropy-Driven Functional Space Discovery for Interpretable Machine Learning [0.0]
Kolmogorov-Arnold Networks (KANs) relocate learnable nonlinearities from nodes to edges. Current KANs suffer from fundamental inefficiencies due to redundancy in high-dimensional spline parameter spaces. We introduce Projective Kolmogorov-Arnold Networks (P-KANs), a novel training framework that guides edge function discovery.
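The edge-wise nonlinearity that this summary refers to can be sketched directly. Below is a minimal PyTorch illustration of the generic KAN idea, with fixed Gaussian bumps standing in for a learnable spline basis; it does not include the P-KAN projection or entropy-driven training described in the paper.

```python
import torch
import torch.nn as nn

class KANEdgeLayer(nn.Module):
    """Each edge (input i -> output o) carries its own learnable 1-D function,
    here a weighted sum of fixed Gaussian bumps standing in for a spline."""
    def __init__(self, in_dim, out_dim, n_basis=8):
        super().__init__()
        self.register_buffer("centers", torch.linspace(-2.0, 2.0, n_basis))
        # one coefficient vector per edge: shape (out_dim, in_dim, n_basis)
        self.coeffs = nn.Parameter(0.1 * torch.randn(out_dim, in_dim, n_basis))

    def forward(self, x):                                        # x: (batch, in_dim)
        phi = torch.exp(-(x.unsqueeze(-1) - self.centers) ** 2)  # (batch, in, k)
        # every edge mixes the basis with its own coefficients; node o sums them
        return torch.einsum("bik,oik->bo", phi, self.coeffs)
```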
arXiv Detail & Related papers (2025-09-24T12:15:37Z)
- Deep Hierarchical Learning with Nested Subspace Networks [53.71337604556311]
We propose Nested Subspace Networks (NSNs) for large neural networks. NSNs enable a single model to be dynamically and granularly adjusted across a continuous spectrum of compute budgets. We show that NSNs can be surgically applied to pre-trained LLMs and unlock a smooth and predictable compute-performance frontier.
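The nesting idea can be illustrated with a single hidden layer whose smaller widths are prefixes of the full weight matrices, so one parameter set serves every compute budget. The slicing scheme below is an illustrative assumption, not the NSN construction itself.

```python
import torch
import torch.nn as nn

class NestedLinear(nn.Module):
    def __init__(self, in_dim, out_dim, hidden=256):
        super().__init__()
        self.w1 = nn.Parameter(0.02 * torch.randn(hidden, in_dim))
        self.w2 = nn.Parameter(0.02 * torch.randn(out_dim, hidden))

    def forward(self, x, width):
        # use only the first `width` hidden units; larger widths strictly
        # contain smaller ones, giving a spectrum of compute budgets
        h = torch.relu(x @ self.w1[:width].T)
        return h @ self.w2[:, :width].T
```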
arXiv Detail & Related papers (2025-09-22T15:13:14Z)
- Physics-Informed Graph Neural Networks for Transverse Momentum Estimation in CMS Trigger Systems [0.0]
Real-time particle transverse momentum ($p_T$) estimation in high-energy physics demands efficient algorithms under strict hardware constraints. We propose a physics-informed graph neural network (GNN) framework that systematically encodes detector geometry and physical observables. Our co-design methodology yields superior accuracy-efficiency trade-offs compared to existing baselines.
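One concrete reading of "encodes detector geometry" is geometric edge features entering the message function of a GNN. A minimal message-passing sketch in plain PyTorch; the hit features, the geometric edge attributes, and the residual update are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class GeometricMessagePassing(nn.Module):
    def __init__(self, node_dim=4, edge_dim=3, hidden=32):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * node_dim + edge_dim, hidden),
                                 nn.ReLU(), nn.Linear(hidden, node_dim))

    def forward(self, x, edge_index, edge_geom):
        # x: (n_hits, node_dim); edge_index: (2, n_edges) of hit indices;
        # edge_geom: (n_edges, edge_dim), e.g. angular/radial offsets between hits
        src, dst = edge_index
        m = self.msg(torch.cat([x[src], x[dst], edge_geom], dim=-1))
        agg = torch.zeros_like(x).index_add_(0, dst, m)  # sum messages per node
        return x + agg                                   # residual node update
```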
arXiv Detail & Related papers (2025-07-25T12:19:57Z)
- Lightweight Federated Learning over Wireless Edge Networks [83.4818741890634]
Federated learning (FL) is a promising alternative for training at the network edge, but faces unique challenges in wireless networks. We derive a closed-form expression for the FL convergence gap in terms of transmission power, model pruning error, and quantization error. LTFL outperforms state-of-the-art schemes in experiments on real-world datasets.
arXiv Detail & Related papers (2025-07-13T09:14:17Z)
- ACMamba: Fast Unsupervised Anomaly Detection via An Asymmetrical Consensus State Space Model [51.83639270669481]
Unsupervised anomaly detection in hyperspectral images (HSI) aims to detect unknown targets from backgrounds. HSI studies are hindered by steep computational costs due to the high-dimensional property of HSI and the dense sampling-based training paradigm. We propose an Asymmetrical Consensus State Space Model (ACMamba) to significantly reduce computational costs without compromising accuracy.
arXiv Detail & Related papers (2025-04-16T05:33:42Z)
- Global Convergence and Rich Feature Learning in $L$-Layer Infinite-Width Neural Networks under $μ$P Parametrization [66.03821840425539]
In this paper, we investigate the training dynamics of $L$-layer neural networks trained by stochastic gradient descent (SGD) within the tensor program framework. We show that SGD enables these networks to learn linearly independent features that substantially deviate from their initial values. This rich feature space captures relevant data information and ensures that any convergent point of the training process is a global minimum.
arXiv Detail & Related papers (2025-03-12T17:33:13Z)
- Automated quantum system modeling with machine learning [0.0]
We show that a machine learning algorithm is able to construct quantum models, given a straightforward set of quantum dynamics measurements.
We demonstrate through simulations of a Markovian open quantum system that a neural network can automatically detect the number $N$ of effective states.
arXiv Detail & Related papers (2024-09-27T15:18:20Z)
- Deep Neural Networks as Variational Solutions for Correlated Open Quantum Systems [0.0]
We show that parametrizing the density matrix directly with more powerful models can yield better variational ansatz functions.
We present results for the dissipative one-dimensional transverse-field Ising model and a two-dimensional dissipative Heisenberg model.
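A standard way to realize the direct density-matrix parametrization mentioned above is the factorization $\rho = M M^\dagger / \mathrm{Tr}(M M^\dagger)$, which guarantees positivity and unit trace by construction. A minimal sketch assuming PyTorch; the Hilbert-space dimension and network shape are illustrative, and this is not necessarily the exact ansatz used in the paper.

```python
import torch
import torch.nn as nn

dim = 8                                            # Hilbert-space dimension (assumption)
net = nn.Sequential(nn.Linear(4, 64), nn.Tanh(),
                    nn.Linear(64, 2 * dim * dim))  # real and imaginary parts of M

def density_matrix(params):
    out = net(params)                              # params: (4,) variational inputs
    re, im = out[: dim * dim], out[dim * dim :]
    M = torch.complex(re, im).reshape(dim, dim)
    rho = M @ M.conj().T                           # positive semidefinite by construction
    return rho / torch.trace(rho).real             # normalize to unit trace
```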
arXiv Detail & Related papers (2024-01-25T13:41:34Z)
- Towards Quantum Graph Neural Networks: An Ego-Graph Learning Approach [47.19265172105025]
We propose a novel hybrid quantum-classical algorithm for graph-structured data, which we refer to as the Ego-graph based Quantum Graph Neural Network (egoQGNN).
egoQGNN implements the GNN theoretical framework using the tensor product and unitary matrix representation, which greatly reduces the number of model parameters required.
The architecture is based on a novel mapping from real-world data to Hilbert space.
arXiv Detail & Related papers (2022-01-13T16:35:45Z)
- Efficient and Flexible Approach to Simulate Low-Dimensional Quantum Lattice Models with Large Local Hilbert Spaces [0.08594140167290096]
We introduce a mapping that allows one to construct artificial $U(1)$ symmetries for any type of lattice model.
Exploiting the generated symmetries, numerical expenses that are related to the local degrees of freedom decrease significantly.
Our findings motivate an intuitive physical picture of the truncations occurring in typical algorithms.
arXiv Detail & Related papers (2020-08-19T14:13:56Z)
- Neural Control Variates [71.42768823631918]
We show that a set of neural networks can meet the challenge of finding a good approximation of the integrand.
We derive a theoretically optimal, variance-minimizing loss function, and propose an alternative, composite loss for stable online training in practice.
Specifically, we show that the learned light-field approximation is of sufficient quality for high-order bounces, allowing us to omit the error correction and thereby dramatically reduce the noise at the cost of negligible visible bias.
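The control-variate construction can be made concrete with a toy one-dimensional example: fit an approximation $g$ of the integrand whose integral $G$ is known in closed form, then let Monte Carlo estimate only the residual $f - g$. The polynomial form of $g$, the toy integrand, and the squared-residual loss are illustrative assumptions; the paper derives a variance-minimizing loss for neural $g$.

```python
import torch

f = lambda x: torch.sin(3.0 * x)                 # toy integrand (assumption)
coeffs = torch.zeros(4, requires_grad=True)      # control variate g(x) = sum_k c_k x^k

def g(x):
    powers = torch.stack([x ** k for k in range(4)], dim=-1)
    return powers @ coeffs

def G():
    # closed-form integral of g over [0, 1]: sum_k c_k / (k + 1)
    return (coeffs / torch.arange(1.0, 5.0)).sum()

# fit g to f by shrinking the residual; the estimator's variance shrinks with it
opt = torch.optim.Adam([coeffs], lr=5e-2)
for _ in range(500):
    opt.zero_grad()
    x = torch.rand(1024)
    ((f(x) - g(x)) ** 2).mean().backward()
    opt.step()

# control-variate estimator: exact integral of g plus a Monte Carlo residual
x = torch.rand(4096)                             # uniform samples, p(x) = 1
estimate = G() + (f(x) - g(x)).mean()
```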
arXiv Detail & Related papers (2020-06-02T11:17:55Z)
- When Residual Learning Meets Dense Aggregation: Rethinking the Aggregation of Deep Neural Networks [57.0502745301132]
We propose Micro-Dense Nets, a novel architecture with global residual learning and local micro-dense aggregations.
Our micro-dense block can be integrated with neural architecture search-based models to boost their performance.
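The combination of a global residual path with local dense aggregation can be sketched as a single block. The channel counts, growth rate, and fusion convolution below are illustrative assumptions, not the paper's exact topology.

```python
import torch
import torch.nn as nn

class MicroDenseBlock(nn.Module):
    def __init__(self, channels, growth=16, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        c = channels
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(c, growth, 3, padding=1), nn.ReLU()))
            c += growth                        # dense: local inputs accumulate
        self.fuse = nn.Conv2d(c, channels, 1)  # project back to the input width

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))   # local dense aggregation
        return x + self.fuse(torch.cat(feats, dim=1))      # global residual path
```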
arXiv Detail & Related papers (2020-04-19T08:34:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.