Projective purification of correlated reduced density matrices
- URL: http://arxiv.org/abs/2412.13566v1
- Date: Wed, 18 Dec 2024 07:33:51 GMT
- Title: Projective purification of correlated reduced density matrices
- Authors: Elias Pescoller, Marie Eder, Iva Březinová
- Abstract summary: We present an algorithm capable of performing all of the following tasks in the least invasive manner. We demonstrate the superiority of the present purification algorithm over previous ones in the context of the Fermi-Hubbard model.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the search for accurate approximate solutions of the many-body Schr\"odinger equation, reduced density matrices play an important role, as they allow one to formulate approximate methods with polynomial scaling in the number of particles. However, these methods frequently encounter the issue of $N$-representability, whereby in self-consistent applications of the methods, the reduced density matrices become unphysical. A number of algorithms have been proposed in the past to restore a given set of $N$-representability conditions once the reduced density matrices become defective. However, these purification algorithms have either ignored symmetries of the Hamiltonian related to conserved quantities, or have not incorporated them in an efficient way, thereby modifying the reduced density matrix to a greater extent than is necessary. In this paper, we present an algorithm capable of efficiently performing all of the following tasks in the least invasive manner: restoring a given set of $N$-representability conditions, maintaining contraction consistency between successive orders of reduced density matrices, and preserving all conserved quantities. We demonstrate the superiority of the present purification algorithm over previous ones in the context of the time-dependent two-particle reduced density matrix method applied to the quench dynamics of the Fermi-Hubbard model.
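As a minimal sketch of the projection idea underlying such purification schemes (a hypothetical illustration, not the authors' algorithm, which additionally enforces contraction consistency and all conserved quantities), a defective one-particle RDM can be pushed back onto the set of positive semidefinite matrices with the correct particle number:

```python
import numpy as np

def purify_rdm(gamma, n_particles):
    """Project a defective 1-RDM onto PSD matrices with fixed trace.
    Illustrates only positivity and particle-number restoration; full
    fermionic N-representability also bounds occupations by 1."""
    gamma = 0.5 * (gamma + gamma.conj().T)   # enforce Hermiticity
    w, v = np.linalg.eigh(gamma)
    w = np.clip(w, 0.0, None)                # remove negative occupations
    w *= n_particles / w.sum()               # restore Tr(gamma) = N
    return (v * w) @ v.conj().T

# defective toy 1-RDM with a slightly negative occupation number
g = np.diag([1.2, 0.9, -0.1])
g_pure = purify_rdm(g, n_particles=2.0)
```

The eigenvalue clipping and rescaling here is deliberately naive; the point of the paper is precisely that such generic projections perturb the RDM more than necessary when symmetries are ignored.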
Related papers
- Determining the ensemble N-representability of Reduced Density Matrices [0.0]
We propose a framework for determining the ensemble N-representability of a p-body matrix. We validate the algorithm with numerical simulations on systems of two, three, and four electrons, in both simple models and molecular systems at finite temperature.
arXiv Detail & Related papers (2026-02-05T20:11:03Z) - COALA: Numerically Stable and Efficient Framework for Context-Aware Low-Rank Approximation [0.0]
Context-aware low-rank approximation is a useful tool for compression and fine-tuning of modern large-scale neural networks. Existing methods for neural networks suffer from numerical instabilities due to their reliance on classical formulas involving explicit Gram matrix computation and its subsequent inversion. We propose a novel inversion-free regularized framework that is based entirely on stable decompositions and overcomes the numerical pitfalls of prior art.
arXiv Detail & Related papers (2025-07-10T09:35:22Z) - Rigorous Maximum Likelihood Estimation for Quantum States [2.5782420501870296]
Existing quantum state tomography methods either forgo rigorous termination guarantees or suffer from limited scalability due to their high computation and memory demands. In this paper, we address these limitations by reformulating the problem in terms of a low-rank matrix factor. We show that our method can deliver state-of-the-art solutions on a laptop in under 5 hours.
arXiv Detail & Related papers (2025-06-19T23:18:50Z) - Determining the N-representability of a reduced density matrix via unitary evolution and stochastic sampling [0.0]
This work introduces a hybrid quantum-stochastic algorithm to effectively replace the N-representability conditions.
The resulting algorithm is independent of any underlying Hamiltonian, and it can be used to decide if a given p-body matrix is N-representable.
arXiv Detail & Related papers (2025-03-21T16:52:22Z) - Matrix Completion via Residual Spectral Matching [2.677354612516629]
Noisy matrix completion has attracted significant attention due to its applications in recommendation systems, signal processing and image restoration. We propose a novel residual spectral matching criterion that incorporates not only the numerical but also the locational information of residuals. We derive optimal statistical properties by analyzing the spectral properties of sparse random matrices and bounding the effects of low-rank perturbations and partial observations.
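A generic spectral baseline for this setting (not the paper's residual-matching criterion, which refines such estimates) zero-fills the unobserved entries, rescales by the sampling rate, and truncates the SVD:

```python
import numpy as np

def spectral_estimate(M, mask, rank):
    """Rank-r spectral estimate for matrix completion: zero-fill
    unobserved entries, correct the bias by the empirical sampling
    rate, and keep the top singular directions."""
    p = mask.mean()                              # empirical sampling rate
    filled = np.where(mask, M, 0.0) / p
    U, s, Vt = np.linalg.svd(filled, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

# sanity check: a fully observed rank-1 matrix is recovered exactly
M = np.outer([1.0, 2.0], [3.0, 4.0, 5.0])
M_hat = spectral_estimate(M, np.ones_like(M, dtype=bool), rank=1)
```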
arXiv Detail & Related papers (2024-12-13T09:42:42Z) - Data-freeWeight Compress and Denoise for Large Language Models [101.53420111286952]
We propose a novel approach termed Data-free Joint Rank-k Approximation for compressing the parameter matrices.
We achieve a model pruning of 80% parameters while retaining 93.43% of the original performance without any calibration data.
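The core primitive behind such SVD-based weight compression can be sketched as a plain truncated-SVD factorization (a generic illustration, not the paper's joint rank-k scheme across multiple parameter matrices):

```python
import numpy as np

def rank_k_factors(W, k):
    """Truncated-SVD rank-k factorization W ~= A @ B. Storing the two
    thin factors replaces W's m*n parameters with k*(m + n)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :k] * s[:k], Vt[:k]

W = np.outer(np.arange(1.0, 5.0), np.arange(1.0, 4.0))   # rank-1 toy "weight"
A, B = rank_k_factors(W, 1)
```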
arXiv Detail & Related papers (2024-02-26T05:51:47Z) - A Majorization-Minimization Gauss-Newton Method for 1-Bit Matrix Completion [15.128477070895055]
We propose a novel method for 1-bit matrix completion called Majorization-Minimization Gauss-Newton (MMGN)
Our method is based on the majorization-minimization principle, which converts the original optimization problem into a sequence of standard low-rank matrix completion problems.
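The majorization-minimization principle can be seen in miniature on a scalar l1-regularized problem (a hypothetical toy, not MMGN itself): each iteration minimizes a smooth quadratic upper bound on the non-smooth objective, and the iterates converge to the soft-threshold solution sign(y)*max(|y|-lam, 0).

```python
def mm_abs(y, lam, iters=200, eps=1e-8):
    """Majorization-minimization for min_x 0.5*(x - y)**2 + lam*|x|.
    The non-smooth |x| is majorized at x_k by the quadratic
    x**2/(2*|x_k|) + |x_k|/2, so each surrogate has a closed-form
    minimizer."""
    x = y
    for _ in range(iters):
        w = lam / (abs(x) + eps)   # curvature of the current majorizer
        x = y / (1.0 + w)          # exact minimizer of the surrogate
    return x
```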
arXiv Detail & Related papers (2023-04-27T03:16:52Z) - Faster One-Sample Stochastic Conditional Gradient Method for Composite Convex Minimization [61.26619639722804]
We propose a conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration. Nevertheless, it guarantees fast convergence rates on par with more sophisticated variance reduction techniques.
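The deterministic skeleton of any conditional gradient method is the classic Frank-Wolfe iteration, sketched here on a simplex-constrained quadratic (an illustrative baseline, without the paper's one-sample SAG estimator):

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, steps=2000):
    """Vanilla conditional gradient (Frank-Wolfe) over the probability
    simplex: the linear minimization oracle returns a vertex, so the
    iterate stays feasible without any projection."""
    x = x0.copy()
    for t in range(steps):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0            # LMO: best simplex vertex
        x += 2.0 / (t + 2) * (s - x)     # classic O(1/t) step size
    return x

# minimize ||x - c||^2 over the simplex (c already lies inside it)
c = np.array([0.2, 0.5, 0.3])
x = frank_wolfe_simplex(lambda z: 2.0 * (z - c), np.array([1.0, 0.0, 0.0]))
```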
arXiv Detail & Related papers (2022-02-26T19:10:48Z) - Fast Projected Newton-like Method for Precision Matrix Estimation under Total Positivity [15.023842222803058]
Current algorithms are designed using the block coordinate descent method or the proximal point algorithm.
We propose a novel algorithm based on the two-metric projection method, incorporating a carefully designed search direction and variable partitioning scheme.
Experimental results on synthetic and real-world datasets demonstrate that our proposed algorithm provides a significant improvement in computational efficiency compared to the state-of-the-art methods.
arXiv Detail & Related papers (2021-12-03T14:39:10Z) - A Scalable Second Order Method for Ill-Conditioned Matrix Completion from Few Samples [0.0]
We propose an iterative algorithm for low-rank matrix completion.
It is able to complete very ill-conditioned matrices with a condition number of up to $10$ from few samples.
arXiv Detail & Related papers (2021-06-03T20:31:00Z) - Solving weakly supervised regression problem using low-rank manifold regularization [77.34726150561087]
We solve a weakly supervised regression problem.
By "weakly" we mean that for some training points the labels are known, for others they are unknown, and for the rest they are uncertain due to the presence of random noise or other causes such as a lack of resources.
In the numerical section, we applied the suggested method to artificial and real datasets using Monte-Carlo modeling.
arXiv Detail & Related papers (2021-04-13T23:21:01Z) - Learning Mixtures of Low-Rank Models [89.39877968115833]
We study the problem of learning mixtures of low-rank models.
We develop an algorithm that is guaranteed to recover the unknown matrices with near-optimal sample complexity.
In addition, the proposed algorithm is provably stable against random noise.
arXiv Detail & Related papers (2020-09-23T17:53:48Z) - Robust Low-rank Matrix Completion via an Alternating Manifold Proximal Gradient Continuation Method [47.80060761046752]
Robust low-rank matrix completion (RMC) has been studied extensively for computer vision, signal processing and machine learning applications.
This problem aims to decompose a partially observed matrix into the superposition of a low-rank matrix and a sparse matrix, where the sparse matrix captures the grossly corrupted entries of the matrix.
A widely used approach to tackle RMC is to consider a convex formulation, which minimizes the nuclear norm of the low-rank matrix (to promote low-rankness) and the l1 norm of the sparse matrix (to promote sparsity).
In this paper, motivated by some recent works on low-
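The two building blocks of the convex RMC formulation described above are the proximal operators of the nuclear norm and the l1 norm, sketched here generically (this is not the paper's alternating manifold proximal gradient method):

```python
import numpy as np

def svt(X, tau):
    """Singular-value thresholding: the proximal operator of
    tau*||.||_*, used to update the low-rank component."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft_threshold(X, tau):
    """Entrywise soft-thresholding: the proximal operator of
    tau*||.||_1, used to update the sparse (gross-corruption) part."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)
```

Shrinking singular values promotes low rank exactly as shrinking entries promotes sparsity, which is why the two penalties pair naturally in RMC solvers.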
arXiv Detail & Related papers (2020-08-18T04:46:22Z) - Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT)
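A plain Gaussian sketch-and-solve for ridge regression illustrates the idea (the paper's contribution, adapting the sketch size to the effective dimension and analyzing SRHT embeddings as well, is not reproduced here):

```python
import numpy as np

def sketched_ridge(A, b, lam, m, rng):
    """Approximate min ||Ax - b||^2 + lam*||x||^2 by compressing the
    tall problem to m sketched rows with a Gaussian embedding, then
    solving the small regularized normal equations exactly."""
    S = rng.standard_normal((m, A.shape[0])) / np.sqrt(m)
    SA, Sb = S @ A, S @ b
    d = A.shape[1]
    return np.linalg.solve(SA.T @ SA + lam * np.eye(d), SA.T @ Sb)

rng = np.random.default_rng(0)
A = rng.standard_normal((2000, 5))
x_true = np.arange(1.0, 6.0)
b = A @ x_true                       # noiseless toy problem
x_hat = sketched_ridge(A, b, lam=1e-3, m=400, rng=rng)
```

With m much larger than the column dimension, the sketch is a subspace embedding with high probability and the sketched solution lands close to the exact ridge estimate at a fraction of the cost.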
arXiv Detail & Related papers (2020-06-10T15:00:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.