SeeMPS: A Python-based Matrix Product State and Tensor Train Library
- URL: http://arxiv.org/abs/2601.16734v1
- Date: Fri, 23 Jan 2026 13:31:16 GMT
- Title: SeeMPS: A Python-based Matrix Product State and Tensor Train Library
- Authors: Paula García-Molina, Juan José Rodríguez-Aldavero, Jorge Gidi, Juan José García-Ripoll
- Abstract summary: SeeMPS is a Python library dedicated to implementing tensor network algorithms based on the Matrix Product States (MPS) and Quantized Tensor Train (QTT) formalisms. This library can be used for traditional quantum many-body physics applications and also for quantum-inspired numerical analysis problems.
- Score: 1.0499611180329804
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce SeeMPS, a Python library dedicated to implementing tensor network algorithms based on the well-known Matrix Product States (MPS) and Quantized Tensor Train (QTT) formalisms. SeeMPS is implemented as a complete finite-precision linear algebra package where exponentially large vector spaces are compressed using the MPS/TT formalism. It enables both low-level operations, such as vector addition, linear transformations, and Hadamard products, and high-level algorithms, including the approximation of linear equations, eigenvalue computations, and exponentially efficient Fourier transforms. This library can be used for traditional quantum many-body physics applications and also for quantum-inspired numerical analysis problems, such as solving PDEs, interpolating and integrating multidimensional functions, sampling multivariate probability distributions, etc.
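The abstract does not show SeeMPS's actual function names, so the sketch below deliberately avoids guessing its API. It is a minimal NumPy illustration of the MPS/QTT idea the library is built on: a function sampled on 2**n points is reshaped into n binary (quantics) indices and compressed into tensor-train cores by successive truncated SVDs.

```python
import numpy as np

def vector_to_tt(v, n_sites, tol=1e-10):
    """Compress a vector of length 2**n_sites into tensor-train (MPS)
    cores of shape (r_left, 2, r_right) via successive truncated SVDs."""
    cores, r = [], 1
    rest = v.reshape(1, -1)
    for _ in range(n_sites - 1):
        m = rest.reshape(r * 2, -1)                 # split off one binary index
        u, s, vt = np.linalg.svd(m, full_matrices=False)
        keep = max(1, int(np.sum(s > tol * s[0])))  # truncate negligible ranks
        cores.append(u[:, :keep].reshape(r, 2, keep))
        rest, r = s[:keep, None] * vt[:keep], keep
    cores.append(rest.reshape(r, 2, 1))
    return cores

def tt_to_vector(cores):
    """Contract the cores back into a dense vector."""
    out = cores[0]
    for c in cores[1:]:
        out = np.einsum('...i,ijk->...jk', out, c)
    return out.reshape(-1)

# A smooth function sampled on 2**10 points compresses to tiny bond dimensions.
n = 10
x = np.linspace(0.0, 1.0, 2**n, endpoint=False)
f = np.exp(-x) * np.sin(12 * np.pi * x)
cores = vector_to_tt(f, n)
print("bond dimensions:", [c.shape[2] for c in cores[:-1]])
print("max abs error:  ", np.max(np.abs(tt_to_vector(cores) - f)))
```

The printed bond dimensions stay small even though the vector has 1024 entries, which is the exponential compression that quantum-inspired numerical analysis exploits.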
Related papers
- Block encoding of sparse matrices with a periodic diagonal structure [67.45502291821956]
We provide an explicit quantum circuit for block encoding a sparse matrix with a periodic diagonal structure. Various applications of the presented methodology are discussed in the context of solving differential problems.
arXiv Detail & Related papers (2026-02-11T07:24:33Z)
- Who can compete with quantum computers? Lecture notes on quantum inspired tensor networks computational techniques [0.0]
The lectures include well-known algorithms to find eigenvectors of MPOs, solve linear problems, and recent learning algorithms that allow one to map a known function into an MPS. The lectures end with a discussion of how to represent functions and perform calculus with tensor networks using the "quantics" representation.
arXiv Detail & Related papers (2026-01-06T14:09:10Z)
- WUSH: Near-Optimal Adaptive Transforms for LLM Quantization [52.77441224845925]
Quantization to low bitwidth is a standard approach for deploying large language models. A few extreme weights and activations stretch the dynamic range and reduce the effective resolution of the quantizer. We derive, for the first time, closed-form optimal linear blockwise transforms for joint weight-activation quantization (see the sketch after this entry).
arXiv Detail & Related papers (2025-11-30T16:17:34Z)
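WUSH's closed-form transforms are not reproduced here. As a hedged stand-in, the sketch below applies a generic orthonormal Walsh-Hadamard rotation, the standard trick such transforms refine, to a weight block containing one outlier, and shows how the rotation shrinks the dynamic range and the round-trip quantization error.

```python
import numpy as np
from scipy.linalg import hadamard

def quantize_roundtrip(x, bits=4):
    """Symmetric uniform quantization to `bits` bits and back."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax
    return np.clip(np.round(x / scale), -qmax, qmax) * scale

rng = np.random.default_rng(0)
block = rng.normal(size=64)
block[7] = 25.0                     # one extreme weight stretches the range

H = hadamard(64) / np.sqrt(64)      # orthonormal Walsh-Hadamard transform
rotated = H @ block

for name, w in [("plain", block), ("rotated", rotated)]:
    err = np.linalg.norm(quantize_roundtrip(w) - w)
    print(f"{name:8s} max/rms = {np.max(np.abs(w)) / np.std(w):6.2f}  "
          f"round-trip error = {err:.3f}")

# Since H is orthonormal, the error measured in the rotated domain equals
# the error after mapping back with H.T, so the comparison is fair.
```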
- Batch Matrix-form Equations and Implementation of Multilayer Perceptrons [11.220061576867558]
Multilayer perceptrons (MLPs) are fundamental to modern deep learning, yet their algorithmic details are rarely presented in complete, explicit batch matrix-form. Although automatic differentiation can achieve equally high computational efficiency, the batch matrix-form makes the computational structure explicit. This paper fills that gap by providing a mathematically rigorous, implementation-ready specification of gradients in batch matrix-form (see the sketch after this entry).
arXiv Detail & Related papers (2025-11-14T22:52:27Z)
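The paper's full specification is not reproduced here, but the flavor is easy to convey. The hedged NumPy sketch below writes the forward and backward pass of a one-hidden-layer ReLU MLP with squared loss entirely as batch matrix products, and verifies one gradient entry by finite differences.

```python
import numpy as np

# X: (B, d_in), Y: (B, d_out); every quantity is a batch matrix.
rng = np.random.default_rng(0)
B, d_in, d_h, d_out = 32, 8, 16, 4
X, Y = rng.normal(size=(B, d_in)), rng.normal(size=(B, d_out))
W1, b1 = rng.normal(size=(d_in, d_h)) * 0.1, np.zeros(d_h)
W2, b2 = rng.normal(size=(d_h, d_out)) * 0.1, np.zeros(d_out)

# Forward pass in batch matrix-form: no per-sample loops.
Z1 = X @ W1 + b1            # (B, d_h) pre-activation
A1 = np.maximum(Z1, 0.0)    # (B, d_h) ReLU
Yhat = A1 @ W2 + b2         # (B, d_out)
loss = 0.5 * np.mean(np.sum((Yhat - Y) ** 2, axis=1))

# Backward pass: gradients in the same batch matrix-form.
G2 = (Yhat - Y) / B             # (B, d_out) dloss/dYhat
dW2, db2 = A1.T @ G2, G2.sum(0)
G1 = (G2 @ W2.T) * (Z1 > 0)     # (B, d_h) chain rule through ReLU
dW1, db1 = X.T @ G1, G1.sum(0)

# Finite-difference check of one entry of dW1.
eps, i, j = 1e-5, 2, 3
W1[i, j] += eps
A1p = np.maximum(X @ W1 + b1, 0.0)
lp = 0.5 * np.mean(np.sum((A1p @ W2 + b2 - Y) ** 2, axis=1))
W1[i, j] -= eps
print("analytic:", dW1[i, j], " numeric:", (lp - loss) / eps)
```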
- Learning Grouped Lattice Vector Quantizers for Low-Bit LLM Compression [57.54335545892155]
We introduce a Grouped Lattice Vector Quantization (GLVQ) framework that assigns each group of weights a customized lattice codebook. Our approach achieves a better trade-off between model size and accuracy compared to existing post-training quantization baselines (see the sketch after this entry).
arXiv Detail & Related papers (2025-10-23T20:19:48Z)
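GLVQ learns a lattice codebook per group; the sketch below shows only the simplest degenerate instance, a per-group scaled integer lattice Z^d with an RMS scale heuristic (both choices are illustrative assumptions, not the paper's method), where nearest-lattice-point quantization reduces to coordinate-wise rounding.

```python
import numpy as np

def grouped_lattice_quantize(W, group_size=8):
    """Quantize each contiguous group of weights onto a scaled integer
    lattice Z^d. The RMS per-group scale is a heuristic stand-in for the
    customized lattice generator that GLVQ would learn."""
    flat = W.reshape(-1, group_size)         # (num_groups, d)
    scales = flat.std(axis=1, keepdims=True) + 1e-12
    codes = np.round(flat / scales)          # nearest point of the Z^d lattice
    deq = codes * scales
    return codes.astype(np.int32), scales, deq.reshape(W.shape)

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 16))
codes, scales, W_hat = grouped_lattice_quantize(W)
print("relative error:", np.linalg.norm(W - W_hat) / np.linalg.norm(W))
```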
- Efficient Quantum Access Model for Sparse Structured Matrices using Linear Combination of Things [0.6138671548064355]
We present a novel framework for Linear Combination of Unitaries (LCU)-style decomposition tailored to structured sparse matrices. LCU is a foundational primitive in both variational and fault-tolerant quantum algorithms. We introduce the Sigma basis, a compact set of simple, non-unitary operators that can better capture sparsity and structure (see the sketch after this entry).
arXiv Detail & Related papers (2025-07-04T17:05:07Z)
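The paper's Sigma basis is not reproduced here. For orientation, the sketch below computes the standard Pauli-string LCU that such work improves on, expanding a small tridiagonal matrix as A = sum_P c_P P with c_P = tr(P† A) / 2^n; even this simple structured matrix needs several Pauli strings.

```python
import numpy as np
from itertools import product
from functools import reduce

PAULIS = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]]),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_lcu(A, n):
    """Expand a 2**n x 2**n matrix in the Pauli-string basis."""
    terms = {}
    for labels in product(PAULIS, repeat=n):
        P = reduce(np.kron, (PAULIS[l] for l in labels))
        c = np.trace(P.conj().T @ A) / 2 ** n
        if abs(c) > 1e-12:
            terms["".join(labels)] = c
    return terms

# A tridiagonal (sparse, structured) matrix on 2 qubits.
A = 2 * np.eye(4) - np.diag(np.ones(3), 1) - np.diag(np.ones(3), -1)
terms = pauli_lcu(A.astype(complex), 2)
print(terms)
recon = sum(c * reduce(np.kron, (PAULIS[l] for l in s)) for s, c in terms.items())
print("exact reconstruction:", np.allclose(recon, A))
```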
- Towards efficient quantum algorithms for diffusion probabilistic models [27.433686030846072]
The diffusion probabilistic model (DPM) is renowned for its ability to produce high-quality outputs in tasks such as image and audio generation. We introduce efficient quantum algorithms for implementing DPMs through various quantum solvers.
arXiv Detail & Related papers (2025-02-20T04:39:09Z)
- Efficient Variational Quantum Linear Solver for Structured Sparse Matrices [0.6138671548064355]
We show that by using an alternate basis one can better exploit the sparsity and underlying structure of the matrix.
We employ the concept of unitary completion to design efficient quantum circuits for computing the global/local VQLS cost functions (see the sketch after this entry).
arXiv Detail & Related papers (2024-04-25T19:22:05Z)
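The unitary-completion circuits are not reproduced here. The sketch below simply evaluates the textbook global VQLS cost classically, C = 1 - |<b|A|x>|^2 / <x|A†A|x>, which is the quantity such circuits estimate on hardware; it vanishes exactly when A|x> is parallel to |b>.

```python
import numpy as np

def vqls_global_cost(A, b, x):
    """Textbook global VQLS cost, evaluated classically."""
    Ax = A @ x
    return 1.0 - np.abs(np.vdot(b, Ax)) ** 2 / np.vdot(Ax, Ax).real

rng = np.random.default_rng(0)
n = 8
A = np.diag(np.linspace(1.0, 2.0, n))       # toy structured (diagonal) matrix
b = rng.normal(size=n); b /= np.linalg.norm(b)

x_exact = np.linalg.solve(A, b)
x_exact /= np.linalg.norm(x_exact)          # quantum states are normalized
x_random = rng.normal(size=n); x_random /= np.linalg.norm(x_random)

print("cost at solution:", vqls_global_cost(A, b, x_exact))   # ~ 0
print("cost at random x:", vqls_global_cost(A, b, x_random))  # > 0
```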
- Quantum eigenvalue processing [0.0]
Problems in linear algebra can be solved on a quantum computer by processing eigenvalues of the non-normal input matrices.
We present a Quantum EigenValue Transformation (QEVT) framework for applying arbitrary transformations on eigenvalues of block-encoded non-normal operators.
We also present a Quantum EigenValue Estimation (QEVE) algorithm for operators with real spectra (see the sketch after this entry).
arXiv Detail & Related papers (2024-01-11T19:49:31Z)
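QEVT acts on block-encoded operators, which is not reproduced here. The classical analogue below shows what "processing eigenvalues" means for a diagonalizable non-normal matrix: f(A) = V f(Lambda) V^{-1}, which is distinct from transforming singular values.

```python
import numpy as np
from scipy.linalg import expm

def eigenvalue_transform(A, f):
    """Classical analogue of eigenvalue processing for a diagonalizable
    (possibly non-normal) matrix: f(A) = V f(Lambda) V^{-1}."""
    lam, V = np.linalg.eig(A)
    return V @ np.diag(f(lam)) @ np.linalg.inv(V)

# A non-normal matrix (A @ A.T != A.T @ A) with eigenvalues {1, 2}.
A = np.array([[1.0, 5.0],
              [0.0, 2.0]])

# f = exp reproduces the matrix exponential; f = 1/x reproduces the inverse.
print(np.allclose(eigenvalue_transform(A, np.exp), expm(A)))              # True
print(np.allclose(eigenvalue_transform(A, lambda x: 1 / x), np.linalg.inv(A)))
```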
- Quantum algorithms for matrix operations and linear systems of equations [65.62256987706128]
We propose quantum algorithms for matrix operations using the "Sender-Receiver" model.
These quantum protocols can be used as subroutines in other quantum schemes.
arXiv Detail & Related papers (2022-02-10T08:12:20Z)
- Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for the semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers, and achieved performance comparable to pure data-driven networks while using far fewer parameters.
arXiv Detail & Related papers (2021-09-10T07:01:15Z)
- Learning with Density Matrices and Random Features [44.98964870180375]
A density matrix describes the statistical state of a quantum system.
It is a powerful formalism to represent both the quantum and classical uncertainty of quantum systems.
This paper explores how density matrices can be used as a building block for machine learning models (see the sketch after this entry).
arXiv Detail & Related papers (2021-02-08T17:54:59Z)
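The paper's random-feature models are not reproduced here; the sketch below only illustrates the formalism the summary describes: a density matrix rho = sum_i p_i |psi_i><psi_i| combines classical uncertainty (the p_i) with quantum states, and observables and purity are traces against rho.

```python
import numpy as np

def density_matrix(states, probs):
    """rho = sum_i p_i |psi_i><psi_i| for normalized states psi_i."""
    return sum(p * np.outer(s, s.conj()) for s, p in zip(states, probs))

zero = np.array([1.0, 0.0])                 # |0>
plus = np.array([1.0, 1.0]) / np.sqrt(2)    # |+>
rho = density_matrix([zero, plus], [0.5, 0.5])

Z = np.diag([1.0, -1.0])
print("Tr(rho) =", np.trace(rho).real)       # 1: a valid state
print("<Z>     =", np.trace(rho @ Z).real)   # expectation of an observable
print("purity  =", np.trace(rho @ rho).real) # < 1: a genuinely mixed state
```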
- Sketching Transformed Matrices with Applications to Natural Language Processing [76.6222695417524]
We propose a space-efficient sketching algorithm for computing the product of a given small matrix with the transformed matrix.
We show that our approach obtains small error and is efficient in both space and time.
arXiv Detail & Related papers (2020-02-23T03:07:31Z)