Dimension reduction with structure-aware quantum circuits for hybrid machine learning
- URL: http://arxiv.org/abs/2508.00048v1
- Date: Thu, 31 Jul 2025 17:18:43 GMT
- Title: Dimension reduction with structure-aware quantum circuits for hybrid machine learning
- Authors: Ammar Daskin
- Abstract summary: Schmidt decomposition of a vector can be understood as writing the singular value decomposition (SVD) in vector form. We show that quantum circuits designed based on a value $k$ can approximate the reduced-form representations of entire datasets.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Schmidt decomposition of a vector can be understood as writing the singular value decomposition (SVD) in vector form. A vector can be written as a linear combination of tensor products of two-dimensional vectors by recursively applying Schmidt decompositions via SVD to all subsystems. Given a vector expressed as a linear combination of tensor products, keeping only the $k$ principal terms yields a rank-$k$ approximation of the vector. Writing a vector in this reduced form therefore retains its most important parts while removing small noise components, analogous to SVD-based denoising. In this paper, we show that quantum circuits designed based on a value $k$ (determined from the tensor network decomposition of the mean vector of the training sample) can approximate the reduced-form representations of entire datasets. We then employ this circuit ansatz with a classical neural network head to construct a hybrid machine learning model. Since the output of the quantum circuit for a $2^n$-dimensional vector is an $n$-dimensional probability vector, this provides an exponential compression of the input and can potentially reduce the number of learnable parameters needed to train large-scale models. We use datasets provided in the Python scikit-learn module for the experiments. The results confirm that the quantum circuit is able to compress data successfully, providing effective rank-$k$ approximations to the classical processing component.
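For illustration, here is a minimal NumPy sketch (not the paper's code; function names such as schmidt_k_approx and single_qubit_marginals are ours) of the two classical ideas in the abstract: a one-level Schmidt split of a $2^n$-dimensional vector via the SVD, keeping the $k$ leading terms as a rank-$k$ approximation, and reading out $n$ single-qubit probabilities as an exponentially compressed feature vector.

```python
import numpy as np

def schmidt_k_approx(v, k):
    """Rank-k approximation of a 2^n-dimensional vector via one Schmidt split.

    Reshape v into a matrix, take its SVD, and keep the k leading singular
    triplets, i.e. v ~ sum_{i<k} s_i * (u_i kron w_i): the k-term
    Schmidt (tensor-product) form described in the abstract.
    """
    n = int(np.log2(v.size))
    rows = 2 ** (n // 2)                      # split the n qubits into two halves
    M = v.reshape(rows, v.size // rows)
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    k = min(k, S.size)
    return ((U[:, :k] * S[:k]) @ Vt[:k, :]).reshape(-1)

def single_qubit_marginals(v):
    """n single-qubit |1>-probabilities of a normalized 2^n-dimensional state.

    One reading of the abstract's 'n-dimensional probability vector' obtained
    from a 2^n-dimensional input, i.e. an exponential compression.
    """
    n = int(np.log2(v.size))
    probs = (np.abs(v) ** 2).reshape([2] * n)
    return np.array([probs.take(1, axis=q).sum() for q in range(n)])

rng = np.random.default_rng(0)
v = rng.normal(size=256)                      # a 2^8-dimensional "data" vector
v /= np.linalg.norm(v)

v_k = schmidt_k_approx(v, k=2)
print("k=2 approximation error:", np.linalg.norm(v - v_k))
print("8 compressed features  :", single_qubit_marginals(v))
```

In the paper the reduced form is produced by a $k$-dependent quantum circuit ansatz rather than a classical SVD; the sketch above only mirrors the underlying linear-algebra picture.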
Related papers
- Learnable quantum spectral filters for hybrid graph neural networks [0.0]
We show that the eigenspace of the Laplacian operator of a graph can be approximated by using a QFT-based circuit. For an $N\times N$ Laplacian, this approach yields an approximate circuit requiring only $n=\log(N)$ qubits. We then apply a classical neural network prediction head to the output of the circuit to construct a complete graph neural network.
arXiv Detail & Related papers (2025-07-08T03:36:40Z) - Tensor Decomposition Networks for Fast Machine Learning Interatomic Potential Computations [63.945006006152035]
Tensor decomposition networks (TDNs) achieve competitive performance with dramatic speedup in computations. We evaluate TDNs on PubChemQCR, a newly curated molecular relaxation dataset containing 105 million DFT-calculated snapshots.
arXiv Detail & Related papers (2025-07-01T18:46:27Z) - Linearithmic Clean-up for Vector-Symbolic Key-Value Memory with Kroneker Rotation Products [4.502446902578007]
A computational bottleneck in current Vector-Symbolic Architectures is the "clean-up" step. We present a new codebook representation that supports efficient clean-up. The resulting clean-up time complexity is linearithmic, i.e. $\mathcal{O}(N \log N)$.
arXiv Detail & Related papers (2025-06-18T18:23:28Z) - The Generative Leap: Sharp Sample Complexity for Efficiently Learning Gaussian Multi-Index Models [71.5283441529015]
In this work we consider generic Gaussian multi-index models, in which the labels depend on the (Gaussian) $d$-dimensional inputs only through their projection onto a low-dimensional $r = O_d(1)$ subspace. We introduce the generative leap exponent $k^\star$, a natural extension of the generative exponent from [Damian et al. '24] to the multi-index setting.
arXiv Detail & Related papers (2025-06-05T18:34:56Z) - Quantum encoder for fixed Hamming-weight subspaces [0.0]
We present an exact $n$-qubit computational-basis amplitude encoder of real- or complex-valued data vectors of dimension $d=\binom{n}{k}$ into a subspace of fixed Hamming weight $k$. We show how our encoder can improve the performance of variational quantum algorithms for problems that include particle-string symmetries. Our results constitute a versatile framework for quantum data compression with various potential applications in fields such as quantum chemistry, quantum machine learning, and constrained optimizations.
arXiv Detail & Related papers (2024-05-30T18:26:41Z) - Dictionary-based Block Encoding of Sparse Matrices with Low Subnormalization and Circuit Depth [2.4487770108795393]
We propose an efficient block-encoding protocol for sparse matrices based on a novel data structure. Non-zero elements with the same values belong to the same classification in our block-encoding protocol's dictionary. Our protocol connects to linear combinations of unitaries (LCU) and the sparse access input model (SAIM).
arXiv Detail & Related papers (2024-05-28T09:49:58Z) - Factorizers for Distributed Sparse Block Codes [45.29870215671697]
We propose a fast and highly accurate method for factorizing distributed sparse block codes (SBCs).
Our iterative factorizer introduces a threshold-based nonlinear activation, conditional random sampling, and an $\ell_\infty$-based similarity metric.
We demonstrate the feasibility of our method on four deep CNN architectures over CIFAR-100, ImageNet-1K, and RAVEN datasets.
arXiv Detail & Related papers (2023-03-24T12:31:48Z) - Large-Margin Representation Learning for Texture Classification [67.94823375350433]
This paper presents a novel approach combining convolutional layers (CLs) and large-margin metric learning for training supervised models on small datasets for texture classification.
The experimental results on texture and histopathologic image datasets have shown that the proposed approach achieves competitive accuracy with lower computational cost and faster convergence when compared to equivalent CNNs.
arXiv Detail & Related papers (2022-06-17T04:07:45Z) - Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
arXiv Detail & Related papers (2022-06-15T17:58:34Z) - Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via
GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers, and achieved comparable performance to pure data-driven networks but using far fewer parameters.
arXiv Detail & Related papers (2021-09-10T07:01:15Z) - Cherry-Picking Gradients: Learning Low-Rank Embeddings of Visual Data
via Differentiable Cross-Approximation [53.95297550117153]
We propose an end-to-end trainable framework that processes large-scale visual data tensors by looking at only a fraction of their entries.
The proposed approach is particularly useful for large-scale multidimensional grid data, and for tasks that require context over a large receptive field.
arXiv Detail & Related papers (2021-05-29T08:39:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.