How many qubits does a machine learning problem require?
- URL: http://arxiv.org/abs/2508.20992v1
- Date: Thu, 28 Aug 2025 16:55:50 GMT
- Title: How many qubits does a machine learning problem require?
- Authors: Sydney Leither, Michael Kubal, Sonika Johri
- Abstract summary: We show that the property of universal approximation is constructively and efficiently realized by the recently proposed bit-bit encoding scheme. This construction allows us to calculate the number of qubits required to solve a learning problem on a dataset to a target accuracy.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: For a machine learning paradigm to be generally applicable, it should have the property of universal approximation, that is, it should be able to approximate any target function to any desired degree of accuracy. In variational quantum machine learning, the class of functions that can be learned depends on both the data encoding scheme and the architecture of the optimizable part of the model. Here, we show that the property of universal approximation is constructively and efficiently realized by the recently proposed bit-bit encoding scheme. Further, we show that this construction allows us to calculate the number of qubits required to solve a learning problem on a dataset to a target accuracy, giving rise to the first resource estimation framework for variational quantum machine learning. We apply bit-bit encoding to a number of medium-sized datasets from OpenML and find that they require only $20$ qubits on average for encoding. Further, we extend the basic bit-bit encoding scheme to one that can handle batching of very large datasets. As a demonstration, we apply this new scheme to the giga-scale transcriptomic Tahoe-100M dataset, concluding that the number of qubits required for encoding it lies beyond classical simulation capabilities. Remarkably, we find that the number of qubits does not necessarily increase with the number of features of a dataset, but may sometimes even decrease.
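The resource-estimation claim can be made concrete with a rough calculation. Below is a minimal sketch assuming a naive counting rule (qubits = quantized input bits + output bits); the abstract does not give the paper's actual formula, and its results imply the true count can be much smaller and need not grow with the number of features. All names and numbers are illustrative.

```python
import math

# Hedged, illustrative sketch only: assume the naive rule
#   qubits = (features x bits per feature) + bits needed to label the classes.
# The paper's actual bit-bit resource count is not reproduced in the abstract,
# and its experiments show the real requirement can be far smaller (~20 qubits
# on average for medium OpenML datasets).
def naive_qubit_estimate(n_features: int, bits_per_feature: int, n_classes: int) -> int:
    input_bits = n_features * bits_per_feature
    output_bits = max(1, math.ceil(math.log2(n_classes)))
    return input_bits + output_bits

# Hypothetical tabular dataset: 10 features at 4-bit precision, 2 classes.
print(naive_qubit_estimate(10, 4, 2))  # 41 under this naive rule
```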
Related papers
- An Efficient Quantum Classifier Based on Hamiltonian Representations
Quantum machine learning (QML) is a discipline that seeks to transfer the advantages of quantum computing to data-driven tasks. We propose an efficient approach that circumvents the costs associated with data encoding by mapping inputs to a finite set of Pauli strings. We evaluate our approach on text and image classification tasks against well-established classical and quantum models.
arXiv Detail & Related papers (2025-04-13T11:49:53Z)
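A toy sketch of one possible reading of that input-to-Pauli-string mapping, with all details (median thresholding, the bit-to-Pauli assignment) invented for illustration rather than taken from the paper:

```python
import numpy as np

# Hypothetical mapping: binarize each feature against its median, then map
# bit 0 -> I and bit 1 -> Z, so every input lands on one of 2^d Pauli strings.
# The paper's actual mapping and classifier head are not reproduced here.
def to_pauli_string(x: np.ndarray, thresholds: np.ndarray) -> str:
    bits = (x > thresholds).astype(int)
    return "".join("Z" if b else "I" for b in bits)

X = np.array([[0.2, 1.5, -0.3], [0.9, 0.1, 0.4]])
thresholds = np.median(X, axis=0)
print([to_pauli_string(x, thresholds) for x in X])  # e.g. ['IZI', 'ZIZ']
```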
- Bit-bit encoding, optimizer-free training and sub-net initialization: techniques for scalable quantum machine learning
We present a quantum classifier that encodes both the input and the output as binary strings. We show that if one parameter is updated at a time, quantum models can be trained in a way that guarantees convergence to a local minimum.
arXiv Detail & Related papers (2025-01-04T00:35:14Z)
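A minimal classical sketch of the one-parameter-at-a-time idea: each step minimizes the loss over a single coordinate while freezing the rest, and an accept-only-if-better guard makes the loss monotonically non-increasing, hence convergent to a local minimum. The toy loss stands in for a variational circuit's cost; nothing here reproduces the paper's circuit or sub-net initialization.

```python
import numpy as np

# Toy stand-in for a variational circuit's cost, periodic in each angle.
def loss(theta: np.ndarray) -> float:
    return float(np.sum(np.sin(theta) ** 2) + 0.1 * np.cos(2.0 * theta[0]))

rng = np.random.default_rng(1)
theta = rng.uniform(-np.pi, np.pi, size=4)
grid = np.linspace(-np.pi, np.pi, 181)  # candidate values for one coordinate

for sweep in range(5):
    for k in range(len(theta)):
        # Update exactly one parameter at a time, keeping the rest frozen.
        candidates = np.tile(theta, (len(grid), 1))
        candidates[:, k] = grid
        values = np.array([loss(c) for c in candidates])
        if values.min() < loss(theta):        # accept only improvements,
            theta[k] = grid[values.argmin()]  # so the loss never increases
    print(f"sweep {sweep}: loss = {loss(theta):.6f}")
```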
- Supervised binary classification of small-scale digit images and weighted graphs with a trapped-ion quantum processor
We present the results of benchmarking a quantum processor based on trapped $^{171}$Yb$^{+}$ ions. We perform supervised binary classification on two types of datasets: small binary digit images and weighted graphs with a ring topology.
arXiv Detail & Related papers (2024-06-17T18:20:51Z)
- Data-driven decoding of quantum error correcting codes using graph neural networks
We explore a model-free, data-driven approach to decoding, using a graph neural network (GNN). We show that the GNN-based decoder can outperform a matching decoder for circuit-level noise on the surface code given only simulated data. The results show that a purely data-driven approach to decoding may be a viable future option for practical quantum error correction.
arXiv Detail & Related papers (2023-07-03T17:25:45Z)
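As a hedged illustration of the graph representation such a decoder might use, the sketch below builds a small defect graph and runs one hand-rolled message-passing step in plain numpy. The node features, edge weighting, and random weights are all assumptions; the paper's GNN architecture and training data are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical defect coordinates (x, y, round) from a simulated syndrome.
defects = np.array([[0.0, 1.0, 0.0], [2.0, 1.0, 0.0], [2.0, 3.0, 1.0]])
n = len(defects)

# Fully connected graph weighted by inverse distance (an invented heuristic).
dist = np.linalg.norm(defects[:, None, :] - defects[None, :, :], axis=-1)
adj = np.where(np.eye(n, dtype=bool), 0.0, 1.0 / (1.0 + dist))

# One message-passing layer: aggregate neighbors, then a linear map + ReLU.
W = rng.normal(size=(3, 8))          # random weights stand in for trained ones
h = np.maximum((adj @ defects) @ W, 0.0)

# Mean-pool readout -> probability of a logical error (untrained, illustrative).
w_out = rng.normal(size=8)
logit = h.mean(axis=0) @ w_out
print("p(logical error) =", 1.0 / (1.0 + np.exp(-logit)))
```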
- The case for 4-bit precision: k-bit Inference Scaling Laws
Quantization methods reduce the number of bits required to represent each parameter in a model.
The final model size depends on both the number of parameters of the original model and the rate of compression.
We run more than 35,000 zero-shot experiments with 16-bit inputs and k-bit parameters to examine which quantization methods improve scaling for 3 to 8-bit precision.
arXiv Detail & Related papers (2022-12-19T18:48:33Z)
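For intuition, here is a generic round-to-nearest k-bit quantizer plus the size arithmetic the abstract alludes to; this is standard RTN quantization under invented settings, not the specific methods or scaling-law fits from the paper.

```python
import numpy as np

def quantize_rtn(w: np.ndarray, k: int):
    # Symmetric round-to-nearest: map max |w| onto the largest k-bit level.
    scale = np.abs(w).max() / (2 ** (k - 1) - 1)
    q = np.clip(np.round(w / scale), -(2 ** (k - 1)), 2 ** (k - 1) - 1)
    return q.astype(np.int8), scale

w = np.random.default_rng(0).normal(size=10_000).astype(np.float32)
q, s = quantize_rtn(w, k=4)
print(f"mean abs error at 4 bits: {np.abs(w - q * s).mean():.4f}")

# Final model size scales as (parameters) x (bits per parameter):
params = 7e9  # hypothetical 7B-parameter model
for k in (16, 8, 4, 3):
    print(f"{k}-bit: {params * k / 8 / 1e9:.1f} GB")
```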
- Quantum state preparation protocol for encoding classical data into the amplitudes of a quantum information processing register's wave function
We present a protocol for encoding $N$ real numbers stored in $N$ memory registers into the amplitudes of a quantum superposition.
The protocol combines partial CNOT gate rotations with probabilistic projection onto the desired state.
arXiv Detail & Related papers (2021-07-29T16:02:38Z)
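The classical pre-processing behind amplitude encoding is easy to sketch: $N$ real numbers, padded to length $2^n$ and L2-normalized, become the amplitudes of an $n = \lceil \log_2 N \rceil$ qubit state. The circuit itself (the partial-CNOT rotations and the probabilistic projection) is not shown here.

```python
import numpy as np

def amplitude_target(data: np.ndarray) -> tuple[np.ndarray, int]:
    # Pad to the next power of two and L2-normalize; the result is the
    # amplitude vector of an n-qubit state with n = ceil(log2(N)).
    n_qubits = max(1, int(np.ceil(np.log2(len(data)))))
    padded = np.zeros(2 ** n_qubits)
    padded[: len(data)] = data
    return padded / np.linalg.norm(padded), n_qubits

amps, n = amplitude_target(np.array([3.0, 1.0, 4.0, 1.0, 5.0]))
print(n, np.round(amps, 3), np.sum(amps ** 2))  # 3 qubits; squared norm is 1
```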
- Statistically Meaningful Approximation: a Case Study on Approximating Turing Machines with Transformers
This work proposes a formal definition of statistically meaningful (SM) approximation, which requires the approximating network to exhibit good statistical learnability.
We study SM approximation for two function classes: circuits and Turing machines.
arXiv Detail & Related papers (2021-07-28T04:28:55Z)
- Quantized Neural Networks via {-1, +1} Encoding Decomposition and Acceleration
We propose a novel encoding scheme using {-1, +1} to decompose quantized neural networks (QNNs) into multi-branch binary networks.
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
arXiv Detail & Related papers (2021-06-18T03:11:15Z)
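The arithmetic behind such a decomposition can be sketched directly: an odd integer weight in $[-(2^k-1), 2^k-1]$ can be written greedily as $\sum_i 2^i b_i$ with $b_i \in \{-1, +1\}$, so a matmul splits into $k$ binary branches. The exact levels and scales the paper uses are not given in the summary, so treat this as one consistent instance, not the paper's scheme.

```python
import numpy as np

# Decompose odd k-bit integer weights into k binary {-1,+1} matrices with
# power-of-two scales, so W @ x = sum_i 2^i (B_i @ x) and every branch is a
# pure binary matmul. Levels and scales here are assumptions for illustration.
def decompose(w_int: np.ndarray, k: int) -> list[np.ndarray]:
    r = w_int.astype(np.int64).copy()
    branches = []
    for i in reversed(range(k)):         # greedy, most significant scale first
        b = np.where(r >= 0, 1, -1)
        branches.append(b)
        r = r - b * (2 ** i)
    assert np.all(r == 0)                # exact for odd values in range
    return branches[::-1]                # branches[i] carries scale 2**i

# Odd levels in [-(2^k - 1), 2^k - 1] are exactly representable for k = 3.
w = np.array([[7, -5], [1, -3]])
bs = decompose(w, k=3)
recon = sum((2 ** i) * b for i, b in enumerate(bs))
assert np.array_equal(recon, w)
print("branches:", [b.tolist() for b in bs])
```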
- Solving Mixed Integer Programs Using Neural Networks
This paper applies learning to the two key sub-tasks of a MIP solver: generating a high-quality joint variable assignment, and bounding the gap in objective value between that assignment and an optimal one.
Our approach constructs two corresponding neural network-based components, Neural Diving and Neural Branching, to use in a base MIP solver such as SCIP.
We evaluate our approach on six diverse real-world datasets, including two Google production datasets and MIPLIB, by training separate neural networks on each.
arXiv Detail & Related papers (2020-12-23T09:33:11Z)
- Quantum Ensemble for Classification
A powerful way to improve performance in machine learning is to construct an ensemble that combines the predictions of multiple models.
We propose a new quantum algorithm that exploits quantum superposition, entanglement and interference to build an ensemble of classification models.
arXiv Detail & Related papers (2020-07-02T11:26:54Z)