On the Impact of Weight Discretization in QUBO-Based SVM Training
- URL: http://arxiv.org/abs/2510.26323v1
- Date: Thu, 30 Oct 2025 10:17:25 GMT
- Title: On the Impact of Weight Discretization in QUBO-Based SVM Training
- Authors: Sascha Mücke
- Abstract summary: We study how the number of qubits affects predictive performance across datasets. We find that even low-precision QUBO encodings yield competitive, and sometimes superior, accuracy.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training Support Vector Machines (SVMs) can be formulated as a QUBO problem, enabling the use of quantum annealing for model optimization. In this work, we study how the number of qubits - linked to the discretization level of dual weights - affects predictive performance across datasets. We compare QUBO-based SVM training to the classical LIBSVM solver and find that even low-precision QUBO encodings (e.g., 1 bit per parameter) yield competitive, and sometimes superior, accuracy. While increased bit-depth enables larger regularization parameters, it does not always improve classification. Our findings suggest that selecting the right support vectors may matter more than their precise weighting. Although current hardware limits the size of solvable QUBOs, our results highlight the potential of quantum annealing for efficient SVM training as quantum devices scale.
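To make the discretization concrete, the sketch below is a toy construction under our own assumptions, not necessarily the paper's exact encoding: it builds a QUBO for the no-bias kernel SVM dual with `b` bits per dual weight and solves a four-point instance by exhaustive search in place of a quantum annealer.

```python
import numpy as np
from itertools import product

def svm_qubo(K, y, b=1, C=1.0):
    """QUBO for the (no-bias) kernel SVM dual with b bits per dual weight.

    Each dual weight is encoded as alpha_i = (C / (2**b - 1)) * sum_k 2**k q_ik,
    so b = 1 restricts weights to {0, C}. Illustrative encoding only; the
    equality constraint sum_i alpha_i y_i = 0 is dropped for brevity.
    """
    n = len(y)
    p = (C / (2 ** b - 1)) * 2.0 ** np.arange(b)   # value of each bit
    P = np.kron(np.eye(n), p)                      # alpha = P @ q, shape (n, n*b)
    H = (y[:, None] * y[None, :]) * K              # y_i y_j K_ij
    # dual objective: minimize 0.5 a^T H a - sum(a), rewritten over binary q
    Q = 0.5 * P.T @ H @ P - np.diag(P.sum(axis=0))
    return Q, P

def brute_force(Q):
    """Exhaustively minimize q^T Q q (tiny problems only; annealers scale this)."""
    best, best_q = np.inf, None
    for bits in product([0, 1], repeat=Q.shape[0]):
        q = np.array(bits)
        e = q @ Q @ q
        if e < best:
            best, best_q = e, q
    return best_q

# toy 1-D data, linear kernel
X = np.array([-2.0, -1.0, 1.0, 2.0])
y = np.array([-1.0, -1.0, 1.0, 1.0])
Q, P = svm_qubo(np.outer(X, X), y, b=1, C=1.0)
q = brute_force(Q)
print("alpha =", P @ q)
```

Even at b = 1 the minimizer picks out which points act as support vectors, which is consistent with the finding that selecting the right support vectors can matter more than weighting them precisely.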
Related papers
- Kernel-based optimization of measurement operators for quantum reservoir computers [5.486630950557179]
We formulate the training of both stateless (quantum extreme learning machines, QELMs) and stateful (memory-dependent) QRCs in the framework of kernel ridge regression. This approach yields an optimal measurement operator that minimizes prediction error for a given reservoir and training dataset.
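The kernel ridge regression step at the core of that formulation has a compact closed-form solution; a minimal sketch, where the reservoir-specific kernel construction is omitted and the Gram matrix `K` is just a placeholder linear kernel:

```python
import numpy as np

def kernel_ridge_fit(K, y, lam=1e-3):
    """Closed-form kernel ridge regression: solve (K + lam*I) alpha = y."""
    n = K.shape[0]
    return np.linalg.solve(K + lam * np.eye(n), y)

# toy example: linear kernel on 1-D inputs, target y = 2x
X = np.array([1.0, 2.0, 3.0])
K = np.outer(X, X)          # stand-in for a reservoir-derived Gram matrix
y = 2.0 * X
alpha = kernel_ridge_fit(K, y)
y_hat = K @ alpha           # in-sample predictions, close to y for small lam
```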
arXiv Detail & Related papers (2026-02-16T12:04:42Z)
- Learning Grouped Lattice Vector Quantizers for Low-Bit LLM Compression [57.54335545892155]
We introduce a Grouped Lattice Vector Quantization (GLVQ) framework that assigns each group of weights a customized lattice codebook. Our approach achieves a better trade-off between model size and accuracy compared to existing post-training quantization baselines.
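A heavily simplified sketch of the grouped idea, substituting a scaled integer lattice for the learned per-group codebooks the paper describes:

```python
import numpy as np

def grouped_lattice_quantize(w, group_size=4):
    """Quantize each group of weights to its own scaled integer lattice.

    Simplified stand-in for GLVQ: the per-group "codebook" here is just a
    scaled copy of the integer lattice, with the scale chosen from the
    group's dynamic range (a 4-bit range [-7, 7] is assumed); the paper
    learns richer lattice codebooks instead.
    """
    w = np.asarray(w, dtype=float)
    out = np.empty_like(w)
    for start in range(0, w.size, group_size):
        g = w[start:start + group_size]
        scale = np.abs(g).max() / 7.0
        if scale == 0.0:
            scale = 1.0
        out[start:start + group_size] = scale * np.round(g / scale)
    return out

w = np.array([0.1, -0.25, 0.33, 0.7, 2.0, -1.5, 0.05, 0.6])
print(grouped_lattice_quantize(w, group_size=4))
```

Because each group picks its own scale, an outlier in one group does not blow up the quantization error everywhere else, which is the basic motivation for grouping.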
arXiv Detail & Related papers (2025-10-23T20:19:48Z)
- MSQ: Memory-Efficient Bit Sparsification Quantization [11.510434574824213]
Mixed-precision quantization is widely favored, as it offers a superior balance between efficiency and accuracy. We propose Memory-Efficient Bit Sparsification Quantization (MSQ), a novel approach that addresses these limitations. MSQ achieves up to an 8.00x reduction in trainable parameters and up to an 86% reduction in training time compared to previous bit-level quantization methods.
arXiv Detail & Related papers (2025-07-30T03:21:29Z)
- Quantum Annealing for Machine Learning: Applications in Feature Selection, Instance Selection, and Clustering [41.94295877935867]
We implement both quantum and classical solvers to compare their effectiveness. For feature selection, we propose several QUBO configurations that balance feature importance and redundancy. For instance selection, we propose several novel instance-level importance measures that extend existing methods. For clustering, we embed a classical-to-quantum pipeline, using classical clustering followed by QUBO-based medoid refinement.
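A QUBO of the kind described for feature selection can be sketched as follows; this is an illustrative construction (per-feature importance rewarded on the diagonal, pairwise redundancy penalized off-diagonal, trade-off weight `alpha`), not the paper's exact configuration:

```python
import numpy as np
from itertools import product

def feature_selection_qubo(importance, redundancy, alpha=0.5):
    """Build a QUBO whose minimizing binary vector selects features."""
    Q = alpha * np.triu(redundancy, k=1)   # count each redundant pair once
    Q = Q - np.diag(importance)            # selecting feature i gains -importance[i]
    return Q

def brute_force(Q):
    """Exhaustive minimizer of x^T Q x for tiny QUBOs."""
    best, best_x = np.inf, None
    for bits in product([0, 1], repeat=Q.shape[0]):
        x = np.array(bits)
        e = x @ Q @ x
        if e < best:
            best, best_x = e, x
    return best_x

# features 0 and 1 are highly redundant; 2 is independent but less important
importance = np.array([1.0, 0.9, 0.5])
redundancy = np.zeros((3, 3))
redundancy[0, 1] = redundancy[1, 0] = 2.0
x = brute_force(feature_selection_qubo(importance, redundancy))
print(x)   # expect features 0 and 2 selected, 1 dropped
```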
arXiv Detail & Related papers (2025-07-20T17:59:14Z)
- An Efficient Quantum Classifier Based on Hamiltonian Representations [50.467930253994155]
Quantum machine learning (QML) is a discipline that seeks to transfer the advantages of quantum computing to data-driven tasks. We propose an efficient approach that circumvents the costs associated with data encoding by mapping inputs to a finite set of Pauli strings. We evaluate our approach on text and image classification tasks against well-established classical and quantum models.
arXiv Detail & Related papers (2025-04-13T11:49:53Z)
- Probabilistic Quantum SVM Training on Ising Machine [2.44505480142099]
We propose a probabilistic quantum SVM training framework suitable for Coherent Ising Machines (CIMs). We employ batch processing and multi-batch ensemble strategies, enabling small-scale quantum devices to train SVMs on larger datasets. Our method is validated through simulations and real-machine experiments on binary and multi-class datasets.
arXiv Detail & Related papers (2025-03-20T17:20:26Z)
- LLMC: Benchmarking Large Language Model Quantization with a Versatile Compression Toolkit [55.73370804397226]
Quantization, a key compression technique, can effectively mitigate these demands by compressing and accelerating large language models.
We present LLMC, a plug-and-play compression toolkit, to fairly and systematically explore the impact of quantization.
Powered by this versatile toolkit, our benchmark covers three key aspects: calibration data, algorithms (three strategies), and data formats.
arXiv Detail & Related papers (2024-05-09T11:49:05Z)
- Improving Convergence for Quantum Variational Classifiers using Weight Re-Mapping [60.086820254217336]
In recent years, quantum machine learning has seen a substantial increase in the use of variational quantum circuits (VQCs).
We introduce weight re-mapping for VQCs to unambiguously map the weights to an interval of length $2\pi$.
We demonstrate that weight re-mapping increased test accuracy for the Wine dataset by $10\%$ over using unmodified weights.
arXiv Detail & Related papers (2022-12-22T13:23:19Z)
- Variational Quantum Approximate Support Vector Machine With Inference Transfer [0.8057006406834467]
A kernel-based quantum machine learning technique for hyperlinear classification of complex data is presented.
A support vector machine can be realized inherently and explicitly on quantum circuits.
The accuracy of iris data classification reached 98.8%.
arXiv Detail & Related papers (2022-06-29T09:56:59Z)
- Handling Imbalanced Classification Problems With Support Vector Machines via Evolutionary Bilevel Optimization [73.17488635491262]
Support vector machines (SVMs) are popular learning algorithms to deal with binary classification problems.
This article introduces EBCS-SVM: evolutionary bilevel cost-sensitive SVMs.
arXiv Detail & Related papers (2022-04-21T16:08:44Z)
- Quantum Machine Learning Framework for Virtual Screening in Drug Discovery: a Prospective Quantum Advantage [0.0]
We show that a quantum integrated workflow can provide a tangible advantage compared to state-of-the-art classical algorithms.
We also test our algorithm on IBM Quantum processors using ADRB2 and COVID-19 datasets, showing that hardware simulations provide results in line with the predicted performance and can surpass classical equivalents.
arXiv Detail & Related papers (2022-04-08T12:05:27Z)
- Estimating Average Treatment Effects with Support Vector Machines [77.34726150561087]
The support vector machine (SVM) is one of the most popular classification algorithms in the machine learning literature.
We adapt SVM as a kernel-based weighting procedure that minimizes the maximum mean discrepancy between the treatment and control groups.
We characterize the bias of causal effect estimation arising from this trade-off, connecting the proposed SVM procedure to the existing kernel balancing methods.
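The maximum mean discrepancy minimized in that weighting procedure has a standard biased plug-in estimator computed from kernel blocks of the two groups; a minimal sketch with an RBF kernel:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix between row-vector samples A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd_squared(X, Y, gamma=1.0):
    """Biased estimator of squared MMD between samples X and Y
    (e.g. treatment and control groups)."""
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())

# identical groups have zero discrepancy; shifted groups do not
X = np.array([[0.0], [1.0], [2.0]])
print(mmd_squared(X, X))          # 0.0
print(mmd_squared(X, X + 5.0))    # clearly positive
```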
arXiv Detail & Related papers (2021-02-23T20:22:56Z)
- On Coresets for Support Vector Machines [61.928187390362176]
A coreset is a small, representative subset of the original data points.
We show that our algorithm can be used to extend the applicability of any off-the-shelf SVM solver to streaming, distributed, and dynamic data settings.
arXiv Detail & Related papers (2020-02-15T23:25:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.