The Binary and Ternary Quantization Can Improve Feature Discrimination
- URL: http://arxiv.org/abs/2504.13792v1
- Date: Fri, 18 Apr 2025 16:44:12 GMT
- Title: The Binary and Ternary Quantization Can Improve Feature Discrimination
- Authors: Weizhi Lu, Mingrui Chen, Weiyu Li
- Abstract summary: In machine learning, quantization is widely used to simplify data representation and facilitate algorithm deployment on hardware. Current research focuses on quantization errors, operating under the premise that higher quantization errors generally result in lower classification performance. We show that certain extremely low bit-width quantization methods, such as $\{0,1\}$-binary quantization and $\{0,\pm1\}$-ternary quantization, can achieve comparable or even superior classification accuracy.
- Score: 8.723496120436169
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In machine learning, quantization is widely used to simplify data representation and facilitate algorithm deployment on hardware. Given the fundamental role of classification in machine learning, it is crucial to investigate the impact of quantization on classification. Current research primarily focuses on quantization errors, operating under the premise that higher quantization errors generally result in lower classification performance. However, this premise lacks a solid theoretical foundation and often contradicts empirical findings. For instance, certain extremely low bit-width quantization methods, such as $\{0,1\}$-binary quantization and $\{0, \pm1\}$-ternary quantization, can achieve comparable or even superior classification accuracy compared to the original non-quantized data, despite exhibiting high quantization errors. To more accurately evaluate classification performance, we propose to directly investigate the feature discrimination of quantized data, instead of analyzing its quantization error. Interestingly, it is found that both binary and ternary quantization methods can improve, rather than degrade, the feature discrimination of the original data. This remarkable performance is validated through classification experiments across various data types, including images, speech, and texts.
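The two quantization schemes the abstract names are easy to state concretely. Below is a minimal sketch, assuming a simple magnitude threshold for each quantizer and a per-feature Fisher score as a stand-in for "feature discrimination"; neither choice is claimed to be the paper's exact procedure.

```python
import numpy as np

def binary_quantize(X, threshold=0.0):
    # {0,1}-binary quantization: keep a 0/1 pattern of entries above a threshold.
    return (X > threshold).astype(float)

def ternary_quantize(X, delta):
    # {0, +1, -1}-ternary quantization: zero small entries, keep signs of large ones.
    return np.sign(X) * (np.abs(X) > delta)

def fisher_score(X, y):
    # Mean per-feature ratio of between-class to within-class scatter
    # (higher = more discriminative); a simple proxy, not the paper's criterion.
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    v0, v1 = X[y == 0].var(axis=0), X[y == 1].var(axis=0)
    return ((m0 - m1) ** 2 / (v0 + v1 + 1e-12)).mean()

rng = np.random.default_rng(0)
n, d = 1000, 256
y = rng.integers(0, 2, size=n)
# Sparse, heavy-tailed features with a class-dependent shift on a few coordinates.
X = rng.laplace(size=(n, d)) + y[:, None] * (rng.random(d) > 0.9)

for name, Xq in [("original", X),
                 ("binary", binary_quantize(X)),
                 ("ternary", ternary_quantize(X, delta=1.0))]:
    print(f"{name:8s} Fisher score: {fisher_score(Xq, y):.4f}")
```

Depending on the data, the quantized scores can match or exceed the original; that possibility, rather than quantization error, is the effect the paper studies.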
Related papers
- Quantization of Large Language Models with an Overdetermined Basis [73.79368761182998]
We introduce an algorithm for data quantization based on the principles of Kashin representation.
Our findings demonstrate that Kashin Quantization achieves competitive or superior quality in model performance.
arXiv Detail & Related papers (2024-04-15T12:38:46Z)
- Ensembles of Quantum Classifiers [0.0]
A viable approach for the execution of quantum classification algorithms is the introduction of ensemble methods.
In this work, we present an implementation and an empirical evaluation of ensembles of quantum classifiers for binary classification.
arXiv Detail & Related papers (2023-11-16T10:27:25Z)
- Deep Imbalanced Regression via Hierarchical Classification Adjustment [50.19438850112964]
Regression tasks in computer vision are often formulated into classification by quantizing the target space into classes.
The majority of training samples lie in a head range of target values, while a minority of samples span a usually larger tail range.
We propose to construct hierarchical classifiers for solving imbalanced regression tasks.
Our novel hierarchical classification adjustment (HCA) for imbalanced regression shows superior results on three diverse tasks.
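As a concrete illustration of the formulation in the first line of this summary, here is a toy sketch, assuming arbitrary quantile bin edges and a two-level coarse-to-fine split; it is not the paper's HCA procedure, only the target-binning idea it builds on.

```python
import numpy as np

# Toy sketch: cast regression as classification by binning a long-tailed target,
# then refine each coarse bin into finer classes (a two-level stand-in for a
# hierarchy). Bin counts and edges are arbitrary illustrative choices.
rng = np.random.default_rng(1)
targets = rng.exponential(scale=10.0, size=10_000)  # head-heavy, long-tailed

coarse_edges = np.quantile(targets, [0.5, 0.9])   # split head / middle / tail
coarse_cls = np.digitize(targets, coarse_edges)   # classes 0, 1, 2

# Fine classes within each coarse bin, so tail targets get their own labels
# instead of being swamped by the head range.
fine_cls = np.empty_like(coarse_cls)
for c in range(3):
    mask = coarse_cls == c
    edges = np.quantile(targets[mask], [0.25, 0.5, 0.75])
    fine_cls[mask] = 4 * c + np.digitize(targets[mask], edges)  # 4 fine bins each

print(np.bincount(coarse_cls), np.bincount(fine_cls))
```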
arXiv Detail & Related papers (2023-10-26T04:54:39Z)
- Ensemble-learning variational shallow-circuit quantum classifiers [4.104704267247209]
We propose two ensemble-learning classification methods, namely bootstrap aggregating and adaptive boosting.
The protocols have been exemplified for classical handwriting digits as well as quantum phase discrimination of a symmetry-protected topological Hamiltonian.
arXiv Detail & Related papers (2023-01-30T07:26:35Z)
- A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations in toy and real-world datasets using the Qiskit quantum computing SDK.
arXiv Detail & Related papers (2022-11-23T18:25:32Z)
- Multi-Label Quantification [78.83284164605473]
Quantification, variously called "labelled prevalence estimation" or "learning to quantify", is the supervised learning task of generating predictors of the relative frequencies of the classes of interest in unlabelled data samples.
We propose methods for inferring estimators of class prevalence values that strive to leverage the dependencies among the classes of interest in order to predict their relative frequencies more accurately.
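For context, the simplest quantification baseline is "classify and count". A minimal single-label sketch follows, assuming scikit-learn's LogisticRegression; the paper's contribution is a multi-label method that models class dependencies, which this baseline deliberately ignores.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Classify-and-count: train a classifier, predict on the unlabelled sample,
# and report the predicted label frequencies as the prevalence estimate.
# (Generic baseline for illustration; not the paper's dependency-aware method.)
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 10))
y_train = (X_train[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)
X_unlabelled = rng.normal(loc=0.3, size=(2000, 10))  # sample with shifted prevalence

clf = LogisticRegression().fit(X_train, y_train)
pred = clf.predict(X_unlabelled)
prevalence = np.bincount(pred, minlength=2) / len(pred)
print("estimated class prevalences:", prevalence)
```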
arXiv Detail & Related papers (2022-11-15T11:29:59Z)
- Ternary and Binary Quantization for Improved Classification [11.510216175832568]
We study the methodology of first reducing data dimension by random projection and then quantizing the projections to ternary or binary codes.
We observe that the quantization can provide comparable and often superior accuracy when the data to be quantized are sparse features generated with common filters.
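A minimal sketch of the two-stage pipeline this summary describes: a random Gaussian projection for dimension reduction, then ternary quantization of the projections. The keep-ratio thresholding rule is an assumption for illustration, not necessarily the paper's quantizer.

```python
import numpy as np

def project_and_ternarize(X, k, keep=0.3, seed=0):
    """Reduce dimension with a random Gaussian projection, then quantize each
    projected vector to {0, +1, -1} by keeping the signs of its largest entries.
    (The keep-ratio threshold is an illustrative choice, not the paper's rule.)"""
    rng = np.random.default_rng(seed)
    R = rng.normal(size=(X.shape[1], k)) / np.sqrt(k)  # random projection matrix
    Z = X @ R
    # Per-sample threshold: keep the top `keep` fraction of entries by magnitude.
    thr = np.quantile(np.abs(Z), 1.0 - keep, axis=1, keepdims=True)
    return np.sign(Z) * (np.abs(Z) >= thr)

X = np.random.default_rng(1).normal(size=(100, 784))  # e.g. flattened images
codes = project_and_ternarize(X, k=128)
print(codes.shape, np.unique(codes))  # (100, 128) [-1. 0. 1.]
```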
arXiv Detail & Related papers (2022-03-31T05:04:52Z)
- Regularized Classification-Aware Quantization [39.04839665081476]
We present a class of algorithms that learn distributed quantization schemes for binary classification tasks.
Our method is called Regularized Classification-Aware Quantization.
arXiv Detail & Related papers (2021-07-12T21:27:48Z)
- Robust quantum classifier with minimal overhead [0.8057006406834467]
Several quantum algorithms for binary classification based on the kernel method have been proposed.
These algorithms rely on estimating an expectation value, which in turn requires an expensive quantum data encoding procedure to be repeated many times.
We show that the kernel-based binary classification can be performed with a single-qubit measurement regardless of the number and the dimension of the data.
arXiv Detail & Related papers (2021-04-16T14:51:00Z)
- Theoretical Insights Into Multiclass Classification: A High-dimensional Asymptotic View [82.80085730891126]
We provide the first modern, precise analysis of linear multiclass classification.
Our analysis reveals that the classification accuracy is highly distribution-dependent.
The insights gained may pave the way for a precise understanding of other classification algorithms.
arXiv Detail & Related papers (2020-11-16T05:17:29Z)
- M2m: Imbalanced Classification via Major-to-minor Translation [79.09018382489506]
In most real-world scenarios, labeled training datasets are highly class-imbalanced, and deep neural networks trained on them struggle to generalize to a balanced testing criterion.
In this paper, we explore a novel yet simple way to alleviate this issue by augmenting less-frequent classes via translating samples from more-frequent classes.
Our experimental results on a variety of class-imbalanced datasets show that the proposed method improves the generalization on minority classes significantly compared to other existing re-sampling or re-weighting methods.
arXiv Detail & Related papers (2020-04-01T13:21:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.