A weighted quantum ensemble of homogeneous quantum classifiers
- URL: http://arxiv.org/abs/2506.07810v1
- Date: Mon, 09 Jun 2025 14:38:13 GMT
- Title: A weighted quantum ensemble of homogeneous quantum classifiers
- Authors: Emiliano Tolotti, Enrico Blanzieri, Davide Pastorello
- Abstract summary: Homogeneous ensembles use identical models, achieving diversity through different data subsets. We propose a method to achieve a weighted homogeneous quantum ensemble using quantum classifiers with indexing registers for data encoding.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ensemble methods in machine learning aim to improve prediction accuracy by combining multiple models. This is achieved by ensuring diversity among predictors to capture different data aspects. Homogeneous ensembles use identical models, achieving diversity through different data subsets, and weighted-average ensembles assign higher influence to more accurate models through a weight learning procedure. We propose a method to achieve a weighted homogeneous quantum ensemble using quantum classifiers with indexing registers for data encoding. This approach leverages instance-based quantum classifiers, enabling feature and training point subsampling through superposition and controlled unitaries, and allowing for a quantum-parallel execution of diverse internal classifiers with different data compositions in superposition. The method integrates a learning process that combines circuit execution with classical weight optimization; at test time the trained ensemble is executed with the learned weights encoded in the circuit. Empirical evaluation demonstrates the effectiveness of the proposed method, offering insights into its performance.
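The weighted homogeneous ensemble described above can be sketched classically before any quantum circuitry: identical weak models, diversity only from data subsampling, and a per-member weight learned from accuracy. All names below are illustrative, and the classical bootstrap stands in for the superposition-based subsampling of the paper.

```python
import random

random.seed(0)
# Toy binary data: label is 1 when the first feature exceeds 0.5.
data = [(random.random(), random.random()) for _ in range(200)]
data = [(x, int(x[0] > 0.5)) for x in data]
train, test = data[:150], data[150:]

def fit_stump(sample):
    # Identical model family for every member: threshold feature 0
    # at the mean of the member's own subsample.
    t = sum(x[0] for x, _ in sample) / len(sample)
    return lambda x: int(x[0] > t)

# Homogeneous ensemble: diversity comes only from data subsampling.
members = [fit_stump(random.choices(train, k=50)) for _ in range(7)]

# Classical weight learning: weight each member by its training accuracy.
weights = [sum(m(x) == y for x, y in train) / len(train) for m in members]

def ensemble_predict(x):
    # Weighted vote: predict 1 when the weighted score passes half the mass.
    score = sum(w * m(x) for w, m in zip(weights, members))
    return int(score > sum(weights) / 2)

accuracy = sum(ensemble_predict(x) == y for x, y in test) / len(test)
```

In the paper the members run in quantum parallel and the weights are encoded in the circuit at test time; this sketch only mirrors the classical weight-learning loop.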
Related papers
- Self-supervised Latent Space Optimization with Nebula Variational Coding [87.20343320266215]
This paper proposes a variational inference model which leads to a clustered embedding. We introduce additional variables in the latent space, called nebula anchors, that guide the latent variables to form clusters during training. Since each latent feature can be labeled with the closest anchor, we also propose to apply metric learning in a self-supervised way to make the separation between clusters more explicit.
arXiv Detail & Related papers (2025-06-02T08:13:32Z) - An Efficient Quantum Classifier Based on Hamiltonian Representations [50.467930253994155]
Quantum machine learning (QML) is a discipline that seeks to transfer the advantages of quantum computing to data-driven tasks. We propose an efficient approach that circumvents the costs associated with data encoding by mapping inputs to a finite set of Pauli strings. We evaluate our approach on text and image classification tasks, against well-established classical and quantum models.
arXiv Detail & Related papers (2025-04-13T11:49:53Z) - Understanding the Role of Functional Diversity in Weight-Ensembling with Ingredient Selection and Multidimensional Scaling [7.535219325248997]
We introduce two novel weight-ensembling approaches to study the link between performance dynamics and the nature of how each method decides to apply the functionally diverse components.
We develop a visualization tool to explain how each algorithm explores various domains defined via pairwise-distances to further investigate selection and algorithms' convergence.
arXiv Detail & Related papers (2024-09-04T00:24:57Z) - Strategic Data Re-Uploads: A Pathway to Improved Quantum Classification [0.0]
Re-uploading classical information into quantum states multiple times can enhance the accuracy of quantum classifiers.
We demonstrate our approach on two classification patterns: a linear classification pattern (LCP) and a non-linear classification pattern (NLCP).
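The re-uploading idea above can be sketched on a single qubit with real amplitudes: the same scalar input is uploaded once per layer, each time mixed with a trainable weight and bias. The parameters below are fixed, hypothetical values, not trained.

```python
import math

def ry(theta):
    # Single-qubit Y-rotation as a real 2x2 matrix.
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -s], [s, c]]

def apply(gate, state):
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

def reupload_classify(x, params):
    # Re-upload the input x once per layer, each time combined with a
    # (hypothetical, untrained) weight w and bias b.
    state = [1.0, 0.0]                      # start in |0>
    for w, b in params:
        state = apply(ry(w * x + b), state)
    p0 = state[0] ** 2                      # probability of measuring |0>
    return int(p0 < 0.5)                    # class 1 if |1> is more likely

label = reupload_classify(0.3, [(2.0, 0.1), (1.5, -0.2)])
```

With zero angles the circuit is the identity, so the classifier returns class 0 for the untouched |0> state; training would fit the weights and biases to a target pattern.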
arXiv Detail & Related papers (2024-05-15T14:28:00Z) - Ensembles of Quantum Classifiers [0.0]
A viable approach for the execution of quantum classification algorithms is the introduction of the ensemble methods.
In this work, we present an implementation and an empirical evaluation of ensembles of quantum classifiers for binary classification.
arXiv Detail & Related papers (2023-11-16T10:27:25Z) - Multimodal deep representation learning for quantum cross-platform verification [60.01590250213637]
Cross-platform verification, a critical undertaking in the realm of early-stage quantum computing, endeavors to characterize the similarity of two imperfect quantum devices executing identical algorithms.
We introduce an innovative multimodal learning approach, recognizing that the formalism of data in this task embodies two distinct modalities.
We devise a multimodal neural network to independently extract knowledge from these modalities, followed by a fusion operation to create a comprehensive data representation.
arXiv Detail & Related papers (2023-11-07T04:35:03Z) - Convolutional autoencoder-based multimodal one-class classification [80.52334952912808]
One-class classification refers to approaches of learning using data from a single class only.
We propose a deep learning one-class classification method suitable for multimodal data.
arXiv Detail & Related papers (2023-09-25T12:31:18Z) - Ensemble-learning variational shallow-circuit quantum classifiers [4.104704267247209]
We propose two ensemble-learning classification methods, namely bootstrap aggregating and adaptive boosting.
The protocols have been exemplified for classical handwriting digits as well as quantum phase discrimination of a symmetry-protected topological Hamiltonian.
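The adaptive-boosting protocol mentioned above follows the classical AdaBoost recipe regardless of whether the weak learners are shallow quantum circuits. A minimal sketch of one boosting round, with illustrative names:

```python
import math

def boost_round(sample_w, preds, labels):
    # One AdaBoost-style update: score the weak learner by its weighted
    # error, then renormalize so misclassified examples gain weight.
    eps = sum(w for w, p, y in zip(sample_w, preds, labels) if p != y)
    eps = min(max(eps, 1e-12), 1 - 1e-12)    # guard the logarithm
    alpha = 0.5 * math.log((1 - eps) / eps)  # weight of this learner
    new_w = [w * math.exp(alpha if p != y else -alpha)
             for w, p, y in zip(sample_w, preds, labels)]
    z = sum(new_w)
    return alpha, [w / z for w in new_w]

# A learner that misses example 2 out of four equally weighted examples.
alpha, new_w = boost_round([0.25] * 4, [1, 1, 0, 0], [1, 1, 1, 0])
```

After the round the misclassified example carries the largest share of the sample weight, so the next weak learner (classical or quantum) focuses on it.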
arXiv Detail & Related papers (2023-01-30T07:26:35Z) - Deep Unfolding-based Weighted Averaging for Federated Learning in Heterogeneous Environments [11.023081396326507]
Federated learning is a collaborative model training method that iterates model updates by multiple clients and aggregation of the updates by a central server.
To adjust the aggregation weights, this paper employs deep unfolding, which is known as the parameter tuning method.
The proposed method can handle large-scale learning models with the aid of pretrained models, so that it can perform practical real-world tasks.
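The server-side aggregation that the adjustable weights feed into can be sketched as a weighted average of per-client parameter vectors. The weights below are fixed, hypothetical values; in the paper they are tuned by deep unfolding.

```python
def aggregate(client_params, weights):
    # Server-side weighted averaging of per-client parameter vectors
    # (FedAvg-style). The weights are hypothetical fixed values here;
    # the deep-unfolding tuning step is omitted.
    total = sum(weights)
    dim = len(client_params[0])
    return [sum(w * p[i] for w, p in zip(weights, client_params)) / total
            for i in range(dim)]

# Two clients, the first trusted with twice the weight of the second.
merged = aggregate([[1.0, 2.0], [4.0, 8.0]], [2.0, 1.0])
```

With equal weights this reduces to plain FedAvg; unequal weights let the server discount clients whose data distribution diverges from the rest.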
arXiv Detail & Related papers (2022-12-23T08:20:37Z) - A Stochastic Newton Algorithm for Distributed Convex Optimization [62.20732134991661]
We analyze a Newton algorithm for homogeneous distributed convex optimization, where each machine can calculate gradients of the same population objective.
We show that our method can reduce the number and frequency of required communication rounds compared to existing methods without hurting performance.
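The homogeneous setting above, where every machine holds gradients of the same population objective, admits a simple one-dimensional sketch of a synchronized Newton round: the server averages the reported gradients and (scalar) Hessians and takes one Newton step. Names are illustrative.

```python
def distributed_newton_step(x, grads, hessians):
    # Each machine reports a stochastic gradient and (scalar) Hessian
    # of the shared objective; the server averages both and performs
    # one Newton update. A 1-D sketch of a synchronized round.
    g = sum(grads) / len(grads)
    h = sum(hessians) / len(hessians)
    return x - g / h

# Quadratic f(x) = (x - 3)^2: gradient 2(x - 3), Hessian 2 everywhere,
# so one exact Newton step from x = 0 lands on the minimizer x = 3.
x_new = distributed_newton_step(0.0, [-6.0, -6.0, -6.0], [2.0, 2.0, 2.0])
```

Averaging curvature as well as gradients is what lets Newton-type methods take large, well-scaled steps and hence fewer communication rounds than gradient averaging alone.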
arXiv Detail & Related papers (2021-10-07T17:51:10Z) - Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
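A model post-processing step of this kind can be sketched as a per-group shift applied to the learned scores, leaving the scoring model itself untouched. The offsets below are fixed, hypothetical values; in practice they would be tuned on held-out labeled data to balance fairness and utility.

```python
def adjust_scores(scores, groups, offsets):
    # Model-agnostic post-processing: shift each learned score by a
    # per-group offset (hypothetical fixed values here) to trade
    # ranking fairness against utility.
    return [s + offsets[g] for s, g in zip(scores, groups)]

raw = [0.9, 0.4, 0.8, 0.3]
groups = ["a", "b", "a", "b"]
adjusted = adjust_scores(raw, groups, {"a": 0.0, "b": 0.2})
```

Because only the final scores change, the same adjustment applies to any black-box scoring function.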
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.