Quantum Spectral Clustering: Comparing Parameterized and Neuromorphic Quantum Kernels
- URL: http://arxiv.org/abs/2507.07018v1
- Date: Wed, 09 Jul 2025 16:46:49 GMT
- Title: Quantum Spectral Clustering: Comparing Parameterized and Neuromorphic Quantum Kernels
- Authors: Donovan Slabbert, Dean Brand, Francesco Petruccione
- Abstract summary: We compare a parameterized quantum kernel (pQK) with a quantum leaky integrate-and-fire (QLIF) neuromorphic computing approach. For the synthetic datasets and \texttt{Iris}, the QLIF kernel typically achieves better classification and clustering performance than pQK.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We compare a parameterized quantum kernel (pQK) with a quantum leaky integrate-and-fire (QLIF) neuromorphic computing approach that employs either the Victor-Purpura or van Rossum kernel in a spectral clustering task, as well as the classical radial basis function (RBF) kernel. Performance evaluation includes label-based classification and clustering metrics, as well as predictions of the optimal number of clusters for each dataset based on an elbow-like curve, as is typically used in $K$-means clustering. The pQK encodes feature vectors through angle encoding with rotation angles scaled parametrically. Parameters are optimized through grid search to maximize kernel-target alignment, producing a kernel that reflects distances in the feature space. The quantum neuromorphic approach uses population coding to transform data into spike trains, which are then processed using temporal distance metrics. Kernel matrices are used as input into a classical spectral clustering pipeline prior to performance evaluation. For the synthetic datasets and \texttt{Iris}, the QLIF kernel typically achieves better classification and clustering performance than pQK. However, on higher-dimensional datasets, such as a preprocessed version of the Sloan Digital Sky Survey (\texttt{SDSS}), pQK performed better, indicating a possible advantage in higher-dimensional regimes.
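The pQK construction described in the abstract can be sketched with a small classical simulation. The sketch below assumes single-qubit $R_y$ angle encoding, for which the fidelity kernel has the closed form $K_{ij} = \prod_d \cos^2(\gamma (x_{id} - x_{jd})/2)$, and a single scaling parameter $\gamma$ chosen by grid search over kernel-target alignment; the paper's actual circuit and parameterization may differ.

```python
import numpy as np

def angle_kernel(X, gamma):
    """Fidelity kernel for Ry angle encoding: K_ij = prod_d cos^2(gamma*(x_id - x_jd)/2)."""
    diff = X[:, None, :] - X[None, :, :]            # (n, n, d) pairwise differences
    return np.prod(np.cos(gamma * diff / 2.0) ** 2, axis=-1)

def kernel_target_alignment(K, y):
    """A(K, yy^T) = <K, yy^T>_F / (||K||_F ||yy^T||_F), with labels y in {-1, +1}."""
    Y = np.outer(y, y)
    return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

# Toy two-class data: grid search over gamma to maximize alignment.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(2.0, 0.3, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)

grid = np.linspace(0.1, 3.0, 30)
alignments = [kernel_target_alignment(angle_kernel(X, g), y) for g in grid]
best_gamma = grid[int(np.argmax(alignments))]
K_best = angle_kernel(X, best_gamma)    # would be fed to spectral clustering downstream
```

The optimized kernel matrix `K_best` plays the role of the precomputed affinity matrix that the paper feeds into a classical spectral clustering pipeline.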
Related papers
- K*-Means: A Parameter-free Clustering Algorithm [55.20132267309382]
k*-means is a novel clustering algorithm that eliminates the need to set k or any other parameters. It uses the minimum description length principle to automatically determine the optimal number of clusters, k*, by splitting and merging clusters. We prove that k*-means is guaranteed to converge and demonstrate experimentally that it significantly outperforms existing methods in scenarios where k is unknown.
arXiv Detail & Related papers (2025-05-17T08:41:07Z) - Self-Tuning Spectral Clustering for Speaker Diarization [19.487611671681968]
We present a novel pruning algorithm to create a sparse affinity matrix, called spectral clustering on p-neighborhood retained affinity matrix (SC-pNA). Our method improves on node-specific fixed neighbor selection by allowing a variable number of neighbors, eliminating the need for external tuning data. Experimental results on the challenging DIHARD-III dataset highlight the superiority of SC-pNA.
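The blurb's exact retention rule is not given here, but node-adaptive affinity pruning can be illustrated with a minimal sketch: each row keeps only entries above that row's own mean plus one standard deviation (a hypothetical stand-in for SC-pNA's p-neighborhood criterion), so the number of retained neighbors varies per node, and the result is symmetrized so it remains a valid affinity matrix.

```python
import numpy as np

def prune_affinity(A):
    """Node-adaptive pruning of an affinity matrix.

    The per-row threshold (mean + std) is a hypothetical criterion, not
    SC-pNA's actual rule; the key idea is that each node keeps a variable
    number of neighbors rather than a fixed k."""
    A = A.copy()
    np.fill_diagonal(A, 0.0)
    thresh = A.mean(axis=1, keepdims=True) + A.std(axis=1, keepdims=True)
    pruned = np.where(A > thresh, A, 0.0)
    return np.maximum(pruned, pruned.T)   # symmetrize

rng = np.random.default_rng(1)
A = rng.random((8, 8))
A = (A + A.T) / 2.0                       # dense symmetric toy affinity
S = prune_affinity(A)                     # sparse, node-adaptively pruned
```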
arXiv Detail & Related papers (2024-09-16T05:07:47Z) - Revisiting Instance-Optimal Cluster Recovery in the Labeled Stochastic Block Model [69.15976031704687]
We propose IAC (Instance-Adaptive Clustering), the first algorithm whose performance matches the instance-specific lower bounds both in expectation and with high probability. IAC maintains an overall computational complexity of $\mathcal{O}(n\,\mathrm{polylog}(n))$, making it scalable and practical for large-scale problems.
arXiv Detail & Related papers (2023-06-18T08:46:06Z) - Rethinking k-means from manifold learning perspective [122.38667613245151]
We present a new clustering algorithm which directly detects clusters of data without mean estimation.
Specifically, we construct the distance matrix between data points using a Butterworth filter.
To well exploit the complementary information embedded in different views, we leverage the tensor Schatten p-norm regularization.
arXiv Detail & Related papers (2023-05-12T03:01:41Z) - Automatic and effective discovery of quantum kernels [41.61572387137452]
Quantum computing can empower machine learning models by enabling kernel machines to leverage quantum kernels for representing similarity measures between data. We present an approach to this problem, which employs optimization techniques, similar to those used in neural architecture search and AutoML. The results obtained by testing our approach on a high-energy physics problem demonstrate that, in the best-case scenario, we can either match or improve testing accuracy with respect to the manual design approach.
arXiv Detail & Related papers (2022-09-22T16:42:14Z) - Local Sample-weighted Multiple Kernel Clustering with Consensus Discriminative Graph [73.68184322526338]
Multiple kernel clustering (MKC) is committed to achieving optimal information fusion from a set of base kernels.
This paper proposes a novel local sample-weighted multiple kernel clustering model.
Experimental results demonstrate that our LSWMKC possesses better local manifold representation and outperforms existing kernel or graph-based clustering algorithms.
arXiv Detail & Related papers (2022-07-05T05:00:38Z) - DRBM-ClustNet: A Deep Restricted Boltzmann-Kohonen Architecture for Data Clustering [0.0]
A Bayesian Deep Restricted Boltzmann-Kohonen architecture for data clustering, termed DRBM-ClustNet, is proposed.
The processing of unlabeled data is done in three stages for efficient clustering of the non-linearly separable datasets.
The framework is evaluated based on clustering accuracy and ranked against other state-of-the-art clustering methods.
arXiv Detail & Related papers (2022-05-13T15:12:18Z) - Kernel Identification Through Transformers [54.3795894579111]
Kernel selection plays a central role in determining the performance of Gaussian Process (GP) models.
This work addresses the challenge of constructing custom kernel functions for high-dimensional GP regression models.
We introduce a novel approach named KITT: Kernel Identification Through Transformers.
arXiv Detail & Related papers (2021-06-15T14:32:38Z) - Random Features for the Neural Tangent Kernel [57.132634274795066]
We propose an efficient feature map construction for the Neural Tangent Kernel (NTK) of a fully-connected ReLU network.
We show that the dimension of the resulting features is much smaller than that of other baseline feature map constructions achieving comparable error bounds, both in theory and practice.
arXiv Detail & Related papers (2021-04-03T09:08:12Z) - Kernel learning approaches for summarising and combining posterior similarity matrices [68.8204255655161]
We build upon the notion of the posterior similarity matrix (PSM) in order to suggest new approaches for summarising the output of MCMC algorithms for Bayesian clustering models.
A key contribution of our work is the observation that PSMs are positive semi-definite, and hence can be used to define probabilistically-motivated kernel matrices.
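The co-clustering observation above can be made concrete: the PSM entry for points $i$ and $j$ is the fraction of MCMC draws in which they land in the same cluster. Each draw contributes a block-diagonal indicator matrix (which is positive semi-definite), so their average is too, and the PSM can serve directly as a kernel. A minimal sketch, assuming cluster labels are available as one row per MCMC draw:

```python
import numpy as np

def posterior_similarity_matrix(labels):
    """labels: (n_draws, n_points) cluster assignments from MCMC draws.

    PSM[i, j] = fraction of draws in which points i and j are co-clustered.
    Each draw's co-clustering indicator matrix is PSD, so the average is PSD."""
    draws = np.asarray(labels)
    co = (draws[:, :, None] == draws[:, None, :])   # (n_draws, n, n) indicators
    return co.mean(axis=0)

# Three toy draws over four points.
labels = np.array([[0, 0, 1, 1],
                   [0, 0, 0, 1],
                   [0, 1, 1, 1]])
psm = posterior_similarity_matrix(labels)   # e.g. psm[0, 1] = 2/3
```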
arXiv Detail & Related papers (2020-09-27T14:16:14Z) - Average Sensitivity of Spectral Clustering [31.283432482502278]
We study the stability of spectral clustering against edge perturbations in the input graph.
Our results suggest that spectral clustering is stable against edge perturbations when there is a cluster structure in the input graph.
arXiv Detail & Related papers (2020-06-07T09:14:44Z) - An End-to-End Graph Convolutional Kernel Support Vector Machine [0.0]
A kernel-based support vector machine (SVM) for graph classification is proposed.
The proposed model is trained in a supervised end-to-end manner.
Experimental results demonstrate that the proposed model outperforms existing deep learning baseline models on a number of datasets.
arXiv Detail & Related papers (2020-02-29T09:57:42Z) - Fast Kernel k-means Clustering Using Incomplete Cholesky Factorization [11.631064399465089]
Kernel-based clustering algorithms can identify and capture the non-linear structure in datasets, and can therefore achieve better performance than linear clustering.
However, computing and storing the entire kernel matrix requires so much memory that it is difficult for kernel-based clustering to handle large-scale datasets.
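The memory bottleneck motivates low-rank factorizations of the kernel matrix. Below is a minimal greedy pivoted (incomplete) Cholesky sketch that builds $K \approx LL^\top$ one column at a time, always pivoting on the largest remaining diagonal residual and stopping once it falls below a tolerance; the details of the paper's variant may differ.

```python
import numpy as np

def pivoted_cholesky(K, tol=1e-8):
    """Greedy pivoted (incomplete) Cholesky: K ~= L @ L.T with few columns.

    Adds one column per step, pivoting on the largest residual diagonal
    entry, and stops once that residual drops below `tol`."""
    n = K.shape[0]
    d = np.diag(K).astype(float).copy()   # residual diagonal
    L = np.zeros((n, n))
    for k in range(n):
        i = int(np.argmax(d))
        if d[i] <= tol:                   # remaining approximation error negligible
            return L[:, :k]
        L[:, k] = (K[:, i] - L[:, :k] @ L[i, :k]) / np.sqrt(d[i])
        d -= L[:, k] ** 2
        d[d < 0.0] = 0.0                  # guard against round-off
    return L

# Rank-3 toy kernel: only ~3 columns are needed instead of the full n x n matrix.
rng = np.random.default_rng(2)
X = rng.normal(size=(12, 3))
K = X @ X.T
L = pivoted_cholesky(K, tol=1e-10)
```

Storing `L` costs O(nk) memory instead of the O(n^2) needed for the full kernel matrix, which is the point of such factorizations for large-scale kernel clustering.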
arXiv Detail & Related papers (2020-02-07T15:32:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.