Boost clustering with Gaussian Boson Sampling: a full quantum approach
- URL: http://arxiv.org/abs/2307.13348v1
- Date: Tue, 25 Jul 2023 09:05:24 GMT
- Title: Boost clustering with Gaussian Boson Sampling: a full quantum approach
- Authors: Nicolò Bonaldi, Martina Rossi, Daniele Mattioli, Michele Grapulin,
Blanca Silva Fernández, Davide Caputo, Marco Magagnini, Arianna Osti, and
Fabio Veronese
- Abstract summary: We propose a novel clustering approach based on Gaussian Boson Sampling (GBS).
We benchmark our approach with two well-known classical clustering algorithms.
Results show that our approach outperforms the two classical algorithms in two out of the three chosen metrics.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Gaussian Boson Sampling (GBS) is a recently developed paradigm of quantum
computing consisting of sending a Gaussian state through a linear
interferometer and then counting the number of photons in each output mode.
When the system encodes a symmetric matrix, GBS can be viewed as a tool to
sample subgraphs: the most sampled are those with a large number of perfect
matchings, and thus are the densest ones. This property has been the foundation
of the novel clustering approach we propose in this work, called GBS-based
clustering, which relies solely on GBS, without the need for classical
algorithms. The GBS-based clustering has been tested on several datasets and
benchmarked with two well-known classical clustering algorithms. Results
obtained by using a GBS simulator show that on average our approach outperforms
the two classical algorithms in two out of the three chosen metrics, making it
a viable full-quantum clustering option.
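The sampling property described in the abstract can be illustrated classically: for an unweighted graph, the GBS probability of observing a given subgraph scales with the squared hafnian of its adjacency submatrix, which counts that subgraph's perfect matchings, so dense subgraphs are sampled more often. A minimal pure-Python sketch of this counting argument (the toy graph and vertex sets are illustrative, not taken from the paper):

```python
def perfect_matchings(adj, vertices):
    """Count perfect matchings of the subgraph induced by `vertices`
    (the hafnian of its 0/1 adjacency submatrix)."""
    if not vertices:
        return 1
    if len(vertices) % 2:
        return 0  # odd vertex count: no perfect matching
    v, rest = vertices[0], vertices[1:]
    total = 0
    for i, u in enumerate(rest):
        if adj[v][u]:
            # Pair v with u, then match the remaining vertices.
            total += perfect_matchings(adj, rest[:i] + rest[i + 1:])
    return total

# Toy graph: a dense 4-clique {0,1,2,3} with a sparse tail 3-4-5.
adj = [[0] * 6 for _ in range(6)]
for a, b in [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4), (4, 5)]:
    adj[a][b] = adj[b][a] = 1

dense = perfect_matchings(adj, [0, 1, 2, 3])   # K4: 3 perfect matchings
sparse = perfect_matchings(adj, [2, 3, 4, 5])  # only (2,3)+(4,5): 1 matching
print(dense, sparse)
```

Since the GBS outcome probability for a subgraph grows with the square of this count, the dense clique above would be sampled roughly nine times more often than the sparse one, which is the mechanism the GBS-based clustering exploits.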
Related papers
- Clustering Based on Density Propagation and Subcluster Merging [92.15924057172195]
We propose a density-based node clustering approach that automatically determines the number of clusters and can be applied in both data space and graph space.
Unlike traditional density-based clustering methods, which necessitate calculating the distance between any two nodes, our proposed technique determines density through a propagation process.
arXiv Detail & Related papers (2024-11-04T04:09:36Z)
- Biclustering a dataset using photonic quantum computing [0.0]
Biclustering is a problem in machine learning and data mining.
We highlight the natural relation that quantum computing models like boson sampling and GBS have to this problem.
arXiv Detail & Related papers (2024-05-28T22:04:29Z)
- Gaussian Boson Sampling to Accelerate NP-Complete Vertex-Minor Graph Classification [0.9935277311162707]
We propose a hybrid quantum-classical algorithm for the NP-complete problem of determining whether one graph is a vertex-minor of another.
We find a graph embedding that allows trading between the one-shot classification accuracy and the amount of input squeezing.
We introduce a new classical algorithm based on graph spectra, which we show outperforms various well-known graph-similarity algorithms.
arXiv Detail & Related papers (2024-02-05T21:24:11Z)
- Superclustering by finding statistically significant separable groups of optimal gaussian clusters [0.0]
The paper presents an algorithm for clustering a dataset by grouping Gaussian clusters that are optimal from the point of view of the BIC criterion.
An essential advantage of the algorithm is its ability to predict the correct supercluster for new data based on an already trained clusterer.
arXiv Detail & Related papers (2023-09-05T23:49:46Z)
- Instance-Optimal Cluster Recovery in the Labeled Stochastic Block Model [79.46465138631592]
We devise an efficient algorithm that recovers clusters using the observed labels.
We present Instance-Adaptive Clustering (IAC), the first algorithm whose performance matches these lower bounds both in expectation and with high probability.
arXiv Detail & Related papers (2023-06-18T08:46:06Z)
- Variational Quantum and Quantum-Inspired Clustering [0.0]
We present a quantum algorithm for clustering data based on a variational quantum circuit.
The algorithm allows classifying data into many clusters, and can easily be implemented on few-qubit Noisy Intermediate-Scale Quantum (NISQ) devices.
arXiv Detail & Related papers (2022-06-20T17:02:19Z)
- Practical Quantum K-Means Clustering: Performance Analysis and Applications in Energy Grid Classification [0.0]
We propose a general, competitive, and parallelized version of quantum $k$-means clustering to avoid some pitfalls due to noisy hardware.
Using real-world German electricity grid data, we show that the new approach improves the balanced accuracy of the standard quantum $k$-means clustering by $67.8\%$ with respect to the labeling of the classical algorithm.
arXiv Detail & Related papers (2021-12-15T22:10:51Z)
- Lattice-Based Methods Surpass Sum-of-Squares in Clustering [98.46302040220395]
Clustering is a fundamental primitive in unsupervised learning.
Recent work has established lower bounds against the class of low-degree methods.
We show that, perhaps surprisingly, this particular clustering model does not exhibit a statistical-to-computational gap.
arXiv Detail & Related papers (2021-12-07T18:50:17Z)
- Scalable Hierarchical Agglomerative Clustering [65.66407726145619]
Existing scalable hierarchical clustering methods sacrifice quality for speed.
We present a scalable, agglomerative method for hierarchical clustering that does not sacrifice quality and scales to billions of data points.
arXiv Detail & Related papers (2020-10-22T15:58:35Z)
- Kernel learning approaches for summarising and combining posterior similarity matrices [68.8204255655161]
We build upon the notion of the posterior similarity matrix (PSM) in order to suggest new approaches for summarising the output of MCMC algorithms for Bayesian clustering models.
A key contribution of our work is the observation that PSMs are positive semi-definite, and hence can be used to define probabilistically-motivated kernel matrices.
arXiv Detail & Related papers (2020-09-27T14:16:14Z)
- Clustering Binary Data by Application of Combinatorial Optimization Heuristics [52.77024349608834]
We study clustering methods for binary data, first defining aggregation criteria that measure the compactness of clusters.
Five new and original methods are introduced, using neighborhoods and population behavior optimization metaheuristics.
From a set of 16 data tables generated by a quasi-Monte Carlo experiment, a comparison is performed for one of the aggregation criteria using L1 dissimilarity, against hierarchical clustering and a version of k-means: partitioning around medoids, or PAM.
arXiv Detail & Related papers (2020-01-06T23:33:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.