SimpleMKKM: Simple Multiple Kernel K-means
- URL: http://arxiv.org/abs/2005.04975v2
- Date: Tue, 12 May 2020 14:05:04 GMT
- Title: SimpleMKKM: Simple Multiple Kernel K-means
- Authors: Xinwang Liu, En Zhu, Jiyuan Liu, Timothy Hospedales, Yang Wang, Meng Wang
- Abstract summary: We propose a simple yet effective multiple kernel clustering algorithm, termed simple multiple kernel k-means (SimpleMKKM).
Our criterion is given by an intractable minimization-maximization problem in the kernel coefficient and clustering partition matrix.
We theoretically analyze the performance of SimpleMKKM in terms of its clustering generalization error.
- Score: 49.500663154085586
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a simple yet effective multiple kernel clustering algorithm,
termed simple multiple kernel k-means (SimpleMKKM). It extends the widely used
supervised kernel alignment criterion to multi-kernel clustering. Our criterion
is given by an intractable minimization-maximization problem in the kernel
coefficient and clustering partition matrix. To optimize it, we re-formulate
the problem as a smooth minimization one, which can be solved efficiently using
a reduced gradient descent algorithm. We theoretically analyze the performance
of SimpleMKKM in terms of its clustering generalization error. Comprehensive
experiments on 11 benchmark datasets demonstrate that SimpleMKKM outperforms
state of the art multi-kernel clustering alternatives.
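The abstract describes a min-max objective over kernel weights and the partition matrix, solved by reduced gradient descent after re-formulation. The following is a minimal sketch of that style of optimization, not the paper's exact algorithm: it assumes the combined kernel is a squared-weighted sum of base kernels, solves the inner maximization via a top-k eigendecomposition, and uses projected gradient descent on the probability simplex as a simplification of the reduced gradient scheme. All function names and step sizes here are illustrative assumptions.

```python
import numpy as np

def simple_mkkm_sketch(kernels, k, n_iter=50, lr=0.1):
    """Sketch of a SimpleMKKM-style min-max optimization (hypothetical).

    kernels: list of (n, n) PSD kernel matrices.
    k: number of clusters.
    Minimizes over simplex weights gamma the maximum over partitions H
    of tr(H^T K_gamma H), with K_gamma = sum_p gamma_p^2 K_p.
    """
    m = len(kernels)
    gamma = np.full(m, 1.0 / m)
    for _ in range(n_iter):
        K = sum(g ** 2 * Kp for g, Kp in zip(gamma, kernels))
        # Inner maximization: H is spanned by the top-k eigenvectors.
        _, v = np.linalg.eigh(K)
        H = v[:, -k:]
        # Gradient of the outer objective with respect to gamma.
        grad = np.array([2 * g * np.trace(H.T @ Kp @ H)
                         for g, Kp in zip(gamma, kernels)])
        gamma = project_simplex(gamma - lr * grad)
    return gamma, H

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0)
```

The simplex projection keeps the kernel weights non-negative and summing to one; the paper's reduced gradient method instead eliminates one weight variable, but both enforce the same constraint set.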
Related papers
- Multiple kernel concept factorization algorithm based on global fusion [9.931283387968856]
In an unsupervised setting, designing or selecting a proper kernel function for a specific dataset is difficult; a new algorithm called Globalized Multiple Kernel Concept Factorization (GMKCF) was proposed to address this.
The proposed algorithm outperforms comparison algorithms in data clustering, including Kernel K-Means (KKM), Spectral Clustering (SC), Kernel Concept Factorization (KCF), Co-regularized multi-view spectral clustering (Coreg), and Robust Multiple Kernel K-Means (RMKKM).
arXiv Detail & Related papers (2024-10-27T09:13:57Z)
- Self-Supervised Graph Embedding Clustering [70.36328717683297]
K-means one-step dimensionality reduction clustering method has made some progress in addressing the curse of dimensionality in clustering tasks.
We propose a unified framework that integrates manifold learning with K-means, resulting in the self-supervised graph embedding framework.
arXiv Detail & Related papers (2024-09-24T08:59:51Z)
- MOKD: Cross-domain Finetuning for Few-shot Classification via Maximizing Optimized Kernel Dependence [97.93517982908007]
In cross-domain few-shot classification, NCC aims to learn representations to construct a metric space where few-shot classification can be performed.
In this paper, we find that there exist high similarities between NCC-learned representations of two samples from different classes.
We propose a bi-level optimization framework, Maximizing Optimized Kernel Dependence (MOKD), to learn a set of class-specific representations that match the cluster structures indicated by labeled data.
arXiv Detail & Related papers (2024-05-29T05:59:52Z)
- Accelerated sparse Kernel Spectral Clustering for large scale data clustering problems [0.27257174044950283]
An improved version of the sparse multiway kernel spectral clustering (KSC) is presented in this brief.
The original algorithm is derived from weighted kernel principal component analysis formulated within the primal-dual least-squares support vector machine (LS-SVM) framework.
Sparsity is then achieved by combining an incomplete Cholesky decomposition (ICD) based low-rank approximation of the kernel matrix with the so-called reduced set method.
arXiv Detail & Related papers (2023-10-20T09:51:42Z)
- Multi-Prototypes Convex Merging Based K-Means Clustering Algorithm [20.341309224377866]
A multi-prototypes convex merging based K-Means clustering algorithm (MCKM) is presented.
MCKM is an efficient and explainable clustering algorithm that escapes the undesirable local minima of the K-Means problem without requiring k to be given first.
arXiv Detail & Related papers (2023-02-14T13:57:33Z)
- Multiple Kernel Clustering with Dual Noise Minimization [56.009011016367744]
Multiple kernel clustering (MKC) aims to group data by integrating complementary information from base kernels.
In this paper, we rigorously define dual noise and propose a novel parameter-free MKC algorithm by minimizing them.
We observe that dual noise pollutes the block-diagonal structures and degrades clustering performance, with C-noise exhibiting stronger destructive effects than N-noise.
arXiv Detail & Related papers (2022-07-13T08:37:42Z)
- Local Sample-weighted Multiple Kernel Clustering with Consensus Discriminative Graph [73.68184322526338]
Multiple kernel clustering (MKC) is committed to achieving optimal information fusion from a set of base kernels.
This paper proposes a novel local sample-weighted multiple kernel clustering model.
Experimental results demonstrate that our LSWMKC possesses better local manifold representation and outperforms existing kernel or graph-based clustering algorithms.
arXiv Detail & Related papers (2022-07-05T05:00:38Z)
- Kernel k-Means, By All Means: Algorithms and Strong Consistency [21.013169939337583]
Kernel $k$-means clustering is a powerful tool for unsupervised learning of non-linear data.
In this paper, we generalize results leveraging a general family of means to combat sub-optimal local solutions.
Our algorithm makes use of majorization-minimization (MM) to better solve this non-linear separation problem.
arXiv Detail & Related papers (2020-11-12T16:07:18Z)
- Kernel learning approaches for summarising and combining posterior similarity matrices [68.8204255655161]
We build upon the notion of the posterior similarity matrix (PSM) in order to suggest new approaches for summarising the output of MCMC algorithms for Bayesian clustering models.
A key contribution of our work is the observation that PSMs are positive semi-definite, and hence can be used to define probabilistically-motivated kernel matrices.
arXiv Detail & Related papers (2020-09-27T14:16:14Z)
- Fast Kernel k-means Clustering Using Incomplete Cholesky Factorization [11.631064399465089]
Kernel-based clustering algorithm can identify and capture the non-linear structure in datasets.
It can achieve better performance than linear clustering.
However, computing and storing the entire kernel matrix requires so much memory that kernel-based clustering struggles to handle large-scale datasets.
arXiv Detail & Related papers (2020-02-07T15:32:14Z)
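Both this entry and the sparse KSC paper above rely on incomplete Cholesky factorization to avoid materializing the full kernel matrix. The following is a generic sketch of pivoted incomplete Cholesky, not either paper's specific implementation: it produces a low-rank factor G with K ≈ G Gᵀ, so downstream kernel k-means only needs the (n, r) factor instead of the (n, n) kernel. The function name and tolerance are illustrative assumptions.

```python
import numpy as np

def incomplete_cholesky(K, tol=1e-6, max_rank=None):
    """Pivoted incomplete Cholesky of a PSD matrix K (illustrative sketch).

    Returns an (n, r) factor G with K approximately equal to G @ G.T,
    stopping when the largest residual diagonal entry drops below tol.
    """
    n = K.shape[0]
    max_rank = max_rank or n
    d = np.diag(K).astype(float).copy()  # residual diagonal
    G = np.zeros((n, max_rank))
    piv = []
    for j in range(max_rank):
        i = int(np.argmax(d))
        if d[i] <= tol:
            return G[:, :j]  # remaining residual is negligible
        piv.append(i)
        # Column j: Schur-complement update against previous columns.
        G[:, j] = (K[:, i] - G @ G[i, :]) / np.sqrt(d[i])
        d -= G[:, j] ** 2
        d[piv] = 0.0  # pivoted entries are fully explained
    return G
```

For a kernel of numerical rank r, the factor typically stops after r columns, reducing storage from O(n²) to O(nr).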
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.