Local Sample-weighted Multiple Kernel Clustering with Consensus
Discriminative Graph
- URL: http://arxiv.org/abs/2207.02846v1
- Date: Tue, 5 Jul 2022 05:00:38 GMT
- Title: Local Sample-weighted Multiple Kernel Clustering with Consensus
Discriminative Graph
- Authors: Liang Li and Siwei Wang and Xinwang Liu and En Zhu and Li Shen and
Kenli Li and Keqin Li
- Abstract summary: Multiple kernel clustering (MKC) is committed to achieving optimal information fusion from a set of base kernels.
This paper proposes a novel local sample-weighted multiple kernel clustering model.
Experimental results demonstrate that our LSWMKC possesses better local manifold representation and outperforms existing kernel- or graph-based clustering algorithms.
- Score: 73.68184322526338
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multiple kernel clustering (MKC) is committed to achieving optimal
information fusion from a set of base kernels. Constructing precise and local
kernel matrices has proved to be of vital significance in applications, since
unreliable distant-distance similarity estimation degrades clustering
performance. Although existing localized MKC algorithms exhibit improved
performance compared to globally-designed competitors, most of them widely
adopt the KNN mechanism to localize the kernel matrix by accounting for the
$\tau$-nearest neighbors. However, such a coarse manner treats the ranking
importance of all neighbors as equal, which is impractical in applications. To
alleviate these problems, this paper proposes a novel local
sample-weighted multiple kernel clustering (LSWMKC) model. We first construct a
consensus discriminative affinity graph in kernel space, revealing the latent
local structures. Further, an optimal neighborhood kernel for the learned
affinity graph is output with naturally sparse property and clear block
diagonal structure. Moreover, LSWMKC implicitly optimizes adaptive weights on
different neighbors with corresponding samples. Experimental results
demonstrate that our LSWMKC possesses better local manifold representation and
outperforms existing kernel- or graph-based clustering algorithms. The source
code of LSWMKC can be publicly accessed from
https://github.com/liliangnudt/LSWMKC.
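To make the KNN localization mechanism discussed in the abstract concrete, here is a minimal NumPy sketch. It is an illustration only: `knn_localized_kernel` implements the coarse equal-weight $\tau$-nearest-neighbor masking the paper critiques, and `weighted_localized_kernel` is a hypothetical similarity-weighted variant — it is NOT the paper's LSWMKC optimization, which instead learns neighbor weights jointly with a consensus discriminative graph.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Pairwise squared Euclidean distances, then the RBF (Gaussian) kernel.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def knn_localized_kernel(K, tau):
    # Conventional KNN localization: keep each sample's tau nearest
    # neighbors (largest kernel values) with EQUAL weight, zero the rest.
    # This is the coarse mechanism the paper argues against.
    n = K.shape[0]
    mask = np.zeros_like(K)
    for i in range(n):
        nbrs = np.argsort(K[i])[-tau:]   # indices of the tau largest similarities
        mask[i, nbrs] = 1.0
    mask = np.maximum(mask, mask.T)      # symmetrize the neighborhood relation
    return K * mask

def weighted_localized_kernel(K, tau):
    # Illustrative alternative: weight each retained neighbor by its
    # normalized similarity, so closer neighbors count more than distant
    # ones, instead of the all-equal ranking used by plain KNN masking.
    Kl = knn_localized_kernel(K, tau)
    row_sums = Kl.sum(axis=1, keepdims=True)
    W = Kl / np.maximum(row_sums, 1e-12) # row-normalized affinity graph
    return 0.5 * (W + W.T)               # symmetrize
```

The localized kernel is naturally sparse (distant pairs are zeroed), which is the property the abstract highlights for the learned neighborhood kernel.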
Related papers
- MOKD: Cross-domain Finetuning for Few-shot Classification via Maximizing Optimized Kernel Dependence [97.93517982908007]
In cross-domain few-shot classification, NCC aims to learn representations to construct a metric space where few-shot classification can be performed.
In this paper, we find that there exist high similarities between NCC-learned representations of two samples from different classes.
We propose a bi-level optimization framework, maximizing optimized kernel dependence (MOKD), to learn a set of class-specific representations that match the cluster structures indicated by labeled data.
arXiv Detail & Related papers (2024-05-29T05:59:52Z)
- MeanCut: A Greedy-Optimized Graph Clustering via Path-based Similarity and Degree Descent Criterion [0.6906005491572401]
Spectral clustering is popular and attractive due to its remarkable performance, easy implementation, and strong adaptability.
We propose MeanCut as the objective function and greedily optimize it in degree descending order for a nondestructive graph partition.
The validity of our algorithm is demonstrated by testing on real-world benchmarks and an application to face recognition.
arXiv Detail & Related papers (2023-12-07T06:19:39Z)
- Late Fusion Multi-view Clustering via Global and Local Alignment Maximization [61.89218392703043]
Multi-view clustering (MVC) optimally integrates complementary information from different views to improve clustering performance.
Most existing approaches directly fuse multiple pre-specified similarities to learn an optimal similarity matrix for clustering.
We propose late fusion MVC via alignment to address these issues.
arXiv Detail & Related papers (2022-08-02T01:49:31Z)
- Multiple Kernel Clustering with Dual Noise Minimization [56.009011016367744]
Multiple kernel clustering (MKC) aims to group data by integrating complementary information from base kernels.
In this paper, we rigorously define dual noise and propose a novel parameter-free MKC algorithm by minimizing them.
We observe that dual noise pollutes the block-diagonal structures and degrades clustering performance, with C-noise being more destructive than N-noise.
arXiv Detail & Related papers (2022-07-13T08:37:42Z)
- Very Compact Clusters with Structural Regularization via Similarity and Connectivity [3.779514860341336]
We propose an end-to-end deep clustering algorithm, Very Compact Clusters (VCC), for general datasets.
Our proposed approach achieves better clustering performance over most of the state-of-the-art clustering methods.
arXiv Detail & Related papers (2021-06-09T23:22:03Z)
- Kernel k-Means, By All Means: Algorithms and Strong Consistency [21.013169939337583]
Kernel $k$-means clustering is a powerful tool for unsupervised learning of non-linear data.
In this paper, we generalize results leveraging a general family of means to combat sub-optimal local solutions.
Our algorithm makes use of majorization-minimization (MM) to better solve this non-linear separation problem.
arXiv Detail & Related papers (2020-11-12T16:07:18Z)
- The Impact of Isolation Kernel on Agglomerative Hierarchical Clustering Algorithms [12.363083467305787]
Agglomerative hierarchical clustering (AHC) is one of the popular clustering approaches.
Existing AHC methods, which are based on a distance measure, have difficulty in identifying adjacent clusters with varied densities.
We show that the use of a data-dependent kernel (instead of distance or existing kernel) provides an effective means to address it.
arXiv Detail & Related papers (2020-10-12T06:18:38Z)
- Kernel learning approaches for summarising and combining posterior similarity matrices [68.8204255655161]
We build upon the notion of the posterior similarity matrix (PSM) in order to suggest new approaches for summarising the output of MCMC algorithms for Bayesian clustering models.
A key contribution of our work is the observation that PSMs are positive semi-definite, and hence can be used to define probabilistically-motivated kernel matrices.
arXiv Detail & Related papers (2020-09-27T14:16:14Z)
- SimpleMKKM: Simple Multiple Kernel K-means [49.500663154085586]
We propose a simple yet effective multiple kernel clustering algorithm, termed simple multiple kernel k-means (SimpleMKKM).
Our criterion is given by an intractable minimization-maximization problem in the kernel coefficient and clustering partition matrix.
We theoretically analyze the performance of SimpleMKKM in terms of its clustering generalization error.
arXiv Detail & Related papers (2020-05-11T10:06:40Z)
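Several of the papers above operate on a simplex-weighted combination of base kernels. The sketch below shows just those shared ingredients — a normalized kernel combination and the spectral-relaxation value of the kernel k-means objective (the sum of the top-$k$ eigenvalues of the combined kernel). It does NOT reproduce SimpleMKKM's min-max criterion or its reduced-gradient solver; both function names are our own illustrative choices.

```python
import numpy as np

def combine_kernels(kernels, weights):
    # Weighted sum of base kernel matrices; weights are projected
    # onto the simplex by simple normalization.
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * K for w, K in zip(weights, kernels))

def kernel_kmeans_objective(K, k):
    # Spectral relaxation of kernel k-means:
    #   max_H trace(H^T K H)  s.t.  H^T H = I
    # is attained by the top-k eigenvectors of K, so the relaxed
    # objective value is the sum of the k largest eigenvalues.
    eigvals = np.linalg.eigvalsh(K)  # ascending order for symmetric K
    return float(eigvals[-k:].sum())
```

For a positive semi-definite combined kernel, this relaxed objective is bounded above by trace(K), since the remaining eigenvalues are non-negative.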
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.