Local Sample-weighted Multiple Kernel Clustering with Consensus
Discriminative Graph
- URL: http://arxiv.org/abs/2207.02846v1
- Date: Tue, 5 Jul 2022 05:00:38 GMT
- Title: Local Sample-weighted Multiple Kernel Clustering with Consensus
Discriminative Graph
- Authors: Liang Li and Siwei Wang and Xinwang Liu and En Zhu and Li Shen and
Kenli Li and Keqin Li
- Abstract summary: Multiple kernel clustering (MKC) is committed to achieving optimal information fusion from a set of base kernels.
This paper proposes a novel local sample-weighted multiple kernel clustering model.
Experimental results demonstrate that our LSWMKC possesses better local manifold representation and outperforms existing kernel- and graph-based clustering algorithms.
- Score: 73.68184322526338
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multiple kernel clustering (MKC) is committed to achieving optimal
information fusion from a set of base kernels. Constructing precise and local
kernel matrices has proved vitally important in applications, since unreliable
similarity estimates between distant samples degrade clustering performance.
Although existing localized MKC algorithms perform better than their
globally-designed competitors, most of them localize the kernel matrix with a
coarse KNN mechanism that keeps the $\tau$-nearest neighbors of each sample.
However, this strategy unreasonably treats all neighbors as equally important,
which is impractical in applications. To alleviate such problems, this paper
proposes a novel local
sample-weighted multiple kernel clustering (LSWMKC) model. We first construct a
consensus discriminative affinity graph in kernel space, revealing the latent
local structures. Further, an optimal neighborhood kernel for the learned
affinity graph is output with naturally sparse property and clear block
diagonal structure. Moreover, LSWMKC implicitly optimizes adaptive weights on
the different neighbors of each sample. Experimental results demonstrate that
our LSWMKC possesses better local manifold representation and outperforms
existing kernel- and graph-based clustering algorithms. The source
code of LSWMKC can be publicly accessed from
https://github.com/liliangnudt/LSWMKC.
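The contrast the abstract draws, between uniform KNN localization and similarity-weighted neighbors, can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's optimization-based LSWMKC algorithm; the RBF base kernel, the function names, and the choice of `tau` are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))                             # 8 toy samples
K = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1))   # RBF base kernel

def knn_affinity(K, tau):
    """Uniform KNN localization: every kept neighbor gets weight 1/tau."""
    A = np.zeros_like(K)
    for i in range(len(K)):
        nn = np.argsort(-K[i])[1:tau + 1]   # skip self (the most similar entry)
        A[i, nn] = 1.0 / tau                # all neighbors treated as equal
    return (A + A.T) / 2                    # symmetrize

def weighted_affinity(K, tau):
    """Sample-weighted localization: neighbor weights follow similarity."""
    A = np.zeros_like(K)
    for i in range(len(K)):
        nn = np.argsort(-K[i])[1:tau + 1]
        A[i, nn] = K[i, nn] / K[i, nn].sum()   # closer neighbor => larger weight
    return (A + A.T) / 2

A_uniform = knn_affinity(K, 3)
A_weighted = weighted_affinity(K, 3)
```

Both affinities are row-stochastic before symmetrization; the weighted variant simply replaces the flat 1/tau weights with normalized kernel similarities, which is the ranking-importance idea the paper formalizes via optimization.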
Related papers
- MIK: Modified Isolation Kernel for Biological Sequence Visualization, Classification, and Clustering [3.9146761527401424]
This research proposes a novel approach called the Modified Isolation Kernel (MIK) as an alternative to the Gaussian kernel.
MIK uses adaptive density estimation to capture local structures more accurately and integrates robustness measures.
It exhibits improved preservation of the local and global structure and enables better visualization of clusters and subclusters in the embedded space.
arXiv Detail & Related papers (2024-10-21T06:57:09Z)
- Multiple Kernel Clustering via Local Regression Integration [4.856913393644719]
Multiple kernel methods give little consideration to the intrinsic manifold structure of multiple kernel data.
This paper first presents a clustering method via kernelized local regression (CKLR).
We then extend it to perform clustering via multiple kernel local regression (CMKLR).
arXiv Detail & Related papers (2024-10-20T06:26:29Z)
- Distributed Clustering based on Distributional Kernel [14.797889234277978]
This paper introduces a new framework for clustering in a distributed network, called Distributed Clustering based on Distributional Kernel (KDC).
KDC guarantees that the combined clustering outcome from all sites is equivalent to the clustering outcome of its centralized counterpart from the combined dataset from all sites.
The distribution-based clustering leads directly to significantly better clustering outcomes than existing methods of distributed clustering.
arXiv Detail & Related papers (2024-09-14T11:40:54Z)
- MOKD: Cross-domain Finetuning for Few-shot Classification via Maximizing Optimized Kernel Dependence [97.93517982908007]
In cross-domain few-shot classification, NCC aims to learn representations to construct a metric space where few-shot classification can be performed.
In this paper, we find that there exist high similarities between NCC-learned representations of two samples from different classes.
We propose a bi-level optimization framework, maximizing optimized kernel dependence (MOKD), to learn a set of class-specific representations that match the cluster structures indicated by labeled data.
arXiv Detail & Related papers (2024-05-29T05:59:52Z)
- Late Fusion Multi-view Clustering via Global and Local Alignment Maximization [61.89218392703043]
Multi-view clustering (MVC) optimally integrates complementary information from different views to improve clustering performance.
Most existing approaches directly fuse multiple pre-specified similarities to learn an optimal similarity matrix for clustering.
We propose late fusion MVC via alignment to address these issues.
arXiv Detail & Related papers (2022-08-02T01:49:31Z)
- Multiple Kernel Clustering with Dual Noise Minimization [56.009011016367744]
Multiple kernel clustering (MKC) aims to group data by integrating complementary information from base kernels.
In this paper, we rigorously define dual noise and propose a novel parameter-free MKC algorithm by minimizing them.
We observe that dual noise pollutes the block-diagonal structures and degrades clustering performance, with C-noise being more destructive than N-noise.
arXiv Detail & Related papers (2022-07-13T08:37:42Z)
- Kernel k-Means, By All Means: Algorithms and Strong Consistency [21.013169939337583]
Kernel $k$-means clustering is a powerful tool for unsupervised learning of non-linear data.
In this paper, we generalize results leveraging a general family of means to combat sub-optimal local solutions.
Our algorithm makes use of majorization-minimization (MM) to better solve this non-linear separation problem.
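For reference, the local-minima-prone baseline that such MM schemes aim to improve is plain Lloyd-style kernel $k$-means, which computes distances to cluster means entirely from the kernel matrix. The sketch below is that classic baseline, not the paper's MM algorithm; the function name and the optional `init` argument are assumptions added for illustration and reproducibility.

```python
import numpy as np

def kernel_kmeans(K, k, init=None, n_iter=50, seed=0):
    """Lloyd-style kernel k-means. The squared feature-space distance of
    sample i to the mean of cluster C uses only kernel entries:
        K_ii - (2/|C|) * sum_{j in C} K_ij + (1/|C|^2) * sum_{j,l in C} K_jl
    """
    n = K.shape[0]
    labels = (np.random.default_rng(seed).integers(0, k, n)
              if init is None else np.asarray(init).copy())
    for _ in range(n_iter):
        dist = np.full((n, k), np.inf)      # empty clusters stay at +inf
        for c in range(k):
            idx = np.flatnonzero(labels == c)
            if idx.size:
                dist[:, c] = (np.diag(K)
                              - 2.0 * K[:, idx].mean(axis=1)
                              + K[np.ix_(idx, idx)].mean())
        new = dist.argmin(axis=1)
        if np.array_equal(new, labels):     # converged to a fixed point
            break
        labels = new
    return labels
```

Because the update is a greedy reassignment, the outcome depends on initialization, which is exactly the sub-optimal-local-solution issue the MM formulation targets.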
arXiv Detail & Related papers (2020-11-12T16:07:18Z)
- Kernel learning approaches for summarising and combining posterior similarity matrices [68.8204255655161]
We build upon the notion of the posterior similarity matrix (PSM) in order to suggest new approaches for summarising the output of MCMC algorithms for Bayesian clustering models.
A key contribution of our work is the observation that PSMs are positive semi-definite, and hence can be used to define probabilistically-motivated kernel matrices.
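The PSM construction, and the positive semi-definiteness that makes it a valid kernel matrix, can be sketched as follows. This is a minimal illustration of the standard PSM definition (the fraction of MCMC draws in which two samples co-cluster); the function name and the synthetic draws are assumptions.

```python
import numpy as np

def posterior_similarity_matrix(draws):
    """draws: (T, n) array of cluster labels from T MCMC iterations.
    PSM[i, j] = fraction of draws in which samples i and j co-cluster."""
    draws = np.asarray(draws)
    psm = np.zeros((draws.shape[1],) * 2)
    for z in draws:
        # Each per-draw co-clustering indicator matrix equals Z Z^T for a
        # binary membership matrix Z, hence is PSD; so is their average.
        psm += (z[:, None] == z[None, :])
    return psm / len(draws)

rng = np.random.default_rng(1)
draws = rng.integers(0, 3, size=(50, 10))   # 50 draws, 10 samples, 3 clusters
psm = posterior_similarity_matrix(draws)
```

Since the PSM is an average of PSD indicator matrices, it can be plugged directly into kernel methods, which is the observation the paper builds on.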
arXiv Detail & Related papers (2020-09-27T14:16:14Z)
- SimpleMKKM: Simple Multiple Kernel K-means [49.500663154085586]
We propose a simple yet effective multiple kernel clustering algorithm, termed simple multiple kernel k-means (SimpleMKKM).
Our criterion is given by an intractable minimization-maximization problem in the kernel coefficient and clustering partition matrix.
We theoretically analyze the performance of SimpleMKKM in terms of its clustering generalization error.
arXiv Detail & Related papers (2020-05-11T10:06:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.