CaEGCN: Cross-Attention Fusion based Enhanced Graph Convolutional
Network for Clustering
- URL: http://arxiv.org/abs/2101.06883v1
- Date: Mon, 18 Jan 2021 05:21:59 GMT
- Title: CaEGCN: Cross-Attention Fusion based Enhanced Graph Convolutional
Network for Clustering
- Authors: Guangyu Huo, Yong Zhang, Junbin Gao, Boyue Wang, Yongli Hu, and Baocai
Yin
- Abstract summary: We propose a cross-attention based deep clustering framework, named Cross-Attention Fusion based Enhanced Graph Convolutional Network (CaEGCN).
CaEGCN contains four main modules: a cross-attention fusion module, a Content Auto-encoder, a Graph Convolutional Auto-encoder, and a self-supervised model.
Experimental results on different types of datasets prove the superiority and robustness of the proposed CaEGCN.
- Score: 51.62959830761789
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the powerful learning ability of deep convolutional networks,
deep clustering methods can extract the most discriminative information from
individual data and produce more satisfactory clustering results. However,
existing deep clustering methods usually ignore the relationships between the
data. Fortunately, the graph convolutional network (GCN) can handle such
relationships, opening up a new research direction for deep clustering. In this
paper, we propose a cross-attention based deep clustering framework, named
Cross-Attention Fusion based Enhanced Graph Convolutional Network (CaEGCN),
which contains four main modules: a Content Auto-encoder (CAE) module that
models the individual data, a Graph Convolutional Auto-encoder (GAE) module
that models the relationships between the data, a cross-attention fusion
module that innovatively couples the two branches in a layer-by-layer manner,
and a self-supervised module that highlights the discriminative information
for the clustering task. While the cross-attention fusion module fuses the two
kinds of heterogeneous representations, the CAE module supplements content
information for the GAE module, which avoids the over-smoothing problem of
GCN. In the GAE module, two novel loss functions are proposed that reconstruct
the content and the relationships between the data, respectively. Finally, the
self-supervised module constrains the distributions of the middle-layer
representations of CAE and GAE to be consistent. Experimental results on
different types of datasets demonstrate the superiority and robustness of the
proposed CaEGCN.
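The abstract describes the architecture only at a high level. Below is a minimal PyTorch sketch of how such a design could be wired together: a content auto-encoder (CAE) branch and a graph convolutional auto-encoder (GAE) branch fused layer by layer by a cross-attention block, with content and adjacency reconstruction losses plus a KL term keeping the two branches' soft cluster assignments consistent. The layer sizes, the single-head attention formulation, and the helper names (`GraphConv`, `CrossAttentionFusion`, `soft_assign`, `caegcn_loss`) are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def soft_assign(z, centers, alpha=1.0):
    """Student's t soft cluster assignment (DEC-style), used for self-supervision."""
    d2 = torch.cdist(z, centers) ** 2
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(dim=1, keepdim=True)


class GraphConv(nn.Module):
    """Plain GCN layer: H' = A_norm @ H @ W (activation applied by the caller)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, h, adj_norm):
        return adj_norm @ self.weight(h)


class CrossAttentionFusion(nn.Module):
    """Single-head cross-attention: the GAE representation queries the CAE one,
    so content information is injected into the graph branch at each layer."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, z_gae, z_cae):
        attn = torch.softmax(self.q(z_gae) @ self.k(z_cae).t() * self.scale, dim=-1)
        return z_gae + attn @ self.v(z_cae)  # residual fusion


class CaEGCNSketch(nn.Module):
    """Two-branch sketch: the CAE models individual samples, the GAE models the
    relationships between samples, and a fusion block couples them layer by layer."""
    def __init__(self, in_dim, hid_dim, lat_dim, n_clusters):
        super().__init__()
        self.cae_enc = nn.ModuleList([nn.Linear(in_dim, hid_dim),
                                      nn.Linear(hid_dim, lat_dim)])
        self.cae_dec = nn.Sequential(nn.Linear(lat_dim, hid_dim), nn.ReLU(),
                                     nn.Linear(hid_dim, in_dim))
        self.gae_enc = nn.ModuleList([GraphConv(in_dim, hid_dim),
                                      GraphConv(hid_dim, lat_dim)])
        self.gae_dec = nn.Linear(lat_dim, in_dim)
        self.fusion = nn.ModuleList([CrossAttentionFusion(hid_dim),
                                     CrossAttentionFusion(lat_dim)])
        self.centers = nn.Parameter(torch.randn(n_clusters, lat_dim))

    def forward(self, x, adj_norm):
        h_cae, h_gae = x, x
        for cae_layer, gae_layer, fuse in zip(self.cae_enc, self.gae_enc, self.fusion):
            h_cae = F.relu(cae_layer(h_cae))
            h_gae = F.relu(gae_layer(h_gae, adj_norm))
            h_gae = fuse(h_gae, h_cae)  # cross-attention fusion of the two branches
        x_rec_cae = self.cae_dec(h_cae)              # CAE: content reconstruction
        x_rec_gae = self.gae_dec(h_gae)              # GAE: content reconstruction
        adj_rec = torch.sigmoid(h_gae @ h_gae.t())   # GAE: relationship reconstruction
        q_cae = soft_assign(h_cae, self.centers)
        q_gae = soft_assign(h_gae, self.centers)
        return x_rec_cae, x_rec_gae, adj_rec, q_cae, q_gae


def caegcn_loss(x, adj, outputs):
    """Reconstruction terms plus a KL term that keeps the middle-layer (latent)
    cluster assignments of the two branches consistent."""
    x_rec_cae, x_rec_gae, adj_rec, q_cae, q_gae = outputs
    rec = (F.mse_loss(x_rec_cae, x) + F.mse_loss(x_rec_gae, x)
           + F.binary_cross_entropy(adj_rec, adj))
    consistency = F.kl_div(q_cae.log(), q_gae, reduction="batchmean")
    return rec + consistency
```

In this sketch, `adj_norm` would be the normalized adjacency matrix fed to the GCN layers and `adj` the adjacency matrix (values in [0, 1]) used as the relationship-reconstruction target; the clustering-oriented target distribution used in the paper's final self-supervised training stage is omitted for brevity.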
Related papers
- Dual Information Enhanced Multi-view Attributed Graph Clustering [11.624319530337038]
A novel Dual Information enhanced multi-view Attributed Graph Clustering (DIAGC) method is proposed in this paper.
The proposed method introduces the Specific Information Reconstruction (SIR) module to disentangle the explorations of the consensus and specific information from multiple views.
The Mutual Information Maximization (MIM) module maximizes the agreement between the latent high-level representation and low-level ones, and enables the high-level representation to satisfy the desired clustering structure.
arXiv Detail & Related papers (2022-11-28T01:18:04Z)
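The DIAGC entry above mentions a Mutual Information Maximization (MIM) module that maximizes the agreement between high-level and low-level representations. A common, generic way to realize such an objective is an InfoNCE-style contrastive bound, sketched below; the function name and the `temperature` value are illustrative assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F


def infonce_agreement(z_high, z_low, temperature=0.5):
    """Generic InfoNCE-style lower bound on the mutual information between a
    high-level representation and a low-level one (rows are paired samples).
    This is an illustrative stand-in for an MIM objective, not DIAGC's exact loss."""
    z_high = F.normalize(z_high, dim=1)
    z_low = F.normalize(z_low, dim=1)
    logits = z_high @ z_low.t() / temperature        # (N, N) similarity matrix
    targets = torch.arange(z_high.size(0), device=z_high.device)
    # Matching rows (i, i) are positives; all other pairs act as negatives.
    return F.cross_entropy(logits, targets)


# Usage: maximizing agreement = minimizing this loss on paired embeddings.
# z_high, z_low = encoder_high(x), encoder_low(x)   # hypothetical encoders
# loss = infonce_agreement(z_high, z_low)
```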
- Deep Image Clustering with Contrastive Learning and Multi-scale Graph Convolutional Networks [58.868899595936476]
This paper presents a new deep clustering approach termed image clustering with contrastive learning and multi-scale graph convolutional networks (IcicleGCN).
Experiments on multiple image datasets demonstrate the superior clustering performance of IcicleGCN over the state-of-the-art.
arXiv Detail & Related papers (2022-07-14T19:16:56Z)
- Deep Attention-guided Graph Clustering with Dual Self-supervision [49.040136530379094]
We propose a novel method, namely deep attention-guided graph clustering with dual self-supervision (DAGC).
We develop a dual self-supervision solution consisting of a soft self-supervision strategy with a triplet Kullback-Leibler divergence loss and a hard self-supervision strategy with a pseudo supervision loss.
Our method consistently outperforms state-of-the-art methods on six benchmark datasets.
arXiv Detail & Related papers (2021-11-10T06:53:03Z)
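The DAGC entry above pairs a soft self-supervision strategy (a triplet Kullback-Leibler divergence loss) with a hard one (a pseudo supervision loss). The snippet below is a hedged, generic sketch of how such soft/hard terms are commonly instantiated: a DEC-style sharpened target distribution matched via a single KL term, plus cross-entropy on confidence-thresholded pseudo-labels. The single-term KL (rather than DAGC's triplet form) and the `threshold` value are assumptions.

```python
import torch
import torch.nn.functional as F


def target_distribution(q):
    """DEC-style sharpened target distribution derived from soft assignments q."""
    weight = q ** 2 / q.sum(dim=0)
    return weight / weight.sum(dim=1, keepdim=True)


def soft_self_supervision(q):
    """Soft strategy: pull the soft assignments q toward the sharpened target p
    with a KL divergence (a single-term stand-in for a triplet KL loss)."""
    p = target_distribution(q).detach()
    return F.kl_div(q.log(), p, reduction="batchmean")


def hard_self_supervision(q, threshold=0.8):
    """Hard strategy: treat high-confidence assignments as pseudo-labels and
    apply a cross-entropy (pseudo supervision) loss on those samples only."""
    conf, pseudo = q.max(dim=1)
    mask = conf > threshold
    if mask.sum() == 0:
        return q.new_zeros(())  # no confident samples yet
    return F.nll_loss(q[mask].log(), pseudo[mask])
```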
- Attention-driven Graph Clustering Network [49.040136530379094]
We propose a novel deep clustering method named Attention-driven Graph Clustering Network (AGCN).
AGCN exploits a heterogeneity-wise fusion module to dynamically fuse the node attribute feature and the topological graph feature.
AGCN can jointly perform feature learning and cluster assignment in an unsupervised fashion.
arXiv Detail & Related papers (2021-08-12T02:30:38Z)
- A Framework for Joint Unsupervised Learning of Cluster-Aware Embedding for Heterogeneous Networks [6.900303913555705]
Heterogeneous Information Network (HIN) embedding refers to the low-dimensional projections of the HIN nodes that preserve the HIN structure and semantics.
We propose a framework for the joint learning of cluster embeddings as well as cluster-aware HIN embeddings.
arXiv Detail & Related papers (2021-08-09T11:36:36Z)
- Deep Fusion Clustering Network [38.540761683389135]
We propose a Deep Fusion Clustering Network (DFCN) for deep clustering.
In our network, an interdependency learning-based Structure and Attribute Information Fusion (SAIF) module is proposed to explicitly merge the representations learned by an autoencoder and a graph autoencoder.
Experiments on six benchmark datasets have demonstrated that the proposed DFCN consistently outperforms the state-of-the-art deep clustering methods.
arXiv Detail & Related papers (2020-12-15T09:37:59Z)
- CoADNet: Collaborative Aggregation-and-Distribution Networks for Co-Salient Object Detection [91.91911418421086]
Co-Salient Object Detection (CoSOD) aims at discovering salient objects that repeatedly appear in a given query group containing two or more relevant images.
One challenging issue is how to effectively capture co-saliency cues by modeling and exploiting inter-image relationships.
We present an end-to-end collaborative aggregation-and-distribution network (CoADNet) to capture both salient and repetitive visual patterns from multiple images.
arXiv Detail & Related papers (2020-11-10T04:28:11Z)
- Learning to Cluster Faces via Confidence and Connectivity Estimation [136.5291151775236]
We propose a fully learnable clustering framework without requiring a large number of overlapped subgraphs.
Our method significantly improves clustering accuracy, and thus the performance of the recognition models trained on top, while being an order of magnitude more efficient than existing supervised methods.
arXiv Detail & Related papers (2020-04-01T13:39:37Z)