Asymmetric double-winged multi-view clustering network for exploring
Diverse and Consistent Information
- URL: http://arxiv.org/abs/2309.00474v1
- Date: Fri, 1 Sep 2023 14:13:22 GMT
- Title: Asymmetric double-winged multi-view clustering network for exploring
Diverse and Consistent Information
- Authors: Qun Zheng, Xihong Yang, Siwei Wang, Xinru An, Qi Liu
- Abstract summary: In unsupervised scenarios, deep contrastive multi-view clustering (DCMVC) is becoming a hot research topic.
We propose a novel multi-view clustering network termed CodingNet to explore the diverse and consistent information simultaneously.
Our framework's efficacy is validated through extensive experiments on six widely used benchmark datasets.
- Score: 28.300395619444796
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In unsupervised scenarios, deep contrastive multi-view clustering (DCMVC), which aims to mine the potential relationships between different views, is becoming a hot research topic. Most existing DCMVC algorithms focus on exploring the consistency information in deep semantic features while ignoring the diverse information carried by shallow features. To fill this gap, we propose a novel multi-view clustering network, termed CodingNet, to explore diverse and consistent information simultaneously. Specifically, instead of utilizing a conventional auto-encoder, we design an asymmetric network structure to extract shallow and deep features separately. Then, by aligning the similarity matrix of the shallow features to the zero matrix, we ensure the diversity of the shallow features, thus offering a better description of the multi-view data. Moreover, we propose a dual contrastive mechanism that maintains consistency of the deep features at both the view-feature and pseudo-label levels. Extensive experiments on six widely used benchmark datasets validate the efficacy of our framework, which outperforms most state-of-the-art multi-view clustering algorithms.
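The two ideas in the abstract, driving the shallow-feature similarity matrix toward the zero matrix for diversity and enforcing cross-view consistency on deep features, can be written as simple loss terms. The PyTorch sketch below is our own minimal illustration under assumed names and formulations (off-diagonal Frobenius penalty, InfoNCE-style view contrast, pseudo-label contrast omitted); it is not the authors' CodingNet implementation.

```python
# Minimal sketch (not the authors' code): the diversity and consistency
# objectives described in the abstract, written as standalone loss functions.
import torch
import torch.nn.functional as F

def shallow_diversity_loss(z_shallow: torch.Tensor) -> torch.Tensor:
    """Drive the cosine-similarity matrix of shallow features toward the zero
    matrix (off-diagonal entries), encouraging diverse shallow representations."""
    z = F.normalize(z_shallow, dim=1)
    sim = z @ z.t()                                   # n x n similarity matrix
    off_diag = sim - torch.diag(torch.diagonal(sim))  # ignore the trivial diagonal
    return off_diag.pow(2).mean()

def deep_consistency_loss(z1: torch.Tensor, z2: torch.Tensor,
                          temperature: float = 0.5) -> torch.Tensor:
    """InfoNCE-style view-feature consistency: sample i in view 1 should be
    closest to sample i in view 2 (and vice versa)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature
    targets = torch.arange(z1.size(0), device=z1.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# toy usage with random stand-ins for the asymmetric network's outputs
shallow = torch.randn(32, 64)
deep_v1, deep_v2 = torch.randn(32, 128), torch.randn(32, 128)
loss = shallow_diversity_loss(shallow) + deep_consistency_loss(deep_v1, deep_v2)
```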
Related papers
- Multi-view Aggregation Network for Dichotomous Image Segmentation [76.75904424539543]
Dichotomous Image Segmentation (DIS) has recently emerged, targeting high-precision object segmentation from high-resolution natural images.
Existing methods rely on tedious multiple encoder-decoder streams and stages to gradually complete global localization and local refinement.
Inspired by this, we model DIS as a multi-view object perception problem and provide a parsimonious multi-view aggregation network (MVANet).
Experiments on the popular DIS-5K dataset show that our MVANet significantly outperforms state-of-the-art methods in both accuracy and speed.
arXiv Detail & Related papers (2024-04-11T03:00:00Z)
- Scalable Multi-view Clustering via Explicit Kernel Features Maps [20.610589722626074]
A growing awareness of multi-view learning is a consequence of the increasing prevalence of multiple views in real-world applications.
An efficient optimization strategy is proposed, leveraging kernel feature maps to reduce the computational burden while maintaining good clustering performance.
We conduct extensive experiments on real-world benchmark networks of various sizes in order to evaluate the performance of our algorithm against state-of-the-art multi-view subspace clustering methods and attributed-network multi-view approaches.
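As a generic illustration of the explicit kernel feature map idea above (a stand-in under our own assumptions, not the paper's optimization strategy), each view's kernel can be approximated by a finite-dimensional feature map, the mapped views fused, and k-means run on the result, keeping the cost linear in the number of samples rather than forming full n x n kernel matrices:

```python
# Generic illustration: approximate each view's kernel with an explicit
# feature map (Nystroem here), fuse the maps, and cluster the fused features.
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
views = [rng.normal(size=(1000, 20)), rng.normal(size=(1000, 50))]  # toy 2-view data

maps = []
for X in views:
    feat_map = Nystroem(kernel="rbf", n_components=64, random_state=0)
    maps.append(feat_map.fit_transform(X))   # n x 64 map instead of an n x n kernel
joint = np.hstack(maps)                      # simple early fusion of the mapped views

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(joint)
```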
arXiv Detail & Related papers (2024-02-07T12:35:31Z)
- One for all: A novel Dual-space Co-training baseline for Large-scale Multi-View Clustering [42.92751228313385]
We propose a novel multi-view clustering model, named Dual-space Co-training Large-scale Multi-view Clustering (DSCMC).
The main objective of our approach is to enhance the clustering performance by leveraging co-training in two distinct spaces.
Our algorithm has approximately linear computational complexity, which enables its application to large-scale datasets.
arXiv Detail & Related papers (2024-01-28T16:30:13Z)
- DealMVC: Dual Contrastive Calibration for Multi-view Clustering [78.54355167448614]
We propose a novel Dual contrastive calibration network for Multi-View Clustering (DealMVC).
We first design a fusion mechanism to obtain a global cross-view feature. Then, a global contrastive calibration loss is proposed by aligning the view feature similarity graph and the high-confidence pseudo-label graph.
During the training procedure, the interacted cross-view feature is jointly optimized at both local and global levels.
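The graph-alignment step just described can be sketched as follows; this is our own minimal reading of the idea (the function name, the 0.9 confidence threshold, and the binary cross-entropy alignment are illustrative assumptions), not DealMVC's released code.

```python
# Minimal sketch: align a view-feature similarity graph with a
# high-confidence pseudo-label graph.
import torch
import torch.nn.functional as F

def calibration_loss(fused: torch.Tensor, probs: torch.Tensor,
                     conf_threshold: float = 0.9) -> torch.Tensor:
    """fused: n x d global cross-view features; probs: n x k soft cluster assignments."""
    z = F.normalize(fused, dim=1)
    sim_graph = (z @ z.t() + 1) / 2                  # cosine similarity rescaled to [0, 1]

    conf, labels = probs.max(dim=1)
    same_label = (labels[:, None] == labels[None, :]).float()
    confident = ((conf[:, None] > conf_threshold) &
                 (conf[None, :] > conf_threshold)).float()
    pseudo_graph = same_label * confident            # high-confidence pseudo-label graph

    # pull the similarity graph toward the pseudo-label graph on trusted pairs only
    return F.binary_cross_entropy(sim_graph.clamp(1e-6, 1 - 1e-6),
                                  pseudo_graph, weight=confident)

# toy usage
loss = calibration_loss(torch.randn(16, 32), torch.softmax(torch.randn(16, 5), dim=1))
```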
arXiv Detail & Related papers (2023-08-17T14:14:28Z)
- Subspace-Contrastive Multi-View Clustering [0.0]
We propose a novel Subspace-Contrastive Multi-View Clustering (SCMC) approach.
We employ view-specific auto-encoders to map the original multi-view data into compact features that capture its nonlinear structures.
To demonstrate the effectiveness of the proposed model, we conduct extensive comparative experiments on eight challenging datasets.
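A tiny sketch of the view-specific auto-encoder ingredient follows; layer sizes and names are arbitrary illustrative choices, not SCMC's architecture.

```python
# Illustrative view-specific auto-encoders: one encoder/decoder pair per view,
# mapping each view to a compact latent space via a reconstruction objective.
import torch
import torch.nn as nn

class ViewAutoEncoder(nn.Module):
    def __init__(self, in_dim: int, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                     nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, in_dim))

    def forward(self, x):
        z = self.encoder(x)            # compact per-view representation
        return z, self.decoder(z)      # latent code and reconstruction

views = [torch.randn(128, 30), torch.randn(128, 59)]        # toy two-view batch
models = [ViewAutoEncoder(v.shape[1]) for v in views]
recon_loss = sum(((m(v)[1] - v) ** 2).mean() for m, v in zip(models, views))
```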
arXiv Detail & Related papers (2022-10-13T07:19:37Z)
- Deep Multi-View Semi-Supervised Clustering with Sample Pairwise Constraints [10.226754903113164]
We propose a novel Deep Multi-view Semi-supervised Clustering (DMSC) method, which jointly optimizes three kinds of losses during network finetuning.
We demonstrate that our proposed approach performs better than the state-of-the-art multi-view and single-view competitors.
arXiv Detail & Related papers (2022-06-10T08:51:56Z)
- Attentive Multi-View Deep Subspace Clustering Net [4.3386084277869505]
We propose a novel Attentive Multi-View Deep Subspace Net (AMVDSN).
Our proposed method seeks to find a joint latent representation that explicitly considers both consensus and view-specific information.
The experimental results on seven real-world data sets have demonstrated the effectiveness of our proposed algorithm against some state-of-the-art subspace learning approaches.
arXiv Detail & Related papers (2021-12-23T12:57:26Z)
- Specificity-preserving RGB-D Saliency Detection [103.3722116992476]
We propose a specificity-preserving network (SP-Net) for RGB-D saliency detection.
Two modality-specific networks and a shared learning network are adopted to generate individual and shared saliency maps.
Experiments on six benchmark datasets demonstrate that our SP-Net outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2021-08-18T14:14:22Z)
- Unsupervised Multi-view Clustering by Squeezing Hybrid Knowledge from Cross View and Each View [68.88732535086338]
This paper proposes a new multi-view clustering method, low-rank subspace multi-view clustering based on adaptive graph regularization.
Experimental results for five widely used multi-view benchmarks show that our proposed algorithm surpasses other state-of-the-art methods by a clear margin.
arXiv Detail & Related papers (2020-08-23T08:25:06Z)
- Recursive Multi-model Complementary Deep Fusion for Robust Salient Object Detection via Parallel Sub Networks [62.26677215668959]
Fully convolutional networks have shown outstanding performance in the salient object detection (SOD) field.
This paper proposes a "wider" network architecture which consists of parallel sub networks with totally different network architectures.
Experiments on several famous benchmarks clearly demonstrate the superior performance, good generalization, and powerful learning ability of the proposed wider framework.
arXiv Detail & Related papers (2020-08-07T10:39:11Z)
- Multi-view Deep Subspace Clustering Networks [64.29227045376359]
Multi-view subspace clustering aims to discover the inherent structure of data by fusing multiple views of complementary information.
We propose the Multi-view Deep Subspace Clustering Networks (MvDSCN), which learns a multi-view self-representation matrix in an end-to-end manner.
The MvDSCN unifies multiple backbones to boost clustering performance and avoid the need for model selection.
arXiv Detail & Related papers (2019-08-06T06:44:43Z)
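The self-representation idea behind MvDSCN above, where a single coefficient matrix reconstructs every view's latent features, can be sketched roughly as below; the optimizer, regularizer, and variable names are our own assumptions rather than the paper's network.

```python
# Rough sketch of a shared self-representation matrix C with Z_v ≈ C @ Z_v for
# every view; in MvDSCN this matrix is learned end-to-end inside the network.
import torch

n, dims = 64, [32, 48]                                # samples, latent dims per view
latents = [torch.randn(n, d) for d in dims]           # stand-ins for per-view encoder outputs
C = torch.zeros(n, n, requires_grad=True)             # shared self-representation matrix
opt = torch.optim.Adam([C], lr=1e-2)

for _ in range(200):
    opt.zero_grad()
    C_off = C - torch.diag(torch.diagonal(C))         # keep the diagonal at zero
    loss = sum(((Z - C_off @ Z) ** 2).mean() for Z in latents)  # self-expression term
    loss = loss + 1e-3 * (C_off ** 2).sum()           # simple coefficient regularizer
    loss.backward()
    opt.step()

affinity = (C_off.abs() + C_off.abs().t()).detach()   # symmetric affinity for spectral clustering
```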
This list is automatically generated from the titles and abstracts of the papers on this site.