Pseudo-supervised Deep Subspace Clustering
- URL: http://arxiv.org/abs/2104.03531v1
- Date: Thu, 8 Apr 2021 06:25:47 GMT
- Title: Pseudo-supervised Deep Subspace Clustering
- Authors: Juncheng Lv and Zhao Kang and Xiao Lu and Zenglin Xu
- Abstract summary: Auto-Encoder (AE)-based deep subspace clustering (DSC) methods have achieved impressive performance.
However, the self-reconstruction loss of an AE ignores rich, useful relational information.
It is also challenging to learn high-level similarity without feeding semantic labels.
- Score: 27.139553299302754
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Auto-Encoder (AE)-based deep subspace clustering (DSC) methods have achieved
impressive performance due to the powerful representation extracted using deep
neural networks while prioritizing categorical separability. However,
the self-reconstruction loss of an AE ignores rich, useful relational information and
may lead to indiscriminative representations, which inevitably degrades the
clustering performance. It is also challenging to learn high-level similarity
without feeding semantic labels. Another unsolved problem facing DSC is the
huge memory cost of the $n\times n$ similarity matrix incurred by the
self-expression layer between an encoder and decoder. To tackle these problems,
we use pairwise similarity to weight the reconstruction loss so as to capture local
structure information, where the similarity itself is learned by the self-expression
layer. Pseudo-graphs and pseudo-labels, which exploit the uncertain knowledge
acquired during network training, are further employed to supervise similarity
learning. Joint learning and iterative training make it possible to obtain
an overall optimal solution. Extensive experiments on benchmark datasets
demonstrate the superiority of our approach. By combining it with the $k$-nearest
neighbors algorithm, we further show that our method can address the
large-scale and out-of-sample problems.
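To make these ingredients concrete, here is a minimal sketch assuming a PyTorch implementation; the architecture, loss weights, and the exact weighting scheme are illustrative guesses, not the authors' configuration. It wires a self-expression layer between encoder and decoder, derives a pairwise similarity from the learned coefficients, and uses that similarity to weight a pairwise reconstruction loss; the pseudo-graph and pseudo-label supervision terms are omitted for brevity.
```python
# Hedged sketch of the similarity-weighted DSC idea (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfExpressiveDSC(nn.Module):
    def __init__(self, dim_in=784, dim_z=32, n_samples=1000):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim_in, 256), nn.ReLU(),
                                     nn.Linear(256, dim_z))
        self.decoder = nn.Sequential(nn.Linear(dim_z, 256), nn.ReLU(),
                                     nn.Linear(256, dim_in))
        # Self-expression layer: an n x n coefficient matrix C with Z ~ C Z.
        self.C = nn.Parameter(1e-4 * torch.randn(n_samples, n_samples))

    def forward(self, x):                                # x: (n, dim_in)
        z = self.encoder(x)
        c = self.C - torch.diag(torch.diag(self.C))      # forbid trivial C = I
        z_se = c @ z                                     # self-expressed codes
        x_hat = self.decoder(z_se)
        return z, z_se, x_hat, c

def pssc_loss(x, z, z_se, x_hat, c, w_se=1.0, w_reg=0.1):
    s = 0.5 * (c.abs() + c.abs().t())                    # similarity from C
    s = s / (s.sum(dim=1, keepdim=True) + 1e-8)          # row-normalize
    pair_err = torch.cdist(x, x_hat) ** 2                # (n, n) pairwise errors
    loss_rec = (s * pair_err).sum() / x.size(0)          # similarity-weighted recon
    loss_se = F.mse_loss(z_se, z)                        # ||Z - CZ||^2
    loss_reg = (c ** 2).sum()                            # keep C well-behaved
    return loss_rec + w_se * loss_se + w_reg * loss_reg
```
Note that the $n\times n$ parameter matrix `C` is exactly the memory bottleneck the abstract points out; for out-of-sample points, the abstract's $k$-nearest-neighbor extension suggests assigning a new sample by a kNN vote among training samples in the latent space.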
Related papers
- Unfolding ADMM for Enhanced Subspace Clustering of Hyperspectral Images [43.152314090830174]
We introduce an innovative clustering architecture for hyperspectral images (HSI) by unfolding an iterative solver based on the Alternating Direction Method of Multipliers (ADMM) for sparse subspace clustering.
Our approach captures well the structural characteristics of HSI data by employing the K nearest neighbors algorithm as part of a structure preservation module.
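As an illustration of what unfolding such a solver can look like, here is a minimal sketch assuming the sparse self-expression objective $\min_C \frac{1}{2}\|Z - ZC\|_F^2 + \lambda\|C\|_1$ with the split $A = C$ and learnable per-layer parameters; it is a generic unrolled ADMM, not the paper's exact architecture, and the KNN structure-preservation module is omitted.
```python
# Hedged sketch of unrolled ADMM for min_C 0.5*||Z - Z C||_F^2 + lam*||C||_1.
import torch
import torch.nn as nn

def soft_threshold(x, tau):
    # Proximal operator of tau * ||.||_1
    return torch.sign(x) * torch.clamp(x.abs() - tau, min=0.0)

class ADMMLayer(nn.Module):
    def __init__(self):
        super().__init__()
        self.rho = nn.Parameter(torch.tensor(1.0))   # learnable penalty weight
        self.lam = nn.Parameter(torch.tensor(0.1))   # learnable sparsity weight

    def forward(self, G, C, U):
        n = G.size(0)
        # A-update: (G + rho*I)^{-1} (G + rho*(C - U)), with G = Z^T Z
        A = torch.linalg.solve(G + self.rho * torch.eye(n, device=G.device),
                               G + self.rho * (C - U))
        C = soft_threshold(A + U, self.lam / self.rho)   # C-update (prox step)
        U = U + A - C                                    # dual ascent
        return C, U

class UnfoldedADMM(nn.Module):
    def __init__(self, n_iters=5):
        super().__init__()
        self.layers = nn.ModuleList([ADMMLayer() for _ in range(n_iters)])

    def forward(self, Z):
        # Z: (d, n), one spectral pixel per column; returns (n, n) codes C
        G = Z.t() @ Z
        C = torch.zeros(G.size(0), G.size(0), device=Z.device)
        U = torch.zeros_like(C)
        for layer in self.layers:
            C, U = layer(G, C, U)
        return C
```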
arXiv Detail & Related papers (2024-04-10T15:51:46Z)
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to collect in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z)
- Learning Deep Optimal Embeddings with Sinkhorn Divergences [33.496926214655666]
Deep Metric Learning algorithms aim to learn an efficient embedding space to preserve the similarity relationships among the input data.
These algorithms have achieved significant performance gains across a wide range of tasks, but fail to enforce comprehensive similarity constraints.
Here, we address the concern of learning a discriminative deep embedding space by designing a novel, yet effective Deep Class-wise Discrepancy Loss function.
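As background for the Sinkhorn-based objective, here is a minimal sketch of an entropy-regularized optimal-transport (Sinkhorn) cost between two sets of embeddings, the kind of building block such a class-wise discrepancy loss could rest on; the paper's exact loss may differ.
```python
# Hedged sketch: Sinkhorn (entropic OT) cost between two embedding sets.
import torch

def sinkhorn_cost(x, y, eps=0.1, n_iters=50):
    """Entropic OT cost between uniform measures on the rows of x and y."""
    cost = torch.cdist(x, y) ** 2                       # (n, m) squared distances
    K = torch.exp(-cost / eps)                          # Gibbs kernel
    a = torch.full((x.size(0),), 1.0 / x.size(0), device=x.device)
    b = torch.full((y.size(0),), 1.0 / y.size(0), device=y.device)
    u = torch.ones_like(a)
    for _ in range(n_iters):                            # Sinkhorn fixed point
        v = b / (K.t() @ u + 1e-9)
        u = a / (K @ v + 1e-9)
    P = u[:, None] * K * v[None, :]                     # transport plan
    return (P * cost).sum()
```
A class-wise loss might then, for example, minimize this cost within a class and maximize it across classes.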
arXiv Detail & Related papers (2022-09-14T07:54:16Z)
- Hyperspherical Consistency Regularization [45.00073340936437]
We explore the relationship between self-supervised learning and supervised learning, and study how self-supervised learning helps robust data-efficient deep learning.
We propose hyperspherical consistency regularization (HCR), a simple yet effective plug-and-play method, to regularize the classifier using feature-dependent information and thus avoid bias from labels.
arXiv Detail & Related papers (2022-06-02T02:41:13Z)
- CCLF: A Contrastive-Curiosity-Driven Learning Framework for Sample-Efficient Reinforcement Learning [56.20123080771364]
We develop a model-agnostic Contrastive-Curiosity-Driven Learning Framework (CCLF) for reinforcement learning.
CCLF fully exploits sample importance and improves learning efficiency in a self-supervised manner.
We evaluate this approach on the DeepMind Control Suite, Atari, and MiniGrid benchmarks.
arXiv Detail & Related papers (2022-05-02T14:42:05Z)
- Deep Equilibrium Assisted Block Sparse Coding of Inter-dependent Signals: Application to Hyperspectral Imaging [71.57324258813675]
A dataset of inter-dependent signals is defined as a matrix whose columns demonstrate strong dependencies.
A neural network is employed to act as a structure prior and reveal the underlying signal interdependencies.
Deep-unrolling- and deep-equilibrium-based algorithms are developed, forming highly interpretable and concise deep-learning-based architectures.
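For intuition about the deep-equilibrium part, here is a minimal sketch of a fixed-point layer $z^* = f(z^*, x)$ solved by naive iteration; the layer $f$ and its sizes are illustrative, and a real implementation would differentiate through the fixed point implicitly rather than through the loop.
```python
# Hedged sketch of a deep-equilibrium block: iterate z <- f(z, x) to a fixed point.
import torch
import torch.nn as nn

class DEQBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.lin_z = nn.Linear(dim, dim)
        self.lin_x = nn.Linear(dim, dim)

    def f(self, z, x):
        return torch.tanh(self.lin_z(z) + self.lin_x(x))

    def forward(self, x, n_iters=30, tol=1e-4):
        z = torch.zeros_like(x)
        for _ in range(n_iters):             # naive fixed-point iteration
            z_next = self.f(z, x)
            if (z_next - z).norm() < tol * (z.norm() + 1e-8):
                return z_next                # converged
            z = z_next
        return z
```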
arXiv Detail & Related papers (2022-03-29T21:00:39Z)
- Meta Clustering Learning for Large-scale Unsupervised Person Re-identification [124.54749810371986]
We propose a "small data for big task" paradigm dubbed Meta Clustering Learning (MCL)
MCL only pseudo-labels a subset of the entire unlabeled data via clustering to save computing for the first-phase training.
Our method significantly saves computational cost while achieving a comparable or even better performance compared to prior works.
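The subset pseudo-labeling step could look roughly like the sketch below, which uses scikit-learn's KMeans; the subset fraction, the clustering algorithm, and the function name are illustrative, and MCL's actual meta-clustering procedure is more involved.
```python
# Hedged sketch: pseudo-label only a random subset of unlabeled features.
import numpy as np
from sklearn.cluster import KMeans

def pseudo_label_subset(features, n_clusters, subset_frac=0.2, seed=0):
    """Cluster a random subset of features and return its pseudo-labels."""
    rng = np.random.default_rng(seed)
    n = features.shape[0]
    idx = rng.choice(n, size=int(subset_frac * n), replace=False)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    labels = km.fit_predict(features[idx])
    return idx, labels   # first-phase training uses (features[idx], labels)
```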
arXiv Detail & Related papers (2021-11-19T04:10:18Z)
- Robust Self-Ensembling Network for Hyperspectral Image Classification [38.84831094095329]
We propose a robust self-ensembling network (RSEN) to address this problem.
The proposed RSEN consists of two subnetworks: a base network and an ensemble network.
We show that the proposed algorithm can yield competitive performance compared with the state-of-the-art methods.
arXiv Detail & Related papers (2021-04-08T13:33:14Z)
- NN-EVCLUS: Neural Network-based Evidential Clustering [6.713564212269253]
We introduce a neural-network based evidential clustering algorithm, called NN-EVCLUS.
It learns a mapping from attribute vectors to mass functions, in such a way that more similar inputs are mapped to output mass functions with a lower degree of conflict.
The network is trained to minimize the discrepancy between dissimilarities and degrees of conflict for all or some object pairs.
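A minimal sketch of this training signal, assuming masses restricted to the $k$ singleton clusters plus the whole frame $\Omega$ (the general algorithm allows richer focal sets); the network and loss below are illustrative, not the authors' code.
```python
# Hedged sketch of the NN-EVCLUS signal: match conflict to dissimilarity.
import torch
import torch.nn as nn

class MassNet(nn.Module):
    def __init__(self, dim_in, k):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_in, 64), nn.ReLU(),
                                 nn.Linear(64, k + 1))   # k singletons + Omega

    def forward(self, x):
        return torch.softmax(self.net(x), dim=-1)        # masses sum to 1

def conflict(m1, m2):
    """Degree of conflict: mass jointly assigned to disjoint focal sets.
    Only distinct singletons conflict; Omega (last column) conflicts with nothing."""
    s1, s2 = m1[..., :-1], m2[..., :-1]
    total = s1.sum(-1) * s2.sum(-1)        # all singleton-singleton pairs
    same = (s1 * s2).sum(-1)               # pairs naming the same singleton
    return total - same

# Training would minimize, e.g., (conflict(m_i, m_j) - delta_ij)^2 over pairs,
# where delta_ij is a normalized dissimilarity between inputs i and j.
```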
arXiv Detail & Related papers (2020-09-27T09:05:41Z)
- Dual Adversarial Auto-Encoders for Clustering [152.84443014554745]
We propose Dual Adversarial Auto-encoder (Dual-AAE) for unsupervised clustering.
By performing variational inference on the objective function of Dual-AAE, we derive a new reconstruction loss which can be optimized by training a pair of Auto-encoders.
Experiments on four benchmarks show that Dual-AAE achieves superior performance over state-of-the-art clustering methods.
arXiv Detail & Related papers (2020-08-23T13:16:34Z)
- Learning to Hash with Graph Neural Networks for Recommender Systems [103.82479899868191]
Graph representation learning has attracted much attention in supporting high quality candidate search at scale.
Despite its effectiveness in learning embedding vectors for objects in the user-item interaction network, the computational costs to infer users' preferences in continuous embedding space are tremendous.
We propose a simple yet effective discrete representation learning framework to jointly learn continuous and discrete codes.
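One common way to learn continuous and discrete codes jointly is a straight-through sign estimator, sketched below under our own naming; this illustrates the generic technique, not the paper's specific GNN-based framework.
```python
# Hedged sketch: joint continuous/binary codes via a straight-through sign.
import torch
import torch.nn as nn

class StraightThroughSign(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return torch.sign(x)                 # discrete codes in {-1, +1}

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out                      # pass gradients through unchanged

class HashHead(nn.Module):
    def __init__(self, dim_in, n_bits=64):
        super().__init__()
        self.proj = nn.Linear(dim_in, n_bits)

    def forward(self, h):                    # h: continuous node embeddings
        z = torch.tanh(self.proj(h))         # continuous code
        b = StraightThroughSign.apply(z)     # binary code, trainable end-to-end
        return z, b
```
Retrieval then scores candidates by Hamming distance on the binary codes, which is far cheaper than dot products in the continuous embedding space.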
arXiv Detail & Related papers (2020-03-04T06:59:56Z)