Multi-Level Representation Learning for Deep Subspace Clustering
- URL: http://arxiv.org/abs/2001.08533v1
- Date: Sun, 19 Jan 2020 23:29:50 GMT
- Title: Multi-Level Representation Learning for Deep Subspace Clustering
- Authors: Mohsen Kheirandishfard, Fariba Zohrizadeh, Farhad Kamangar
- Abstract summary: This paper proposes a novel deep subspace clustering approach which uses convolutional autoencoders to transform input images into new representations lying on a union of linear subspaces.
Experiments on four real-world datasets demonstrate that our approach exhibits superior performance compared to the state-of-the-art methods on most of the subspace clustering problems.
- Score: 10.506584969668792
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a novel deep subspace clustering approach which uses
convolutional autoencoders to transform input images into new representations
lying on a union of linear subspaces. The first contribution of our work is to
insert multiple fully-connected linear layers between the encoder layers and
their corresponding decoder layers to promote learning more favorable
representations for subspace clustering. These connection layers facilitate the
feature learning procedure by combining low-level and high-level information
for generating multiple sets of self-expressive and informative representations
at different levels of the encoder. Moreover, we introduce a novel loss
minimization problem which leverages an initial clustering of the samples to
effectively fuse the multi-level representations and recover the underlying
subspaces more accurately. The loss function is then minimized through an
iterative scheme which alternately updates the network parameters and
produces new clusterings of the samples. Experiments on four real-world
datasets demonstrate that our approach exhibits superior performance compared
to the state-of-the-art methods on most of the subspace clustering problems.
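The abstract rests on the self-expressive property of subspace clustering: each sample should be reconstructable as a linear combination of the other samples lying in the same subspace. The following is a minimal NumPy sketch of that idea using a ridge-regularized closed form, not the paper's actual architecture or loss; the regularization weight `lam` and the post-hoc diagonal zeroing are illustrative assumptions.

```python
import numpy as np

def self_expressive_coefficients(Z, lam=0.1):
    """Approximately solve min_C ||Z - C Z||_F^2 + lam ||C||_F^2.

    Z: (n, d) array of learned representations, one row per sample.
    Returns C: (n, n) self-expression matrix with a zeroed diagonal.
    Note: the zero-diagonal constraint is applied after the closed-form
    solve, which is only an approximation of the constrained problem.
    """
    n = Z.shape[0]
    G = Z @ Z.T                        # (n, n) Gram matrix of the samples
    C = G @ np.linalg.inv(G + lam * np.eye(n))
    np.fill_diagonal(C, 0.0)           # a sample should not represent itself
    return C

def affinity(C):
    """Symmetric nonnegative affinity, usable by spectral clustering."""
    A = np.abs(C)
    return 0.5 * (A + A.T)
```

In a full pipeline, spectral clustering on `affinity(C)` would produce the sample clustering; the paper's alternating scheme would then feed that clustering back into the loss to refine the multi-level representations.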
Related papers
- Distributional Reduction: Unifying Dimensionality Reduction and Clustering with Gromov-Wasserstein [56.62376364594194]
Unsupervised learning aims to capture the underlying structure of potentially large and high-dimensional datasets.
In this work, we revisit these approaches under the lens of optimal transport and exhibit relationships with the Gromov-Wasserstein problem.
This unveils a new general framework, called distributional reduction, that recovers DR and clustering as special cases and allows addressing them jointly within a single optimization problem.
arXiv Detail & Related papers (2024-02-03T19:00:19Z)
- Learning the Right Layers: a Data-Driven Layer-Aggregation Strategy for Semi-Supervised Learning on Multilayer Graphs [2.752817022620644]
Clustering (or community detection) on multilayer graphs poses several additional complications.
One of the major challenges is to establish the extent to which each layer contributes to the cluster assignment.
We propose a parameter-free Laplacian-regularized model that learns an optimal nonlinear combination of the different layers from the available input labels.
arXiv Detail & Related papers (2023-05-31T19:50:11Z)
- WLD-Reg: A Data-dependent Within-layer Diversity Regularizer [98.78384185493624]
Neural networks are composed of multiple layers arranged in a hierarchical structure jointly trained with a gradient-based optimization.
We propose to complement this traditional 'between-layer' feedback with additional 'within-layer' feedback to encourage the diversity of the activations within the same layer.
We present an extensive empirical study confirming that the proposed approach enhances the performance of several state-of-the-art neural network models in multiple tasks.
arXiv Detail & Related papers (2023-01-03T20:57:22Z)
- Subspace-Contrastive Multi-View Clustering [0.0]
We propose a novel Subspace-Contrastive Multi-View Clustering (SCMC) approach.
We employ view-specific auto-encoders to map the original multi-view data into compact features that capture its nonlinear structures.
To demonstrate the effectiveness of the proposed model, we conduct a large number of comparative experiments on eight challenging datasets.
arXiv Detail & Related papers (2022-10-13T07:19:37Z)
- DeepCluE: Enhanced Image Clustering via Multi-layer Ensembles in Deep Neural Networks [53.88811980967342]
This paper presents a Deep Clustering via Ensembles (DeepCluE) approach.
It bridges the gap between deep clustering and ensemble clustering by harnessing the power of multiple layers in deep neural networks.
Experimental results on six image datasets confirm the advantages of DeepCluE over the state-of-the-art deep clustering approaches.
arXiv Detail & Related papers (2022-06-01T09:51:38Z)
- Deep clustering with fusion autoencoder [0.0]
Deep clustering (DC) models capitalize on autoencoders to learn intrinsic features which, in turn, facilitate the clustering process.
In this paper, a novel DC method is proposed to address this issue. Specifically, the generative adversarial network and the variational autoencoder (VAE) are coalesced into a new autoencoder called the fusion autoencoder (FAE).
arXiv Detail & Related papers (2022-01-11T07:38:03Z)
- A Novel Hierarchical Light Field Coding Scheme Based on Hybrid Stacked Multiplicative Layers and Fourier Disparity Layers for Glasses-Free 3D Displays [0.6091702876917279]
We present a novel hierarchical coding scheme for light fields based on transmittance patterns of low-rank multiplicative layers and Fourier disparity layers.
The proposed scheme identifies multiplicative layers of light field view subsets optimized using a convolutional neural network for different scanning orders.
arXiv Detail & Related papers (2021-08-27T17:09:29Z)
- A Hierarchical Coding Scheme for Glasses-free 3D Displays Based on Scalable Hybrid Layered Representation of Real-World Light Fields [0.6091702876917279]
The scheme learns stacked multiplicative layers from subsets of light field views determined from different scanning orders.
The spatial correlation in layer patterns is exploited with varying low ranks in a factorization derived from singular value decomposition on a Krylov subspace.
Encoding with HEVC efficiently removes intra-view and inter-view correlation in the low-rank approximated layers.
arXiv Detail & Related papers (2021-04-19T15:09:21Z)
- Dual-constrained Deep Semi-Supervised Coupled Factorization Network with Enriched Prior [80.5637175255349]
We propose a new enriched-prior-based Dual-constrained Deep Semi-Supervised Coupled Factorization Network, called DS2CF-Net.
To extract hidden deep features, DS2CF-Net is modeled as a deep-structure and geometrical-structure-constrained neural network.
Our network can obtain state-of-the-art performance for representation learning and clustering.
arXiv Detail & Related papers (2020-09-08T13:10:21Z)
- Unsupervised Multi-view Clustering by Squeezing Hybrid Knowledge from Cross View and Each View [68.88732535086338]
This paper proposes a new multi-view clustering method, low-rank subspace multi-view clustering based on adaptive graph regularization.
Experimental results for five widely used multi-view benchmarks show that our proposed algorithm surpasses other state-of-the-art methods by a clear margin.
arXiv Detail & Related papers (2020-08-23T08:25:06Z) - Rethinking and Improving Natural Language Generation with Layer-Wise
Multi-View Decoding [59.48857453699463]
In sequence-to-sequence learning, the decoder relies on the attention mechanism to efficiently extract information from the encoder.
Recent work has proposed to use representations from different encoder layers for diversified levels of information.
We propose layer-wise multi-view decoding: for each decoder layer, the representations from the last encoder layer, which serve as a global view, are supplemented with those from other encoder layers to form a stereoscopic view of the source sequences.
arXiv Detail & Related papers (2020-05-16T20:00:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.