Neural Manifold Clustering and Embedding
- URL: http://arxiv.org/abs/2201.10000v1
- Date: Mon, 24 Jan 2022 23:13:37 GMT
- Title: Neural Manifold Clustering and Embedding
- Authors: Zengyi Li, Yubei Chen, Yann LeCun, Friedrich T. Sommer
- Abstract summary: Non-linear subspace clustering or manifold clustering aims to cluster data points based on manifold structures and learn to parameterize each manifold as a linear subspace in a feature space.
Deep neural networks have the potential to achieve this goal under highly non-linear settings given their large capacity and flexibility.
We argue that achieving manifold clustering with neural networks requires two essential ingredients: a domain-specific constraint that ensures the identification of the manifolds, and a learning algorithm for embedding each manifold to a linear subspace in the feature space.
- Score: 13.08270828061924
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Given a union of non-linear manifolds, non-linear subspace clustering or
manifold clustering aims to cluster data points based on manifold structures
and also learn to parameterize each manifold as a linear subspace in a feature
space. Deep neural networks have the potential to achieve this goal under
highly non-linear settings given their large capacity and flexibility. We argue
that achieving manifold clustering with neural networks requires two essential
ingredients: a domain-specific constraint that ensures the identification of
the manifolds, and a learning algorithm for embedding each manifold to a linear
subspace in the feature space. This work shows that many constraints can be
implemented by data augmentation. For subspace feature learning, the Maximum
Coding Rate Reduction (MCR$^2$) objective can be used. Putting them together yields
{\em Neural Manifold Clustering and Embedding} (NMCE), a novel method for
general purpose manifold clustering, which significantly outperforms
autoencoder-based deep subspace clustering. Further, on more challenging
natural image datasets, NMCE can also outperform other algorithms specifically
designed for clustering. Qualitatively, we demonstrate that NMCE learns a
meaningful and interpretable feature space. As the formulation of NMCE is
closely related to several important self-supervised learning (SSL) methods, we
believe this work can help build a deeper understanding of SSL representation
learning.
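To make the two ingredients concrete, here is a minimal PyTorch sketch, not the paper's implementation: an MCR$^2$-style coding-rate-reduction loss for subspace feature learning, plus a two-view training step in which data augmentation supplies the manifold-identification constraint. The `encoder`, `cluster_head`, and `augment` callables, the `eps` quantization parameter, and the mean-squared consistency term are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def mcr2_loss(Z, Pi, eps=0.5):
    """Negative coding rate reduction -(R(Z) - R_c(Z, Pi)).

    Z:  (n, d) features, rows assumed normalized to the unit sphere.
    Pi: (n, k) soft cluster assignments (rows sum to 1).
    """
    n, d = Z.shape
    I = torch.eye(d, device=Z.device)
    # Coding rate of the whole feature set:
    # R = 1/2 logdet(I + d/(n eps^2) Z^T Z)
    R = 0.5 * torch.logdet(I + (d / (n * eps**2)) * Z.T @ Z)
    # Sum of coding rates of the (soft) clusters.
    Rc = Z.new_zeros(())
    for j in range(Pi.shape[1]):
        w = Pi[:, j]                        # membership weights for cluster j
        nj = w.sum().clamp_min(1e-8)        # effective cluster size tr(Pi_j)
        Sigma = (Z * w.unsqueeze(1)).T @ Z  # Z^T diag(w) Z
        Rc = Rc + (nj / (2 * n)) * torch.logdet(I + (d / (nj * eps**2)) * Sigma)
    return -(R - Rc)  # minimize negative rate reduction

def nmce_step(encoder, cluster_head, augment, x, eps=0.5):
    """One illustrative training step: two augmented views of the same batch
    should agree in cluster assignment (the augmentation constraint), while
    MCR^2 pushes the features toward a union of linear subspaces."""
    z1 = F.normalize(encoder(augment(x)), dim=1)
    z2 = F.normalize(encoder(augment(x)), dim=1)
    p1 = F.softmax(cluster_head(z1), dim=1)
    p2 = F.softmax(cluster_head(z2), dim=1)
    Z = torch.cat([z1, z2], dim=0)
    Pi = torch.cat([p1, p2], dim=0)
    consistency = F.mse_loss(p1, p2)        # hypothetical agreement term
    return mcr2_loss(Z, Pi, eps) + consistency
```

Minimizing this loss expands the coding rate of the whole feature set while compressing each soft cluster, which drives the embedded manifolds toward distinct linear subspaces; forcing the two augmented views to agree is one way the domain-specific constraint can be realized.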
Related papers
- OMH: Structured Sparsity via Optimally Matched Hierarchy for Unsupervised Semantic Segmentation [69.37484603556307]
Unsupervised Semantic Segmentation (USS) involves segmenting images without relying on predefined labels.
We introduce a novel approach called Optimally Matched Hierarchy (OMH) to address these issues simultaneously.
Our OMH yields better unsupervised segmentation performance compared to existing USS methods.
arXiv Detail & Related papers (2024-03-11T09:46:41Z)
- Deep Structure and Attention Aware Subspace Clustering [29.967881186297582]
We propose a novel Deep Structure and Attention aware Subspace Clustering (DSASC)
We use a vision transformer to extract features, and the extracted features are divided into two parts: structure features and content features.
Our method significantly outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-12-25T01:19:47Z)
- Multi-view Subspace Adaptive Learning via Autoencoder and Attention [3.8574404853067215]
We propose a new method, Multi-view Subspace Adaptive Learning based on Attention and Autoencoder (MSALAA).
This method combines a deep autoencoder and a method for aligning the self-representations of various views.
We empirically observe significant improvement over existing baseline methods on six real-life datasets.
arXiv Detail & Related papers (2022-01-01T11:31:52Z)
- Deep Attention-guided Graph Clustering with Dual Self-supervision [49.040136530379094]
We propose a novel method, namely deep attention-guided graph clustering with dual self-supervision (DAGC).
We develop a dual self-supervision solution consisting of a soft self-supervision strategy with a triplet Kullback-Leibler divergence loss and a hard self-supervision strategy with a pseudo supervision loss.
Our method consistently outperforms state-of-the-art methods on six benchmark datasets.
arXiv Detail & Related papers (2021-11-10T06:53:03Z)
- Attention-driven Graph Clustering Network [49.040136530379094]
We propose a novel deep clustering method named Attention-driven Graph Clustering Network (AGCN).
AGCN exploits a heterogeneity-wise fusion module to dynamically fuse the node attribute feature and the topological graph feature.
AGCN can jointly perform feature learning and cluster assignment in an unsupervised fashion.
arXiv Detail & Related papers (2021-08-12T02:30:38Z)
- Clustered Federated Learning via Generalized Total Variation Minimization [83.26141667853057]
We study optimization methods to train local (or personalized) models for local datasets with a decentralized network structure.
Our main conceptual contribution is to formulate federated learning as generalized total variation (GTV) minimization.
Our main algorithmic contribution is a fully decentralized federated learning algorithm.
arXiv Detail & Related papers (2021-05-26T18:07:19Z)
- Joint Optimization of an Autoencoder for Clustering and Embedding [22.16059261437617]
We present an alternative where the autoencoder and the clustering are learned simultaneously.
The clustering is performed by a simple neural network, referred to as the clustering module, which can be integrated into a deep autoencoder, resulting in a deep clustering model.
arXiv Detail & Related papers (2020-12-07T14:38:10Z)
- Dual-constrained Deep Semi-Supervised Coupled Factorization Network with Enriched Prior [80.5637175255349]
We propose a new enriched-prior-based Dual-constrained Deep Semi-Supervised Coupled Factorization Network, called DS2CF-Net.
To extract hidden deep features, DS2CF-Net is modeled as a deep-structure and geometrical-structure-constrained neural network.
Our network can obtain state-of-the-art performance for representation learning and clustering.
arXiv Detail & Related papers (2020-09-08T13:10:21Z)
- AutoEmbedder: A semi-supervised DNN embedding system for clustering [0.0]
This paper introduces a novel embedding system named AutoEmbedder, which downsamples higher-dimensional data to clusterable embedding points.
The training process is semi-supervised and uses a Siamese network architecture to compute a pairwise constraint loss in the feature learning phase.
arXiv Detail & Related papers (2020-07-11T19:00:45Z)
- Spatial and spectral deep attention fusion for multi-channel speech separation using deep embedding features [60.20150317299749]
Multi-channel deep clustering (MDC) has achieved good performance in speech separation.
We propose a deep attention fusion method to dynamically control the weights of the spectral and spatial features and combine them deeply.
Experimental results show that the proposed method outperforms the MDC baseline and even surpasses the ideal binary mask (IBM).
arXiv Detail & Related papers (2020-02-05T03:49:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.