Multi-level Feature Learning on Embedding Layer of Convolutional
Autoencoders and Deep Inverse Feature Learning for Image Clustering
- URL: http://arxiv.org/abs/2010.02343v1
- Date: Mon, 5 Oct 2020 21:24:10 GMT
- Title: Multi-level Feature Learning on Embedding Layer of Convolutional
Autoencoders and Deep Inverse Feature Learning for Image Clustering
- Authors: Behzad Ghazanfari, Fatemeh Afghah
- Abstract summary: We use agglomerative clustering as the multi-level feature learning that provides a hierarchical structure on the latent feature space.
Applying multi-level feature learning considerably improves the basic deep convolutional embedding clustering.
Deep inverse feature learning (deep IFL) on CAE-MLE is a novel approach that leads to state-of-the-art results.
- Score: 6.5358895450258325
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces Multi-Level feature learning alongside the Embedding
layer of Convolutional Autoencoder (CAE-MLE) as a novel approach in deep
clustering. We use agglomerative clustering as the multi-level feature learning
that provides a hierarchical structure on the latent feature space. It is shown
that applying multi-level feature learning considerably improves the basic deep
convolutional embedding clustering (DCEC). CAE-MLE optimizes the clustering
loss of agglomerative clustering jointly with the latent feature learning of the
CAE. Following previous works on inverse feature learning, we show that
representation learning of the error, as a general strategy, can be applied to
different deep clustering approaches and leads to
promising results. We develop deep inverse feature learning (deep IFL) on
CAE-MLE as a novel approach that leads to state-of-the-art results among
methods in the same category. The experimental results show that CAE-MLE
improves the results of the basic method, DCEC, by around 7%-14% on two
well-known datasets, MNIST and USPS. It is also shown that the proposed deep
IFL improves the primary results by about 9%-17%. Therefore, both proposed
approaches of CAE-MLE and deep IFL based on CAE-MLE can lead to notable
performance improvement in comparison to the majority of existing techniques.
While the proposed approaches are based on a basic convolutional autoencoder,
they lead to outstanding results even in comparison to variational autoencoders
or generative adversarial networks.
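Below is a minimal sketch of the CAE-MLE idea described in the abstract, assuming PyTorch and scikit-learn (the paper does not specify an implementation): a convolutional autoencoder is trained with its reconstruction loss while agglomerative clustering of the embedding-layer features provides hierarchical pseudo-labels that add a clustering term to the objective. The network sizes, the pull-to-cluster-mean clustering loss, and the weight lambda_clust are illustrative assumptions, not the authors' exact formulation.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.cluster import AgglomerativeClustering


class CAE(nn.Module):
    """Convolutional autoencoder with an explicit low-dimensional embedding layer."""

    def __init__(self, latent_dim=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, latent_dim),   # embedding layer (28x28 inputs assumed)
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 7 * 7), nn.ReLU(),
            nn.Unflatten(1, (64, 7, 7)),
            nn.ConvTranspose2d(64, 32, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 5, stride=2, padding=2, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)


def train_epoch(model, loader, optimizer, n_clusters=10, lambda_clust=0.1):
    """One unsupervised epoch: reconstruction loss plus a clustering term derived
    from agglomerative clustering of the embedding-layer features."""
    for x, _ in loader:                       # ground-truth labels are never used
        z, x_hat = model(x)

        # Multi-level feature learning step: hierarchical (agglomerative) clustering
        # of the current embeddings yields pseudo-labels on the latent space.
        pseudo = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(
            z.detach().cpu().numpy())
        pseudo = torch.as_tensor(pseudo, device=z.device)

        # Simple clustering term: pull each embedding toward its pseudo-cluster mean.
        uniq, inv = pseudo.unique(return_inverse=True)
        centers = torch.stack([z[inv == i].mean(0) for i in range(len(uniq))])
        clust_loss = F.mse_loss(z, centers[inv])

        recon_loss = F.mse_loss(x_hat, x)
        loss = recon_loss + lambda_clust * clust_loss

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```
In practice the agglomerative step would more likely be run on the full latent set (or a large subset) at intervals rather than on every batch; the per-batch form above only keeps the sketch short.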
Related papers
- Unfolding ADMM for Enhanced Subspace Clustering of Hyperspectral Images [43.152314090830174]
We introduce an innovative clustering architecture for hyperspectral images (HSI) by unfolding an iterative solver based on the Alternating Direction Method of Multipliers (ADMM) for sparse subspace clustering.
Our approach captures well the structural characteristics of HSI data by employing the K nearest neighbors algorithm as part of a structure preservation module.
arXiv Detail & Related papers (2024-04-10T15:51:46Z)
- FedAC: An Adaptive Clustered Federated Learning Framework for Heterogeneous Data [21.341280782748278]
Clustered federated learning (CFL) is proposed to mitigate the performance deterioration stemming from data heterogeneity in FL.
We propose an adaptive CFL framework, named FedAC, which efficiently integrates global knowledge into intra-cluster learning.
Experiments show that FedAC achieves superior empirical performance, increasing the test accuracy by around 1.82% and 12.67%.
arXiv Detail & Related papers (2024-03-25T06:43:28Z)
- Convolutional autoencoder-based multimodal one-class classification [80.52334952912808]
One-class classification refers to approaches of learning using data from a single class only.
We propose a deep learning one-class classification method suitable for multimodal data.
arXiv Detail & Related papers (2023-09-25T12:31:18Z)
- ConvBLS: An Effective and Efficient Incremental Convolutional Broad Learning System for Image Classification [63.49762079000726]
We propose a convolutional broad learning system (ConvBLS) based on the spherical K-means (SKM) algorithm and two-stage multi-scale (TSMS) feature fusion.
Our proposed ConvBLS method is unprecedentedly efficient and effective.
arXiv Detail & Related papers (2023-04-01T04:16:12Z)
- Deep Image Clustering with Contrastive Learning and Multi-scale Graph Convolutional Networks [58.868899595936476]
This paper presents a new deep clustering approach termed image clustering with contrastive learning and multi-scale graph convolutional networks (IcicleGCN).
Experiments on multiple image datasets demonstrate the superior clustering performance of IcicleGCN over the state-of-the-art.
arXiv Detail & Related papers (2022-07-14T19:16:56Z)
- DeepCluE: Enhanced Image Clustering via Multi-layer Ensembles in Deep Neural Networks [53.88811980967342]
This paper presents a Deep Clustering via Ensembles (DeepCluE) approach.
It bridges the gap between deep clustering and ensemble clustering by harnessing the power of multiple layers in deep neural networks.
Experimental results on six image datasets confirm the advantages of DeepCluE over the state-of-the-art deep clustering approaches.
arXiv Detail & Related papers (2022-06-01T09:51:38Z)
- Deep clustering with fusion autoencoder [0.0]
Deep clustering (DC) models capitalize on autoencoders to learn intrinsic features which facilitate the clustering process in consequence.
In this paper, a novel DC method is proposed to address this issue. Specifically, the generative adversarial network and the VAE are coalesced into a new autoencoder called the fusion autoencoder (FAE).
arXiv Detail & Related papers (2022-01-11T07:38:03Z)
- Deep Attention-guided Graph Clustering with Dual Self-supervision [49.040136530379094]
We propose a novel method, namely deep attention-guided graph clustering with dual self-supervision (DAGC).
We develop a dual self-supervision solution consisting of a soft self-supervision strategy with a triplet Kullback-Leibler divergence loss and a hard self-supervision strategy with a pseudo supervision loss (a generic soft-assignment KL clustering loss of this kind is sketched after this list).
Our method consistently outperforms state-of-the-art methods on six benchmark datasets.
arXiv Detail & Related papers (2021-11-10T06:53:03Z)
- Cluster Analysis with Deep Embeddings and Contrastive Learning [0.0]
This work proposes a novel framework for performing image clustering from deep embeddings.
Our approach jointly learns representations and predicts cluster centers in an end-to-end manner.
Our framework performs on par with widely accepted clustering methods and outperforms the state-of-the-art contrastive learning method on the CIFAR-10 dataset.
arXiv Detail & Related papers (2021-09-26T22:18:15Z)
- Attention-driven Graph Clustering Network [49.040136530379094]
We propose a novel deep clustering method named Attention-driven Graph Clustering Network (AGCN).
AGCN exploits a heterogeneous-wise fusion module to dynamically fuse the node attribute feature and the topological graph feature.
AGCN can jointly perform feature learning and cluster assignment in an unsupervised fashion.
arXiv Detail & Related papers (2021-08-12T02:30:38Z)
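As a complement to the entries above: the baseline DCEC that CAE-MLE improves upon, and the "soft self-supervision" appearing in several of the listed graph-clustering papers, build on a DEC-style soft-assignment clustering loss. Below is a minimal sketch of that generic loss, assuming PyTorch; it does not reproduce DAGC's exact triplet variant, and the hyperparameter alpha and function names are illustrative.
```python
import torch
import torch.nn.functional as F


def soft_assign(z, centers, alpha=1.0):
    """q_ij: Student's t similarity between embedding z_i and cluster center mu_j."""
    dist2 = torch.cdist(z, centers).pow(2)                 # (N, K) squared distances
    q = (1.0 + dist2 / alpha).pow(-(alpha + 1.0) / 2.0)
    return q / q.sum(dim=1, keepdim=True)


def target_distribution(q):
    """p_ij: sharpen q (square and renormalise) to form the self-supervised target."""
    weight = q.pow(2) / q.sum(dim=0, keepdim=True)
    return weight / weight.sum(dim=1, keepdim=True)


def clustering_loss(z, centers):
    """KL(p || q) between the sharpened target and the current soft assignments."""
    q = soft_assign(z, centers)
    p = target_distribution(q).detach()                    # target held fixed
    return F.kl_div(q.log(), p, reduction="batchmean")
```
In DEC/DCEC the cluster centers are learnable parameters initialised by k-means on the pretrained embeddings, and the target distribution is typically refreshed only every few hundred iterations rather than at every step.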