An unsupervised deep learning framework via integrated optimization of
representation learning and GMM-based modeling
- URL: http://arxiv.org/abs/2009.05234v1
- Date: Fri, 11 Sep 2020 04:57:03 GMT
- Title: An unsupervised deep learning framework via integrated optimization of
representation learning and GMM-based modeling
- Authors: Jinghua Wang and Jianmin Jiang
- Abstract summary: This paper introduces a new principle of joint learning on both deep representations and GMM-based deep modeling.
In comparison with the existing work in similar areas, our objective function has two learning targets, which are created to be jointly optimized.
The compactness of clusters is significantly enhanced by reducing the intra-cluster distances, and the separability is improved by increasing the inter-cluster distances.
- Score: 31.334196673143257
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While supervised deep learning has achieved great success in a range of
applications, relatively little work has studied the discovery of knowledge
from unlabeled data. In this paper, we propose an unsupervised deep learning
framework to provide a potential solution for the problem that existing deep
learning techniques require large labeled data sets for completing the training
process. Our proposed framework introduces a new principle of joint learning on both deep
representations and GMM (Gaussian Mixture Model)-based deep modeling, and thus
an integrated objective function is proposed to facilitate the principle. In
comparison with the existing work in similar areas, our objective function has
two learning targets, which are created to be jointly optimized to achieve the
best possible unsupervised learning and knowledge discovery from unlabeled data
sets. While maximizing the first target enables the GMM to achieve the best
possible modeling of the data representations and each Gaussian component
corresponds to a compact cluster, maximizing the second target will enhance the
separability of the Gaussian components and hence the inter-cluster distances.
As a result, the compactness of clusters is significantly enhanced by reducing
the intra-cluster distances, and the separability is improved by increasing the
inter-cluster distances. Extensive experimental results show that the proposed
method can improve the clustering performance compared with benchmark methods.
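The two learning targets described above can be illustrated with a toy objective: a compactness term built from intra-cluster distances and a separability term built from inter-cluster distances. The sketch below is a simplified, hypothetical stand-in that uses plain centroids and squared Euclidean distances rather than the paper's actual GMM components and integrated objective function:

```python
import numpy as np

def clustering_objective(X, labels, lam=1.0):
    """Toy two-target clustering objective: reward inter-cluster
    separation while penalizing intra-cluster spread (higher is better).
    This is an illustrative simplification, not the paper's objective."""
    labels = np.asarray(labels)
    ks = np.unique(labels)
    centroids = np.stack([X[labels == k].mean(axis=0) for k in ks])
    # Compactness term: mean squared distance of points to their own centroid.
    intra = np.mean([np.mean(np.sum((X[labels == k] - c) ** 2, axis=1))
                     for k, c in zip(ks, centroids)])
    # Separability term: mean squared distance between distinct centroids.
    diff = centroids[:, None, :] - centroids[None, :, :]
    pair = np.sum(diff ** 2, axis=-1)
    inter = pair[np.triu_indices(len(ks), k=1)].mean()
    return inter - lam * intra
```

A correct partition of well-separated data yields a higher score than a shuffled one, mirroring how the joint objective favors compact, well-separated clusters.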
Related papers
- Deep Boosting Learning: A Brand-new Cooperative Approach for Image-Text Matching [53.05954114863596]
We propose a brand-new Deep Boosting Learning (DBL) algorithm for image-text matching.
An anchor branch is first trained to provide insights into the data properties.
A target branch is concurrently tasked with more adaptive margin constraints to further enlarge the relative distance between matched and unmatched samples.
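The margin constraint described above can be pictured with a standard hinge-style ranking loss; the sketch below is generic and illustrative, since the summary does not specify DBL's actual boosted or adaptive margin formulation:

```python
import numpy as np

def margin_loss(pos_sim, neg_sim, margin=0.2):
    """Hinge ranking loss: require matched-pair similarity to exceed
    unmatched similarity by at least `margin`. The margin value and
    similarity scale here are illustrative assumptions."""
    return np.maximum(0.0, margin - (pos_sim - neg_sim)).mean()
```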
arXiv Detail & Related papers (2024-04-28T08:44:28Z) - A Weighted K-Center Algorithm for Data Subset Selection [70.49696246526199]
Subset selection is a fundamental problem that can play a key role in identifying smaller portions of the training data.
We develop a novel factor 3-approximation algorithm to compute subsets based on the weighted sum of both k-center and uncertainty sampling objective functions.
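A weighted combination of k-center coverage and uncertainty can be sketched with a greedy farthest-first selection; note this is a plain heuristic sketch under assumed weighting (`alpha`), not the paper's factor-3 approximation algorithm:

```python
import numpy as np

def greedy_subset(X, uncertainty, k, alpha=0.5):
    """Greedy subset selection mixing a k-center coverage term
    (distance to nearest chosen center) with an uncertainty score.
    The seeding rule and linear weighting are illustrative."""
    chosen = [int(np.argmax(uncertainty))]          # seed: most uncertain point
    # Distance of every point to its nearest chosen center so far.
    dmin = np.linalg.norm(X - X[chosen[0]], axis=1)
    while len(chosen) < k:
        score = alpha * dmin + (1 - alpha) * uncertainty
        score[chosen] = -np.inf                     # never re-pick a center
        nxt = int(np.argmax(score))
        chosen.append(nxt)
        dmin = np.minimum(dmin, np.linalg.norm(X - X[nxt], axis=1))
    return chosen
```

With `alpha=1.0` this reduces to classic farthest-first k-center selection, which spreads the chosen subset across the data.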
arXiv Detail & Related papers (2023-12-17T04:41:07Z) - Transferable Deep Clustering Model [14.073783373395196]
We propose a novel transferable deep clustering model that can automatically adapt the cluster centroids according to the distribution of data samples.
Our approach introduces a novel attention-based module that can adapt the centroids by measuring their relationship with samples.
Experimental results on both synthetic and real-world datasets demonstrate the effectiveness and efficiency of our proposed transfer learning framework.
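The idea of adapting centroids by measuring their relationship with samples can be sketched as a soft, attention-style update, where each centroid attends over samples by proximity and moves toward the attended mean; the kernel choice and temperature below are illustrative assumptions, as the summary does not detail the paper's module:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adapt_centroids(Z, centroids, tau=1.0):
    """Attention-style centroid adaptation: weight samples by softmax of
    negative squared distance, then replace each centroid with the
    attended mean of the samples (illustrative sketch)."""
    d2 = np.sum((centroids[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    attn = softmax(-d2 / tau, axis=1)   # (k, n) attention over samples
    return attn @ Z
```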
arXiv Detail & Related papers (2023-10-07T23:35:17Z) - GraphLearner: Graph Node Clustering with Fully Learnable Augmentation [76.63963385662426]
Contrastive deep graph clustering (CDGC) leverages the power of contrastive learning to group nodes into different clusters.
We propose a Graph Node Clustering with Fully Learnable Augmentation, termed GraphLearner.
It introduces learnable augmentors to generate high-quality and task-specific augmented samples for CDGC.
arXiv Detail & Related papers (2022-12-07T10:19:39Z) - Rethinking Clustering-Based Pseudo-Labeling for Unsupervised
Meta-Learning [146.11600461034746]
CACTUs, a method for unsupervised meta-learning, is a clustering-based approach with pseudo-labeling.
This approach is model-agnostic and can be combined with supervised algorithms to learn from unlabeled data.
We prove that the core reason for this is the lack of a clustering-friendly property in the embedding space.
arXiv Detail & Related papers (2022-09-27T19:04:36Z) - Semantics-Depth-Symbiosis: Deeply Coupled Semi-Supervised Learning of
Semantics and Depth [83.94528876742096]
We tackle the MTL problem of two dense tasks, i.e., semantic segmentation and depth estimation, and present a novel attention module called the Cross-Channel Attention Module (CCAM).
In a true symbiotic spirit, we then formulate a novel data augmentation for the semantic segmentation task using predicted depth called AffineMix, and a simple depth augmentation using predicted semantics called ColorAug.
Finally, we validate the performance gain of the proposed method on the Cityscapes dataset, which helps us achieve state-of-the-art results for a semi-supervised joint model based on depth and semantic
arXiv Detail & Related papers (2022-06-21T17:40:55Z) - Deep Attention-guided Graph Clustering with Dual Self-supervision [49.040136530379094]
We propose a novel method, namely deep attention-guided graph clustering with dual self-supervision (DAGC)
We develop a dual self-supervision solution consisting of a soft self-supervision strategy with a triplet Kullback-Leibler divergence loss and a hard self-supervision strategy with a pseudo supervision loss.
Our method consistently outperforms state-of-the-art methods on six benchmark datasets.
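A soft self-supervision loss built on a Kullback-Leibler divergence is often formulated, as in DEC-style clustering, with a Student-t soft assignment and a sharpened target distribution; the sketch below shows that common pattern as a hypothetical stand-in, since the summary does not give DAGC's exact triplet formulation:

```python
import numpy as np

def soft_assign(Z, centroids, alpha=1.0):
    """Student-t soft cluster assignment q (DEC-style)."""
    d2 = np.sum((Z[:, None, :] - centroids[None, :, :]) ** 2, axis=-1)
    q = (1.0 + d2 / alpha) ** (-(alpha + 1) / 2)
    return q / q.sum(axis=1, keepdims=True)

def target_distribution(q):
    """Sharpened target p that serves as the self-supervision signal."""
    w = q ** 2 / q.sum(axis=0)
    return w / w.sum(axis=1, keepdims=True)

def kl_loss(p, q):
    """Soft self-supervision loss KL(p || q), summed over samples."""
    return np.sum(p * np.log(p / q))
```

Minimizing KL(p || q) pulls the soft assignments toward their own sharpened version, tightening cluster confidence without labels.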
arXiv Detail & Related papers (2021-11-10T06:53:03Z) - Deep Conditional Gaussian Mixture Model for Constrained Clustering [7.070883800886882]
Constrained clustering can leverage prior information on a growing amount of only partially labeled data.
We propose a novel framework for constrained clustering that is intuitive, interpretable, and can be trained efficiently in the framework of gradient variational inference.
arXiv Detail & Related papers (2021-06-11T13:38:09Z) - Learning the Precise Feature for Cluster Assignment [39.320210567860485]
We propose a framework which integrates representation learning and clustering into a single pipeline for the first time.
The proposed framework exploits the powerful ability of recently developed generative models for learning intrinsic features.
Experimental results show that the performance of the proposed method is superior, or at least comparable to, the state-of-the-art methods.
arXiv Detail & Related papers (2021-06-11T04:08:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.