Dissimilarity Mixture Autoencoder for Deep Clustering
- URL: http://arxiv.org/abs/2006.08177v4
- Date: Thu, 15 Jul 2021 16:33:02 GMT
- Title: Dissimilarity Mixture Autoencoder for Deep Clustering
- Authors: Juan S. Lara, Fabio A. González
- Abstract summary: The dissimilarity mixture autoencoder (DMAE) is a neural network model for feature-based clustering.
DMAE can be integrated with deep learning architectures into end-to-end models.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The dissimilarity mixture autoencoder (DMAE) is a neural network model for
feature-based clustering that incorporates a flexible dissimilarity function
and can be integrated into any kind of deep learning architecture. It
internally represents a dissimilarity mixture model (DMM) that extends
classical methods like K-Means, Gaussian mixture models, or Bregman clustering
to any convex and differentiable dissimilarity function through the
reinterpretation of probabilities as neural network representations. DMAE can
be integrated with deep learning architectures into end-to-end models, allowing
the simultaneous estimation of the clustering and neural network's parameters.
Experimental evaluation was performed on image and text clustering benchmark
datasets, showing that DMAE is competitive in terms of unsupervised
classification accuracy and normalized mutual information. The source code with
the implementation of DMAE is publicly available at:
https://github.com/juselara1/dmae
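As a concrete illustration of the mechanism described in the abstract, below is a minimal NumPy sketch of the core idea: cluster assignments computed as a softmax over negative dissimilarities, followed by a decoder-style reconstruction from the cluster parameters. This is not the authors' implementation (see the repository above); the squared Euclidean dissimilarity and the sharpness parameter `alpha` are illustrative choices.

```python
import numpy as np

def soft_assign(X, centers, alpha=10.0):
    """Soft cluster assignments as a softmax over negative dissimilarities.

    Any convex, differentiable dissimilarity could replace the squared
    Euclidean distance used here; `alpha` controls assignment sharpness.
    """
    # d[i, j] = dissimilarity between point x_i and cluster parameter theta_j
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    logits = -alpha * d
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    s = np.exp(logits)
    return s / s.sum(axis=1, keepdims=True)

def reconstruct(assignments, centers):
    """Decoder-style output: a convex combination of cluster parameters."""
    return assignments @ centers

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-3, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
centers = rng.normal(size=(2, 2))
S = soft_assign(X, centers)       # (100, 2) mixture-posterior-like codes
X_hat = reconstruct(S, centers)   # (100, 2) reconstructions
```

Swapping the dissimilarity inside `soft_assign` for another convex, differentiable function recovers the other members of the family the abstract alludes to (K-Means-like, Gaussian, or Bregman variants).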
Related papers
- Classifying Overlapping Gaussian Mixtures in High Dimensions: From Optimal Classifiers to Neural Nets [1.8434042562191815]
We derive expressions for the Bayes optimal decision boundaries in binary classification of high-dimensional overlapping Gaussian mixture model (GMM) data.
We empirically demonstrate, through experiments on synthetic GMMs inspired by real-world data, that deep neural networks trained for classification learn predictors that approximate the derived optimal classifiers.
arXiv Detail & Related papers (2024-05-28T17:59:31Z)
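For intuition on the entry above, a minimal sketch (with toy parameters) of the generic Bayes optimal rule for balanced binary GMM data: classify by comparing class log-densities. The paper's contribution is closed-form boundaries for the high-dimensional overlapping regime; the rule below is only the starting point.

```python
from scipy.stats import multivariate_normal

# Generic Bayes optimal rule for balanced binary GMM data (toy parameters):
# assign the class whose Gaussian component has the higher log-density.
class0 = multivariate_normal(mean=[0.0, 0.0])
class1 = multivariate_normal(mean=[1.0, 1.0])

def bayes_classify(x):
    return int(class1.logpdf(x) > class0.logpdf(x))

print(bayes_classify([0.9, 0.8]))  # -> 1
```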
- A Hybrid of Generative and Discriminative Models Based on the Gaussian-coupled Softmax Layer [5.33024001730262]
We propose a method to train a hybrid of discriminative and generative models in a single neural network.
We demonstrate that the proposed hybrid model can be applied to semi-supervised learning and confidence calibration.
arXiv Detail & Related papers (2023-05-10T05:48:22Z)
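A hedged sketch of the coupling idea in the entry above, under the assumption that class logits are Gaussian log-densities over a feature vector z: the same parameters then yield both a discriminative posterior p(y|z) and a generative mixture density p(z). The parametrization below is illustrative, not the paper's exact layer.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Illustrative Gaussian-coupled parametrization: class logits are Gaussian
# log-densities over the feature vector z, so one set of parameters gives
# a discriminative posterior p(y|z) and a generative density p(z).
mus = np.array([[0.0, 0.0], [2.0, 2.0]])
log_priors = np.log([0.5, 0.5])

def posterior_and_density(z):
    log_joint = np.array([
        lp + multivariate_normal(mean=mu).logpdf(z)
        for lp, mu in zip(log_priors, mus)
    ])
    log_pz = np.logaddexp.reduce(log_joint)    # generative: log p(z)
    posterior = np.exp(log_joint - log_pz)     # discriminative: p(y|z)
    return posterior, log_pz

print(posterior_and_density([1.0, 1.2]))
```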
- Clustering with Neural Network and Index [0.0]
A new model called Clustering with Neural Network and Index (CNNI) is introduced.
CNNI uses a Neural Network to cluster data points, with an internal clustering evaluation index acting as the loss function.
arXiv Detail & Related papers (2022-12-05T12:33:26Z)
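A minimal sketch of using an internal clustering index as a loss, in the spirit of the entry above. The soft scatter ratio below is an illustrative differentiable choice, not necessarily the index the paper uses.

```python
import numpy as np

def soft_internal_index(X, S):
    """Differentiable internal clustering index: ratio of soft
    within-cluster scatter to between-cluster scatter (lower is better,
    so it can act directly as a loss for the network producing S).

    X: (n, d) data; S: (n, k) soft assignments with rows summing to 1.
    """
    weights = S.sum(axis=0)                  # soft cluster sizes, (k,)
    centers = (S.T @ X) / weights[:, None]   # soft cluster means, (k, d)
    diffs = X[:, None, :] - centers[None, :, :]
    within = (S * (diffs ** 2).sum(axis=-1)).sum()
    mu = X.mean(axis=0)
    between = (weights * ((centers - mu) ** 2).sum(axis=-1)).sum()
    return within / (between + 1e-12)

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
S = rng.dirichlet(np.ones(3), size=30)       # stand-in for network outputs
print(soft_internal_index(X, S))
```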
- A new perspective on probabilistic image modeling [92.89846887298852]
We present a new probabilistic approach for image modeling capable of density estimation, sampling, and tractable inference.
Deep convolutional Gaussian mixture models (DCGMMs) can be trained end-to-end by SGD from random initial conditions, much like CNNs.
We show that DCGMMs compare favorably to several recent probabilistic circuit (PC) and sum-product network (SPN) models in terms of inference, classification and sampling.
arXiv Detail & Related papers (2022-03-21T14:53:57Z)
- Mixture Model Auto-Encoders: Deep Clustering through Dictionary Learning [72.9458277424712]
Mixture Model Auto-Encoders (MixMate) is a novel architecture that clusters data by performing inference on a generative model.
We show that MixMate achieves competitive performance compared to state-of-the-art deep clustering algorithms.
arXiv Detail & Related papers (2021-10-10T02:30:31Z)
- Normalizing Flow based Hidden Markov Models for Classification of Speech Phones with Explainability [25.543231171094384]
In pursuit of explainability, we develop generative models for sequential data.
We combine modern neural networks (normalizing flows) and traditional generative models (hidden Markov models - HMMs).
The proposed generative models can compute the likelihood of the data and hence are directly suitable for a maximum-likelihood (ML) classification approach.
arXiv Detail & Related papers (2021-07-01T20:10:55Z)
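The ML classification recipe in the entry above reduces to an argmax over per-class log-likelihoods. A minimal sketch with Gaussians standing in for the paper's normalizing-flow HMMs:

```python
from scipy.stats import multivariate_normal

# Per-class generative models; Gaussians stand in for the paper's
# normalizing-flow HMMs, but any model exposing a log-likelihood fits.
class_models = {
    "a": multivariate_normal(mean=[0.0, 0.0]),
    "b": multivariate_normal(mean=[3.0, 3.0]),
}

def ml_classify(x):
    # maximum-likelihood rule: argmax over classes of log p(x | class)
    return max(class_models, key=lambda c: class_models[c].logpdf(x))

print(ml_classify([0.2, -0.1]))  # -> "a"
```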
- Joint Optimization of an Autoencoder for Clustering and Embedding [22.16059261437617]
We present an alternative where the autoencoder and the clustering are learned simultaneously.
The clustering is carried out by a simple neural network, referred to as the clustering module, which can be integrated into a deep autoencoder, resulting in a deep clustering model.
arXiv Detail & Related papers (2020-12-07T14:38:10Z)
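A minimal sketch of a joint objective in the spirit of the entry above: reconstruction loss plus a clustering term on the latent codes, optimized simultaneously. The soft K-means term, the linear encoder/decoder, and the weight `lam` are illustrative stand-ins for the paper's clustering module.

```python
import numpy as np

def joint_loss(X, encode, decode, centers, lam=0.1, alpha=1.0):
    """Joint objective: reconstruction plus a soft K-means style
    clustering term on the latent codes (illustrative stand-in for the
    paper's clustering module)."""
    Z = encode(X)                                    # latent codes
    recon = ((X - decode(Z)) ** 2).mean()            # reconstruction error
    d = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    logits = -alpha * d
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    S = np.exp(logits)
    S /= S.sum(axis=1, keepdims=True)                # soft assignments
    cluster = (S * d).sum(axis=1).mean()             # soft K-means cost
    return recon + lam * cluster

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 2))
encode = lambda X: X @ W                             # toy linear encoder
decode = lambda Z: Z @ W.T                           # toy linear decoder
X = rng.normal(size=(50, 4))
centers = rng.normal(size=(3, 2))
print(joint_loss(X, encode, decode, centers))
```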
- Identification of Probability weighted ARX models with arbitrary domains [75.91002178647165]
PieceWise Affine models guarantee universal approximation, local linearity, and equivalence to other classes of hybrid systems.
In this work, we focus on the identification of PieceWise Auto Regressive with eXogenous input models with arbitrary regions (NPWARX).
The architecture is conceived following the Mixture of Experts concept, developed within the machine learning field.
arXiv Detail & Related papers (2020-09-29T12:50:33Z)
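A minimal sketch of the Mixture of Experts prediction scheme mentioned in the entry above: a gating function weights local linear (ARX-style) experts on the lagged regressor. All names and the softmax gate are illustrative, not the paper's architecture.

```python
import numpy as np

def moe_predict(x_lagged, experts, gate):
    """Mixture of Experts prediction: a softmax gate weights the outputs
    of local linear (ARX-style) experts on the lagged regressor."""
    logits = gate(x_lagged)
    w = np.exp(logits - logits.max())
    w /= w.sum()                                  # gating weights
    preds = np.array([theta @ x_lagged for theta in experts])
    return w @ preds                              # weighted expert outputs

experts = [np.array([0.5, 0.1]), np.array([-0.3, 0.8])]  # local ARX params
gate = lambda x: np.array([x[0], -x[0]])                 # toy gating net
print(moe_predict(np.array([1.0, 0.2]), experts, gate))
```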
- Kernel learning approaches for summarising and combining posterior similarity matrices [68.8204255655161]
We build upon the notion of the posterior similarity matrix (PSM) in order to suggest new approaches for summarising the output of MCMC algorithms for Bayesian clustering models.
A key contribution of our work is the observation that PSMs are positive semi-definite, and hence can be used to define probabilistically-motivated kernel matrices.
arXiv Detail & Related papers (2020-09-27T14:16:14Z)
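A minimal sketch of the posterior similarity matrix underlying the entry above: PSM[i, j] is the fraction of MCMC partition samples in which items i and j co-cluster. Each sample contributes a block matrix of ones, which is positive semi-definite, so the average is too, supporting the kernel interpretation.

```python
import numpy as np

def posterior_similarity_matrix(partitions):
    """PSM[i, j] = fraction of MCMC partition samples in which items
    i and j share a cluster."""
    partitions = np.asarray(partitions)        # (n_samples, n_items)
    n_samples, n_items = partitions.shape
    psm = np.zeros((n_items, n_items))
    for labels in partitions:
        psm += labels[:, None] == labels[None, :]
    return psm / n_samples

# three posterior samples of cluster labels over five items
samples = [[0, 0, 1, 1, 2], [0, 0, 0, 1, 1], [0, 1, 1, 1, 2]]
print(posterior_similarity_matrix(samples))
```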
- Deep Autoencoding Topic Model with Scalable Hybrid Bayesian Inference [55.35176938713946]
We develop a deep autoencoding topic model (DATM) that uses a hierarchy of gamma distributions to construct its multi-stochastic-layer generative network.
We propose a Weibull upward-downward variational encoder that deterministically propagates information upward via a deep neural network, followed by a downward generative model.
The efficacy and scalability of our models are demonstrated on both unsupervised and supervised learning tasks on big corpora.
arXiv Detail & Related papers (2020-06-15T22:22:56Z)
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
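A simplified sketch of layer-wise fusion in the spirit of the entry above, substituting hard one-to-one matching (the Hungarian algorithm) for the paper's optimal-transport alignment; neuron order is arbitrary across independently trained networks, so alignment must precede averaging.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def fuse_layers(W_a, W_b):
    """One-shot layer fusion sketch: align B's neurons to A's by hard
    one-to-one matching on incoming weights (a stand-in for the paper's
    optimal-transport alignment), then average the aligned layers."""
    # cost[i, j] = distance between neuron i of model A and neuron j of B
    cost = ((W_a[:, None, :] - W_b[None, :, :]) ** 2).sum(axis=-1)
    _, cols = linear_sum_assignment(cost)
    W_b_aligned = W_b[cols]          # permute B's neurons to match A's
    return 0.5 * (W_a + W_b_aligned)

rng = np.random.default_rng(0)
W_a = rng.normal(size=(4, 8))        # 4 neurons, 8 inputs
W_b = W_a[[2, 0, 3, 1]] + 0.01 * rng.normal(size=(4, 8))
print(np.allclose(fuse_layers(W_a, W_b), W_a, atol=0.1))  # -> True
```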