Mixed data Deep Gaussian Mixture Model: A clustering model for mixed
datasets
- URL: http://arxiv.org/abs/2010.06661v2
- Date: Wed, 10 Mar 2021 18:20:58 GMT
- Title: Mixed data Deep Gaussian Mixture Model: A clustering model for mixed
datasets
- Authors: Robin Fuchs, Denys Pommeret, Cinzia Viroli
- Abstract summary: We introduce a model-based clustering method called the Mixed Deep Gaussian Mixture Model (MDGMM).
This architecture is flexible and can be adapted to mixed as well as to continuous or non-continuous data.
Our model provides continuous low-dimensional representations of the data which can be a useful tool to visualize mixed datasets.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Clustering mixed data presents numerous challenges inherent to the very
heterogeneous nature of the variables. A clustering algorithm should be able,
despite this heterogeneity, to extract the discriminant information carried by
the variables in order to form groups. In this work we introduce a multilayer,
model-based clustering method called the Mixed Deep Gaussian Mixture Model
(MDGMM) that can be viewed as an automatic way to merge the clusterings
performed separately on continuous and non-continuous data. This architecture
is flexible and can be adapted to mixed as well as to purely continuous or
non-continuous data. In this sense, we generalize Generalized Linear Latent
Variable Models and Deep Gaussian Mixture Models. We also design a new
initialisation strategy and a data-driven method that selects the best
specification of the model and the optimal number of clusters for a given
dataset "on the fly". In addition, our model provides continuous
low-dimensional representations of the data, which are a useful tool for
visualizing mixed datasets. Finally, we validate the performance of our
approach by comparing its results with state-of-the-art mixed data clustering
models on several commonly used datasets.
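To make the idea above concrete, the hedged sketch below chains standard scikit-learn pieces: standardized/one-hot encoding of the mixed variables, a FactorAnalysis embedding as a crude stand-in for the GLLVM-like latent layer, and a GaussianMixture whose number of components is picked by BIC as a stand-in for the "on the fly" model selection. This is not the authors' MDGMM, which trains its latent layers and the mixture jointly; the function name cluster_mixed and the demo data are purely illustrative.

```python
# Minimal two-stage sketch of mixed-data clustering via a continuous latent
# embedding. NOT the MDGMM itself: the embedding and the mixture are fitted
# separately here, whereas the paper estimates them jointly.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis
from sklearn.mixture import GaussianMixture


def cluster_mixed(df, categorical_cols, latent_dim=2, k_range=range(2, 7), seed=0):
    """Embed a mixed DataFrame into a continuous latent space, then cluster it."""
    num_cols = [c for c in df.columns if c not in categorical_cols]
    # Standardize numeric columns; one-hot encode categorical ones.
    X_num = StandardScaler().fit_transform(df[num_cols]) if num_cols else np.empty((len(df), 0))
    X_cat = (pd.get_dummies(df[categorical_cols]).to_numpy(dtype=float)
             if categorical_cols else np.empty((len(df), 0)))
    X = np.hstack([X_num, X_cat])

    # Continuous low-dimensional representation (stand-in for the GLLVM-like layer).
    Z = FactorAnalysis(n_components=latent_dim, random_state=seed).fit_transform(X)

    # Pick the number of clusters "on the fly" with BIC, then keep that mixture.
    best = min(
        (GaussianMixture(n_components=k, n_init=5, random_state=seed).fit(Z) for k in k_range),
        key=lambda gm: gm.bic(Z),
    )
    return best.predict(Z), Z, best.n_components


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 300
    demo = pd.DataFrame({
        "income": np.concatenate([rng.normal(30, 5, n // 2), rng.normal(60, 5, n // 2)]),
        "age": np.concatenate([rng.normal(25, 3, n // 2), rng.normal(50, 4, n // 2)]),
        "sector": rng.choice(["A", "B", "C"], size=n),
    })
    labels, Z, k = cluster_mixed(demo, categorical_cols=["sector"])
    print("selected k =", k, "| cluster sizes:", np.bincount(labels))
```

The 2-dimensional latent matrix Z returned by the sketch can be scatter-plotted and coloured by cluster label, which mirrors the visualization use of the continuous low-dimensional representations mentioned in the abstract.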
Related papers
- Finite Mixtures of Multivariate Poisson-Log Normal Factor Analyzers for
Clustering Count Data [0.8499685241219366]
A class of eight parsimonious mixture models based on the mixtures of factor analyzers model is introduced.
The proposed models are explored in the context of clustering discrete data arising from RNA sequencing studies.
arXiv Detail & Related papers (2023-11-13T21:23:15Z) - Hard Regularization to Prevent Deep Online Clustering Collapse without
Data Augmentation [65.268245109828]
Online deep clustering refers to the joint use of a feature extraction network and a clustering model to assign cluster labels to each new data point or batch as it is processed.
While faster and more versatile than offline methods, online clustering can easily reach a collapsed solution in which the encoder maps all inputs to the same point and every sample is assigned to a single cluster.
We propose a method that does not require data augmentation and that, unlike existing methods, regularizes the hard assignments.
arXiv Detail & Related papers (2023-03-29T08:23:26Z) - Sparse and geometry-aware generalisation of the mutual information for joint discriminative clustering and feature selection [19.066989850964756]
We introduce a discriminative clustering model that maximises a geometry-aware generalisation of the mutual information called GEMINI.
This algorithm avoids the burden of feature exploration and scales easily to high-dimensional data and large numbers of samples while relying only on a discriminative clustering model.
Our results show that Sparse GEMINI is a competitive algorithm and has the ability to select relevant subsets of variables with respect to the clustering without using relevance criteria or prior hypotheses.
arXiv Detail & Related papers (2023-02-07T10:52:04Z) - Model Based Co-clustering of Mixed Numerical and Binary Data [0.0]
Co-clustering is a data mining technique used to extract the underlying block structure between the rows and columns of a data matrix.
In this article, we extend latent block model based co-clustering to the case of mixed data.
arXiv Detail & Related papers (2022-12-22T14:16:08Z) - Unified Multi-View Orthonormal Non-Negative Graph Based Clustering
Framework [74.25493157757943]
We formulate a novel clustering model, which exploits the non-negative feature property and incorporates the multi-view information into a unified joint learning framework.
We also explore, for the first time, the multi-model non-negative graph-based approach to clustering data based on deep features.
arXiv Detail & Related papers (2022-11-03T08:18:27Z) - Learning from aggregated data with a maximum entropy model [73.63512438583375]
We show how a new model, similar to a logistic regression, may be learned from aggregated data alone by approximating the unobserved feature distribution with a maximum entropy hypothesis.
We present empirical evidence on several public datasets that the model learned this way can achieve performance comparable to that of a logistic model trained on the full unaggregated data.
arXiv Detail & Related papers (2022-10-05T09:17:27Z) - Time Series Clustering with an EM algorithm for Mixtures of Linear
Gaussian State Space Models [0.0]
We propose a novel model-based time series clustering method with mixtures of linear Gaussian state space models.
The proposed method uses a new expectation-maximization algorithm for the mixture model to estimate the model parameters.
Experiments on a simulated dataset demonstrate the effectiveness of the method in clustering, parameter estimation, and model selection.
arXiv Detail & Related papers (2022-08-25T07:41:23Z) - Personalized Federated Learning via Convex Clustering [72.15857783681658]
We propose a family of algorithms for personalized federated learning with locally convex user costs.
The proposed framework is based on a generalization of convex clustering in which the differences between different users' models are penalized (a generic convex clustering sketch is given after this list).
arXiv Detail & Related papers (2022-02-01T19:25:31Z) - Mixture Model Auto-Encoders: Deep Clustering through Dictionary Learning [72.9458277424712]
Mixture Model Auto-Encoders (MixMate) is a novel architecture that clusters data by performing inference on a generative model.
We show that MixMate achieves competitive performance compared to state-of-the-art deep clustering algorithms.
arXiv Detail & Related papers (2021-10-10T02:30:31Z) - Vine copula mixture models and clustering for non-Gaussian data [0.0]
We propose a novel vine copula mixture model for continuous data.
We show that the model-based clustering algorithm with vine copula mixture models outperforms the other model-based clustering techniques.
arXiv Detail & Related papers (2021-02-05T16:04:26Z) - Robust Finite Mixture Regression for Heterogeneous Targets [70.19798470463378]
We propose a finite mixture regression (FMR) model that finds sample clusters and jointly models multiple incomplete mixed-type targets.
We provide non-asymptotic oracle performance bounds for our model under a high-dimensional learning framework.
The results show that our model can achieve state-of-the-art performance.
arXiv Detail & Related papers (2020-10-12T03:27:07Z)
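As referenced in the personalized federated learning entry above, the NumPy sketch below illustrates the plain, centralized sum-of-norms (convex) clustering objective that the federated framework generalizes: it minimizes 0.5 * sum_i ||x_i - u_i||^2 + lam * sum_{i<j} ||u_i - u_j|| by diminishing-step subgradient descent and reads clusters off the (nearly) fused centroids. It is not the federated algorithm from that paper; the merge tolerance, step sizes, and function name are illustrative choices.

```python
# Generic sum-of-norms (convex) clustering sketch in plain NumPy, solved with
# diminishing-step subgradient descent. Illustrative only.
import numpy as np


def convex_clustering(X, lam=0.05, n_iter=2000, lr=0.05, merge_tol=0.1):
    """Minimize 0.5 * sum_i ||x_i - u_i||^2 + lam * sum_{i<j} ||u_i - u_j||_2."""
    n, d = X.shape
    U = X.astype(float).copy()               # one centroid variable per data point
    for t in range(n_iter):
        diff = U[:, None, :] - U[None, :, :]  # (n, n, d): u_i - u_j
        dist = np.linalg.norm(diff, axis=2)   # (n, n) pairwise centroid distances
        np.fill_diagonal(dist, np.inf)        # self-pairs contribute nothing
        fusion = (diff / np.maximum(dist, 1e-12)[:, :, None]).sum(axis=1)
        grad = (U - X) + lam * fusion         # subgradient of the objective
        U -= (lr / np.sqrt(t + 1.0)) * grad   # diminishing step size
    # Centroids that (nearly) fused define the clusters: simple thresholding.
    labels = np.full(n, -1, dtype=int)
    k = 0
    for i in range(n):
        if labels[i] == -1:
            labels[np.linalg.norm(U - U[i], axis=1) < merge_tol] = k
            k += 1
    return labels, U


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(3.0, 0.1, (20, 2))])
    labels, U = convex_clustering(X)
    print("clusters found:", labels.max() + 1)
```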
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.