Kernel Biclustering algorithm in Hilbert Spaces
- URL: http://arxiv.org/abs/2208.03675v1
- Date: Sun, 7 Aug 2022 08:41:46 GMT
- Title: Kernel Biclustering algorithm in Hilbert Spaces
- Authors: Marcos Matabuena, J.C. Vidal, Oscar Hernan Madrid Padilla, Dino Sejdinovic
- Abstract summary: We develop a new model-free biclustering algorithm in abstract spaces using the notions of energy distance and the maximum mean discrepancy.
The proposed method can learn more general and complex cluster shapes than most existing literature approaches.
Our results are similar to state-of-the-art methods in their optimal scenarios, assuming a proper kernel choice.
- Score: 8.303238963864885
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Biclustering algorithms partition data and covariates simultaneously,
providing new insights in several domains, such as analyzing gene expression to
discover new biological functions. This paper develops a new model-free
biclustering algorithm in abstract spaces using the notions of energy distance
(ED) and the maximum mean discrepancy (MMD) -- two distances between
probability distributions capable of handling complex data such as curves or
graphs. The proposed method can learn more general and complex cluster shapes
than most existing literature approaches, which usually focus on detecting mean
and variance differences. Although the biclustering configurations of our
approach are constrained to create disjoint structures at the datum and
covariate levels, the results are competitive. Our results are similar to
state-of-the-art methods in their optimal scenarios, assuming a proper kernel
choice, outperforming them when cluster differences are concentrated in
higher-order moments. The model's performance has been tested in several
situations that involve simulated and real-world datasets. Finally, new
theoretical consistency results are established using some tools of the theory
of optimal transport.
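The two discrepancies named in the abstract are straightforward to estimate from samples. The following is a minimal sketch, my illustration rather than the authors' implementation: the Gaussian kernel and its bandwidth `sigma` are assumptions, and the energy-distance estimator is the plain V-statistic.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Gaussian kernel matrix k(x, y) = exp(-||x - y||^2 / (2 * sigma^2))."""
    sq = (X ** 2).sum(1)[:, None] + (Y ** 2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-sq / (2.0 * sigma ** 2))

def mmd2_unbiased(X, Y, sigma=1.0):
    """Unbiased estimate of the squared MMD between samples X (n x d) and Y (m x d)."""
    n, m = len(X), len(Y)
    Kxx = gaussian_kernel(X, X, sigma)
    Kyy = gaussian_kernel(Y, Y, sigma)
    Kxy = gaussian_kernel(X, Y, sigma)
    # Drop the diagonal terms so the within-sample averages are unbiased.
    term_xx = (Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))
    term_yy = (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
    return term_xx + term_yy - 2.0 * Kxy.mean()

def energy_distance(X, Y):
    """V-statistic estimate of the energy distance (biased, but always nonnegative)."""
    def mean_dist(U, V):
        return np.sqrt(((U[:, None, :] - V[None, :, :]) ** 2).sum(-1)).mean()
    return 2.0 * mean_dist(X, Y) - mean_dist(X, X) - mean_dist(Y, Y)
```

Both quantities vanish (in expectation) when the two samples come from the same distribution, and grow when the distributions differ, including in higher-order moments, which is what lets a distance-based biclustering criterion go beyond mean and variance differences.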
Related papers
- Maximum Likelihood Estimation on Stochastic Blockmodels for Directed Graph Clustering [22.421702511126373]
We formulate clustering as estimating the underlying communities in a directed stochastic block model.
We introduce two efficient and interpretable directed clustering algorithms, a spectral clustering algorithm and a semidefinite programming based clustering algorithm.
arXiv Detail & Related papers (2024-03-28T15:47:13Z)
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
- Semi-Supervised Clustering via Structural Entropy with Different Constraints [30.215985625884922]
We present Semi-supervised clustering via Structural Entropy (SSE), a novel method that can incorporate different types of constraints from diverse sources to perform both partitioning and hierarchical clustering.
We evaluate SSE on nine clustering datasets and compare it with eleven semi-supervised partitioning and hierarchical clustering methods.
arXiv Detail & Related papers (2023-12-18T04:00:40Z)
- Rethinking k-means from manifold learning perspective [122.38667613245151]
We present a new clustering algorithm which directly detects clusters of data without mean estimation.
Specifically, we construct a distance matrix between data points using a Butterworth filter.
To well exploit the complementary information embedded in different views, we leverage the tensor Schatten p-norm regularization.
arXiv Detail & Related papers (2023-05-12T03:01:41Z)
- Learning to Bound Counterfactual Inference in Structural Causal Models from Observational and Randomised Data [64.96984404868411]
We derive a likelihood characterisation for the overall data that leads us to extend a previous EM-based algorithm.
The new algorithm learns to approximate the unidentifiability region of the model parameters from such mixed data sources.
It delivers interval approximations to counterfactual results, which collapse to points in the identifiable case.
arXiv Detail & Related papers (2022-12-06T12:42:11Z)
- Bregman Power k-Means for Clustering Exponential Family Data [11.434503492579477]
We bridge algorithmic advances to classical work on hard clustering under Bregman divergences.
The elegant properties of Bregman divergences allow us to maintain closed form updates in a simple and transparent algorithm.
We consider thorough empirical analyses on simulated experiments and a case study on rainfall data, finding that the proposed method outperforms existing peer methods in a variety of non-Gaussian data settings.
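The closed-form updates mentioned above follow from a key property of Bregman divergences: the arithmetic mean is the exact minimizer of the total divergence to the points of a cluster. A minimal Lloyd-style sketch, my illustration rather than the paper's implementation, using the generalized KL divergence (suited to Poisson-type count data) as an assumed example divergence:

```python
import numpy as np

def gen_kl(X, mu):
    """Generalized KL (I-divergence), a Bregman divergence, summed over coordinates."""
    with np.errstate(divide="ignore", invalid="ignore"):
        t = np.where(X > 0, X * np.log(X / mu), 0.0)  # x*log(x/mu) -> 0 as x -> 0
    return (t - X + mu).sum(axis=-1)

def bregman_kmeans(X, k, n_iters=20):
    """Hard clustering under a Bregman divergence (Lloyd-style alternation)."""
    X = X.astype(float)
    # Deterministic farthest-point initialization (an assumption of this sketch).
    centers = [X[0] + 1e-9]
    for _ in range(k - 1):
        d = np.min(np.stack([gen_kl(X, c) for c in centers], axis=1), axis=1)
        centers.append(X[d.argmax()] + 1e-9)
    centers = np.array(centers)
    for _ in range(n_iters):
        D = np.stack([gen_kl(X, c) for c in centers], axis=1)  # (n, k) divergences
        labels = D.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                # The cluster mean is the exact Bregman centroid: the closed-form update.
                centers[j] = X[labels == j].mean(axis=0) + 1e-9
    return labels, centers
```

Only the assignment step changes with the choice of divergence; the update step is always a plain mean, which is what keeps the algorithm simple and transparent.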
arXiv Detail & Related papers (2022-06-22T06:09:54Z)
- Partial Counterfactual Identification from Observational and Experimental Data [83.798237968683]
We develop effective Monte Carlo algorithms to approximate the optimal bounds from an arbitrary combination of observational and experimental data.
Our algorithms are validated extensively on synthetic and real-world datasets.
arXiv Detail & Related papers (2021-10-12T02:21:30Z)
- Conjugate Mixture Models for Clustering Multimodal Data [24.640116037967985]
The problem of multimodal clustering arises whenever the data are gathered with several physically different sensors.
We show that multimodal clustering can be addressed within a novel framework, namely conjugate mixture models.
arXiv Detail & Related papers (2020-12-09T10:13:22Z)
- Scalable Hierarchical Agglomerative Clustering [65.66407726145619]
Existing scalable hierarchical clustering methods sacrifice quality for speed.
We present a scalable, agglomerative method for hierarchical clustering that does not sacrifice quality and scales to billions of data points.
arXiv Detail & Related papers (2020-10-22T15:58:35Z)
- Biclustering with Alternating K-Means [5.089110111757978]
We provide a new formulation of the biclustering problem based on the idea of minimizing the empirical clustering risk.
We propose a simple and novel algorithm that finds a local minimum by alternating the use of an adapted version of the k-means clustering algorithm between columns and rows.
The results demonstrate that our algorithm is able to detect meaningful structures in the data and outperform other competing biclustering methods in various settings and situations.
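The alternating scheme described above can be sketched in a few lines. This is a minimal illustration of the idea, not the authors' code; the plain k-means inner loop, the farthest-point initialization, and the cluster-mean row/column features are all assumptions of the sketch.

```python
import numpy as np

def kmeans(X, k, n_iters=20):
    """Plain Lloyd k-means with deterministic farthest-point initialization."""
    centers = [X[0].astype(float)]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()].astype(float))
    centers = np.array(centers)
    for _ in range(n_iters):
        D = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = D.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def alternating_biclustering(A, k_rows, k_cols, n_outer=5):
    """Alternate k-means over rows and columns to get disjoint biclusters."""
    col_labels = kmeans(A.T, k_cols)
    for _ in range(n_outer):
        # Represent each row by its mean inside every column cluster, then cluster rows.
        row_feats = np.stack(
            [A[:, col_labels == c].mean(axis=1) if (col_labels == c).any()
             else np.zeros(A.shape[0]) for c in range(k_cols)], axis=1)
        row_labels = kmeans(row_feats, k_rows)
        # Symmetrically, represent each column by its mean inside every row cluster.
        col_feats = np.stack(
            [A[row_labels == r].mean(axis=0) if (row_labels == r).any()
             else np.zeros(A.shape[1]) for r in range(k_rows)], axis=1)
        col_labels = kmeans(col_feats, k_cols)
    return row_labels, col_labels
```

Each half-step can only decrease the within-bicluster clustering risk, so the alternation converges to a local minimum, which is the guarantee the summary refers to.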
arXiv Detail & Related papers (2020-09-09T20:15:24Z)
- Clustering Binary Data by Application of Combinatorial Optimization Heuristics [52.77024349608834]
We study clustering methods for binary data, first defining aggregation criteria that measure the compactness of clusters.
Five new and original methods are introduced, using neighborhoods and population behavior optimization metaheuristics.
On a set of 16 data tables generated by a quasi-Monte Carlo experiment, one of the aggregation criteria, using the L1 dissimilarity, is compared against hierarchical clustering and a variant of k-means: partitioning around medoids (PAM).
arXiv Detail & Related papers (2020-01-06T23:33:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.