Kernel KMeans clustering splits for end-to-end unsupervised decision
trees
- URL: http://arxiv.org/abs/2402.12232v1
- Date: Mon, 19 Feb 2024 15:39:39 GMT
- Title: Kernel KMeans clustering splits for end-to-end unsupervised decision
trees
- Authors: Louis Ohl, Pierre-Alexandre Mattei, Mickaël Leclercq, Arnaud Droit, Frédéric Precioso
- Abstract summary: We present a novel end-to-end trained unsupervised binary tree for clustering: Kauri.
This method performs a greedy maximisation of the kernel KMeans objective without requiring the definition of centroids.
With a linear kernel, Kauri performs identically to recent unsupervised trees; for other kernels, it often outperforms the concatenation of kernel KMeans and a CART decision tree.
- Score: 16.539007424389254
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Trees are convenient models for obtaining explainable predictions on
relatively small datasets. Although there are many proposals for the end-to-end
construction of such trees in supervised learning, learning a tree end-to-end
for clustering without labels remains an open challenge. As most works focus on
interpreting with trees the result of another clustering algorithm, we present
here a novel end-to-end trained unsupervised binary tree for clustering: Kauri.
This method performs a greedy maximisation of the kernel KMeans objective
without requiring the definition of centroids. We compare this model on
multiple datasets with recent unsupervised trees and show that Kauri performs
identically when using a linear kernel. For other kernels, Kauri often
outperforms the concatenation of kernel KMeans and a CART decision tree.
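As a rough illustration of the objective involved (a minimal sketch, not the authors' implementation), the centroid-free kernel KMeans objective can be evaluated directly from a precomputed kernel matrix, and a greedy axis-aligned split can be scored by the gain in that objective; the function names and the naive threshold scan below are illustrative assumptions.

```python
import numpy as np

def kernel_kmeans_objective(K, labels):
    """Centroid-free kernel KMeans objective:
    sum_k (1 / |C_k|) * sum_{i,j in C_k} K[i, j].
    Since trace(K) is constant, maximising this quantity is equivalent to
    minimising the within-cluster variance in the kernel feature space,
    without ever forming centroids."""
    total = 0.0
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        total += K[np.ix_(idx, idx)].sum() / len(idx)
    return total

def greedy_axis_split(X, K, feature):
    """Illustrative greedy step: scan thresholds on one feature and keep
    the binary split that maximises the objective above. An unsupervised
    tree in this spirit would repeat such splits leaf by leaf."""
    best_gain, best_threshold = -np.inf, None
    for t in np.unique(X[:, feature])[:-1]:
        labels = (X[:, feature] > t).astype(int)
        gain = kernel_kmeans_objective(K, labels)
        if gain > best_gain:
            best_gain, best_threshold = gain, t
    return best_threshold, best_gain

# Toy usage with a linear kernel.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
K = X @ X.T  # any positive semi-definite kernel matrix works here
threshold, gain = greedy_axis_split(X, K, feature=0)
```

With the linear kernel K = X Xᵀ this reduces to the ordinary KMeans objective, which is consistent with the abstract's remark that Kauri and existing unsupervised trees coincide in that case.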
Related papers
- ODTE -- An ensemble of multi-class SVM-based oblique decision trees [0.7182449176083623]
ODTE is a new ensemble that uses oblique decision trees as base classifiers.
We introduce STree, the base algorithm for growing oblique decision trees.
ODTE ranks consistently above its competitors.
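As a hedged illustration of the basic ingredient (the STree/ODTE implementations are not reproduced here), an oblique split can be obtained by fitting a linear SVM at a node and routing samples by the sign of its decision function; the binary case and the scikit-learn usage below are assumptions for the sketch.

```python
import numpy as np
from sklearn.svm import LinearSVC

def oblique_split(X, y):
    """Fit a linear SVM at a tree node and use its hyperplane w.x + b = 0
    as an oblique split, i.e. a linear combination of features rather
    than a single-feature threshold."""
    svm = LinearSVC().fit(X, y)
    side = svm.decision_function(X) > 0          # boolean routing mask
    return svm.coef_.ravel(), svm.intercept_[0], side

# Toy usage: two Gaussian blobs, one oblique cut separates them.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
w, b, left_mask = oblique_split(X, y)
```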
arXiv Detail & Related papers (2024-11-20T14:58:32Z)
- Decision Trees for Interpretable Clusters in Mixture Models and Deep Representations [5.65604054654671]
We introduce the notion of an explainability-to-noise ratio for mixture models.
We propose an algorithm that takes as input a mixture model and constructs a suitable tree in data-independent time.
We prove upper and lower bounds on the error rate of the resulting decision tree.
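The paper's data-independent construction is not reproduced here; as a loose stand-in, the sketch below builds an axis-aligned tree from the mixture-component means alone, recursively cutting the most spread-out coordinate at its midpoint. The splitting rule is an assumption for illustration, not the authors' algorithm.

```python
import numpy as np

def tree_from_means(means, components=None):
    """Recursively separate mixture-component means (assumed distinct)
    with axis-aligned cuts until each leaf holds a single component.
    Only the means are used, never the data, so the runtime does not
    depend on the dataset size."""
    if components is None:
        components = list(range(len(means)))
    if len(components) == 1:
        return {"leaf": components[0]}
    pts = means[components]
    feat = int(np.argmax(pts.max(axis=0) - pts.min(axis=0)))  # widest spread
    thr = (pts[:, feat].max() + pts[:, feat].min()) / 2.0      # midpoint cut
    left = [c for c in components if means[c, feat] <= thr]
    right = [c for c in components if means[c, feat] > thr]
    return {"feature": feat, "threshold": thr,
            "left": tree_from_means(means, left),
            "right": tree_from_means(means, right)}

# Example: three well-separated 2-D component means.
means = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
tree = tree_from_means(means)
```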
arXiv Detail & Related papers (2024-11-03T14:00:20Z)
- Learning a Decision Tree Algorithm with Transformers [75.96920867382859]
We introduce MetaTree, a transformer-based model trained via meta-learning to directly produce strong decision trees.
We fit both greedy decision trees and globally optimized decision trees on a large number of datasets, and train MetaTree to produce only the trees that achieve strong generalization performance.
arXiv Detail & Related papers (2024-02-06T07:40:53Z)
- Hierarchical clustering with dot products recovers hidden tree structure [53.68551192799585]
In this paper we offer a new perspective on the well established agglomerative clustering algorithm, focusing on recovery of hierarchical structure.
We recommend a simple variant of the standard algorithm, in which clusters are merged by maximum average dot product and not, for example, by minimum distance or within-cluster variance.
We demonstrate that the tree output by this algorithm provides a bona fide estimate of generative hierarchical structure in data, under a generic probabilistic graphical model.
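A minimal sketch of the recommended variant, assuming a naive O(n^3) implementation rather than the paper's: at each step, merge the two clusters whose members have the largest average pairwise dot product.

```python
import numpy as np

def dot_product_agglomeration(X):
    """Agglomerative clustering that merges, at each step, the two
    clusters with the maximum average pairwise dot product
    (instead of minimum distance or within-cluster variance).
    Returns the list of merges, i.e. the estimated hierarchy."""
    clusters = [[i] for i in range(len(X))]
    merges = []
    G = X @ X.T  # Gram matrix of dot products
    while len(clusters) > 1:
        best, pair = -np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                score = G[np.ix_(clusters[a], clusters[b])].mean()
                if score > best:
                    best, pair = score, (a, b)
        a, b = pair
        merges.append((clusters[a], clusters[b], best))
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return merges

# Toy usage on a few points; the merge order encodes the hierarchy.
X = np.array([[2.0, 0.1], [1.9, -0.1], [0.1, 2.0], [-0.1, 1.8]])
print(dot_product_agglomeration(X))
```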
arXiv Detail & Related papers (2023-05-24T11:05:12Z)
- Exact and Approximate Hierarchical Clustering Using A* [51.187990314731344]
We introduce a new approach based on A* search for clustering.
We overcome the prohibitively large search space by combining A* with a novel trellis data structure.
We empirically demonstrate that our method achieves substantially higher quality results than baselines for a particle physics use case and other clustering benchmarks.
arXiv Detail & Related papers (2021-04-14T18:15:27Z)
- Growing Deep Forests Efficiently with Soft Routing and Learned Connectivity [79.83903179393164]
This paper further extends the deep forest idea in several important aspects.
We employ a probabilistic tree whose nodes make probabilistic routing decisions (soft routing) rather than hard binary decisions.
Experiments on the MNIST dataset demonstrate that our empowered deep forests achieve performance better than or comparable to [1], [3].
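A small sketch of soft routing under the assumption of sigmoid gates: each internal node sends a sample left with a probability rather than a hard decision, and the prediction is the probability-weighted mixture over leaves. The node parameterisation below is illustrative, not the paper's architecture.

```python
import numpy as np

def soft_routing_predict(x, tree):
    """Probabilistic (soft) routing: instead of a hard left/right decision,
    each node sends the sample left with probability sigmoid(w.x + b) and
    right with the complement; the prediction is the expectation over leaves."""
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def recurse(node, prob):
        if "value" in node:                        # leaf node
            return prob * node["value"]
        p_left = sigmoid(node["w"] @ x + node["b"])
        return (recurse(node["left"], prob * p_left)
                + recurse(node["right"], prob * (1.0 - p_left)))

    return recurse(tree, 1.0)

# Toy depth-1 soft tree with scalar leaf values.
tree = {"w": np.array([1.0, -1.0]), "b": 0.0,
        "left": {"value": 0.2}, "right": {"value": 0.9}}
print(soft_routing_predict(np.array([2.0, 0.0]), tree))
```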
arXiv Detail & Related papers (2020-12-29T18:05:05Z)
- (Decision and regression) tree ensemble based kernels for regression and classification [2.28438857884398]
Tree based ensembles such as Breiman's random forest (RF) and Gradient Boosted Trees (GBT) can be interpreted as implicit kernel generators.
We show that for continuous targets, the RF/GBT kernels are competitive with their respective ensembles in higher-dimensional scenarios.
We provide the results from real life data sets for regression and classification to show how these insights may be leveraged in practice.
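A minimal sketch of the implicit kernel generated by a random forest, using scikit-learn: two points are as similar as the fraction of trees in which they share a leaf. Only the RF case is shown, and the helper name is illustrative.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

def random_forest_kernel(forest, A, B):
    """K[i, j] = fraction of trees in which A[i] and B[j] land in the
    same leaf -- the implicit kernel generated by the ensemble."""
    leaves_a = forest.apply(A)   # shape (n_a, n_trees): leaf index per tree
    leaves_b = forest.apply(B)
    return (leaves_a[:, None, :] == leaves_b[None, :, :]).mean(axis=2)

X, y = make_regression(n_samples=300, n_features=10, random_state=0)
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
K = random_forest_kernel(rf, X[:5], X[:5])   # 5x5 similarity block
print(np.round(K, 2))
```

Such a kernel matrix can then be plugged into any kernel method, e.g. kernel ridge regression with a precomputed kernel.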
arXiv Detail & Related papers (2020-12-19T16:52:58Z)
- Convex Polytope Trees [57.56078843831244]
Convex polytope trees (CPT) are proposed to expand the family of decision trees through an interpretable generalization of their decision boundary.
We develop a greedy method to efficiently construct CPT and scalable end-to-end training algorithms for the tree parameters when the tree structure is given.
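A sketch of what a single CPT-style node decision could look like, assuming the polytope is given as an intersection of halfspaces: a sample is routed to the "inside" child only if it satisfies every linear constraint. The matrices below are placeholders, not learned parameters.

```python
import numpy as np

def polytope_route(x, W, b):
    """Route a sample at a convex-polytope node: it goes to the 'inside'
    child only if it satisfies every halfspace constraint W @ x + b <= 0,
    so the inside region is an intersection of halfspaces (a convex polytope)."""
    return bool(np.all(W @ x + b <= 0))

# Example: a 2-D node whose inside region is the unit square [0, 1]^2.
W = np.array([[-1.0,  0.0],   # -x1 <= 0      (x1 >= 0)
              [ 1.0,  0.0],   #  x1 - 1 <= 0  (x1 <= 1)
              [ 0.0, -1.0],   # -x2 <= 0      (x2 >= 0)
              [ 0.0,  1.0]])  #  x2 - 1 <= 0  (x2 <= 1)
b = np.array([0.0, -1.0, 0.0, -1.0])
print(polytope_route(np.array([0.5, 0.5]), W, b))  # True: inside the square
print(polytope_route(np.array([1.5, 0.5]), W, b))  # False: outside
```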
arXiv Detail & Related papers (2020-10-21T19:38:57Z)
- From Trees to Continuous Embeddings and Back: Hyperbolic Hierarchical Clustering [33.000371053304676]
We present the first continuous relaxation of Dasgupta's discrete optimization problem with provable quality guarantees.
We show that even approximate solutions found with gradient descent have higher quality than agglomerative clusterings.
We also highlight the flexibility of HypHC using end-to-end training in a downstream classification task.
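For context, Dasgupta's discrete objective charges every pair of points the size of the subtree at which they are first separated; the sketch below evaluates it for a small binary hierarchy given pairwise similarities. HypHC's continuous hyperbolic relaxation itself is not reproduced here.

```python
import numpy as np

def dasgupta_cost(tree, W):
    """Dasgupta cost of a binary hierarchy: sum over internal nodes of
    |leaves(node)| * (total similarity between the pairs of points that
    the node separates into its two children).
    `tree` is a nested tuple of leaf indices, W a symmetric similarity matrix."""
    def leaves(node):
        return [node] if isinstance(node, int) else leaves(node[0]) + leaves(node[1])

    def recurse(node):
        if isinstance(node, int):
            return 0.0
        left, right = leaves(node[0]), leaves(node[1])
        cross = sum(W[i, j] for i in left for j in right)
        return (len(left) + len(right)) * cross + recurse(node[0]) + recurse(node[1])

    return recurse(tree)

# Toy example: points 0,1 are similar, 2,3 are similar, cross-group similarity is low.
W = np.array([[0.0, 1.0, 0.1, 0.1],
              [1.0, 0.0, 0.1, 0.1],
              [0.1, 0.1, 0.0, 1.0],
              [0.1, 0.1, 1.0, 0.0]])
good = ((0, 1), (2, 3))   # separates dissimilar pairs early (lower cost)
bad = ((0, 2), (1, 3))    # separates similar pairs at the root (higher cost)
print(dasgupta_cost(good, W), dasgupta_cost(bad, W))
```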
arXiv Detail & Related papers (2020-10-01T13:43:19Z)
- MurTree: Optimal Classification Trees via Dynamic Programming and Search [61.817059565926336]
We present a novel algorithm for learning optimal classification trees based on dynamic programming and search.
Our approach uses only a fraction of the time required by the state-of-the-art and can handle datasets with tens of thousands of instances.
arXiv Detail & Related papers (2020-07-24T17:06:55Z)
- Tree Index: A New Cluster Evaluation Technique [2.790947019327459]
We introduce a cluster evaluation technique called Tree Index.
The Tree Index finds margins between clusters that make them easy to learn, without the complications of Minimum Description Length.
We show that, on the clustering results (obtained by various techniques) on a brain dataset, Tree Index discriminates between reasonable and non-sensible clusters.
arXiv Detail & Related papers (2020-03-24T13:41:12Z)