Tree Variational Autoencoders
- URL: http://arxiv.org/abs/2306.08984v3
- Date: Fri, 17 Nov 2023 13:14:58 GMT
- Title: Tree Variational Autoencoders
- Authors: Laura Manduchi, Moritz Vandenhirtz, Alain Ryser, Julia Vogt
- Abstract summary: We propose a new generative hierarchical clustering model that learns a flexible tree-based posterior distribution over latent variables.
TreeVAE hierarchically divides samples according to their intrinsic characteristics, shedding light on hidden structures in the data.
- Score: 5.992683455757179
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We propose Tree Variational Autoencoder (TreeVAE), a new generative
hierarchical clustering model that learns a flexible tree-based posterior
distribution over latent variables. TreeVAE hierarchically divides samples
according to their intrinsic characteristics, shedding light on hidden
structures in the data. It adapts its architecture to discover the optimal tree
for encoding dependencies between latent variables. The proposed tree-based
generative architecture enables lightweight conditional inference and improves
generative performance by utilizing specialized leaf decoders. We show that
TreeVAE uncovers underlying clusters in the data and finds meaningful
hierarchical relations between the different groups on a variety of datasets,
including real-world imaging data. We show empirically that TreeVAE provides a more
competitive log-likelihood lower bound than its sequential counterparts.
Finally, due to its generative nature, TreeVAE is able to generate new samples
from the discovered clusters via conditional sampling.
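To make the ingredients named in the abstract concrete, the following is a minimal, illustrative PyTorch sketch of a TreeVAE-style model: a shared encoder for the root latent, a router that softly assigns samples to subtrees, per-child latent transformations, and specialized leaf decoders. It assumes a fixed depth-one binary tree (the paper grows the tree adaptively) and omits the child KL terms; all names such as ToyTreeVAE are invented for illustration, and this is not the authors' implementation.

```python
# Minimal, illustrative TreeVAE-style sketch (not the authors' code).
# Fixed depth-one binary tree: one router, two leaves with specialized decoders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyTreeVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16, h_dim=128):
        super().__init__()
        # Shared encoder producing the root latent distribution q(z_root | x).
        self.encoder = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.root_mu = nn.Linear(h_dim, z_dim)
        self.root_logvar = nn.Linear(h_dim, z_dim)
        # Router: soft assignment of each sample to one of the two leaves.
        self.router = nn.Linear(z_dim, 2)
        # Per-child latent transformations and specialized leaf decoders.
        self.child_mu = nn.ModuleList([nn.Linear(z_dim, z_dim) for _ in range(2)])
        self.child_logvar = nn.ModuleList([nn.Linear(z_dim, z_dim) for _ in range(2)])
        self.decoders = nn.ModuleList([
            nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim))
            for _ in range(2)
        ])

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.root_mu(h), self.root_logvar(h)
        z_root = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
        path_probs = F.softmax(self.router(z_root), dim=-1)          # soft routing to leaves
        recon_loss = 0.0
        for k in range(2):
            z_k = self.child_mu[k](z_root) + \
                  torch.randn_like(mu) * (0.5 * self.child_logvar[k](z_root)).exp()
            x_hat = self.decoders[k](z_k)
            # Per-leaf reconstruction, weighted by the probability of reaching that leaf.
            leaf_nll = F.binary_cross_entropy_with_logits(x_hat, x, reduction='none').sum(-1)
            recon_loss = recon_loss + path_probs[:, k] * leaf_nll
        # KL of the root latent against a standard normal prior (child KL terms omitted).
        kl_root = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)
        return (recon_loss + kl_root).mean()

# usage: loss = ToyTreeVAE()(torch.rand(8, 784)); loss.backward()
```

In this simplified picture, conditional generation from a discovered cluster would correspond to sampling a root latent from the prior, transforming it with the chosen child's transformation, and decoding it with that leaf's specialized decoder.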
Related papers
- scTree: Discovering Cellular Hierarchies in the Presence of Batch Effects in scRNA-seq Data [12.01555110624794]
scTree corrects for batch effects while simultaneously learning a tree-structured data representation.
We show empirically on seven datasets that scTree discovers the underlying clusters of the data.
arXiv Detail & Related papers (2024-06-27T16:16:55Z)
- ViTree: Single-path Neural Tree for Step-wise Interpretable Fine-grained Visual Categorization [56.37520969273242]
We introduce ViTree, a novel approach for fine-grained visual categorization.
By traversing the tree paths, ViTree effectively selects patches from transformer-processed features to highlight informative local regions.
This patch and path selectivity enhances model interpretability of ViTree, enabling better insights into the model's inner workings.
arXiv Detail & Related papers (2024-01-30T14:32:25Z)
- Hierarchical clustering with dot products recovers hidden tree structure [53.68551192799585]
In this paper we offer a new perspective on the well-established agglomerative clustering algorithm, focusing on recovery of hierarchical structure.
We recommend a simple variant of the standard algorithm, in which clusters are merged by maximum average dot product and not, for example, by minimum distance or within-cluster variance (a minimal sketch of this merge rule is given after this list).
We demonstrate that the tree output by this algorithm provides a bona fide estimate of generative hierarchical structure in data, under a generic probabilistic graphical model.
arXiv Detail & Related papers (2023-05-24T11:05:12Z)
- Individualized and Global Feature Attributions for Gradient Boosted Trees in the Presence of $\ell_2$ Regularization [0.0]
We propose Prediction Decomposition (PreDecomp), a novel individualized feature attribution for boosted trees when they are trained with $\ell_2$ regularization.
We also propose TreeInner, a family of debiased global feature attributions defined in terms of the inner product between any individualized feature attribution and labels on out-of-sample data for each tree.
arXiv Detail & Related papers (2022-11-08T17:56:22Z)
- Robustifying Algorithms of Learning Latent Trees with Vector Variables [92.18777020401484]
We present the sample complexities of Recursive Grouping (RG) and Chow-Liu Recursive Grouping (CLRG).
We robustify RG, CLRG, Neighbor Joining (NJ) and Spectral NJ (SNJ) by using the truncated inner product.
We derive the first known instance-dependent impossibility result for structure learning of latent trees.
arXiv Detail & Related papers (2021-06-02T01:37:52Z)
- Visualizing hierarchies in scRNA-seq data using a density tree-biased autoencoder [50.591267188664666]
We propose an approach for identifying a meaningful tree structure from high-dimensional scRNA-seq data.
We then introduce DTAE, a tree-biased autoencoder that emphasizes the tree structure of the data in low-dimensional space.
arXiv Detail & Related papers (2021-02-11T08:48:48Z)
- Growing Deep Forests Efficiently with Soft Routing and Learned Connectivity [79.83903179393164]
This paper further extends the deep forest idea in several important aspects.
We employ a probabilistic tree whose nodes make probabilistic routing decisions, a.k.a. soft routing, rather than hard binary decisions (a minimal illustration of soft routing follows this list).
Experiments on the MNIST dataset demonstrate that our empowered deep forests achieve performance better than or comparable to [1] and [3].
arXiv Detail & Related papers (2020-12-29T18:05:05Z)
- Rethinking Learnable Tree Filter for Generic Feature Transform [71.77463476808585]
The Learnable Tree Filter presents a remarkable approach to modeling structure-preserving relations for semantic segmentation.
To relax the geometric constraint, we reformulate it as a Markov Random Field and introduce a learnable unary term.
For semantic segmentation, we achieve leading performance (82.1% mIoU) on the Cityscapes benchmark without bells-and-whistles.
arXiv Detail & Related papers (2020-12-07T07:16:47Z)
- Tensor Decompositions in Recursive Neural Networks for Tree-Structured Data [12.069862650316262]
We introduce two new aggregation functions to encode structural knowledge from tree-structured data.
We test them on two tree classification tasks, showing the advantage of the proposed models as the tree outdegree increases.
arXiv Detail & Related papers (2020-06-18T15:40:32Z)
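The dot-product agglomerative clustering entry above describes a concrete merge rule: at every step, merge the pair of clusters with the largest average pairwise dot product. The following is a minimal NumPy sketch of that rule, with illustrative names; it makes no claim of matching the paper's implementation.

```python
# Illustrative agglomerative clustering with average dot-product linkage (not the paper's code).
import numpy as np

def average_dot_product_linkage(X, n_clusters=2):
    clusters = [[i] for i in range(len(X))]        # start from singleton clusters
    G = X @ X.T                                     # Gram matrix of pairwise dot products
    merges = []                                     # records the dendrogram, bottom up
    while len(clusters) > n_clusters:
        best, best_pair = -np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # Average dot product over all cross-cluster pairs of points.
                score = G[np.ix_(clusters[a], clusters[b])].mean()
                if score > best:
                    best, best_pair = score, (a, b)
        a, b = best_pair
        merges.append((clusters[a], clusters[b]))
        clusters[a] = clusters[a] + clusters[b]     # merge b into a
        del clusters[b]
    return clusters, merges

# usage: clusters, merges = average_dot_product_linkage(np.random.randn(20, 5), n_clusters=3)
```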
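The soft-routing entry above describes probabilistic routing decisions at internal nodes rather than hard binary splits. The sketch below illustrates the general idea of soft routing in a full binary tree, where a sample reaches each leaf with the product of the routing probabilities along its path; random weights stand in for learned parameters, and this is not the deep-forest model from the paper.

```python
# Illustrative soft routing: distribute a sample's probability mass over the leaves of a tree.
import numpy as np

def soft_route(x, depth=3, seed=0):
    """Return the probability that input x reaches each of the 2**depth leaves."""
    rng = np.random.default_rng(seed)
    d = x.shape[-1]
    leaf_probs = np.ones(1)                          # all mass starts at the root
    for _ in range(depth):
        # One sigmoid gate per current node; random weights stand in for learned ones.
        W = rng.normal(size=(leaf_probs.shape[0], d))
        p_left = 1.0 / (1.0 + np.exp(-(W @ x)))      # probability of routing left at each node
        # Each node splits its probability mass between its left and right child.
        leaf_probs = np.concatenate([leaf_probs * p_left, leaf_probs * (1.0 - p_left)])
    return leaf_probs                                # non-negative, sums to 1

# usage: p = soft_route(np.random.randn(5)); print(p.sum())  # ~1.0
```

Because every leaf receives nonzero mass, gradients flow through all routing decisions, which is the usual motivation for soft rather than hard splits.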