Wasserstein Auto-Encoders of Merge Trees (and Persistence Diagrams)
- URL: http://arxiv.org/abs/2307.02509v2
- Date: Thu, 9 Nov 2023 12:57:18 GMT
- Title: Wasserstein Auto-Encoders of Merge Trees (and Persistence Diagrams)
- Authors: Mahieu Pont, Julien Tierny
- Abstract summary: This paper presents a computational framework for the Wasserstein auto-encoding of merge trees (MT-WAE).
In contrast to traditional auto-encoders which operate on vectorized data, our formulation explicitly manipulates merge trees on their associated metric space at each layer of the network.
Experiments on public ensembles demonstrate the efficiency of our algorithms, with MT-WAE computations on the order of minutes on average.
- Score: 5.384630221560809
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a computational framework for the Wasserstein
auto-encoding of merge trees (MT-WAE), a novel extension of the classical
auto-encoder neural network architecture to the Wasserstein metric space of
merge trees. In contrast to traditional auto-encoders which operate on
vectorized data, our formulation explicitly manipulates merge trees on their
associated metric space at each layer of the network, resulting in superior
accuracy and interpretability. Our novel neural network approach can be
interpreted as a non-linear generalization of previous linear attempts [79] at
merge tree encoding. It also trivially extends to persistence diagrams.
Extensive experiments on public ensembles demonstrate the efficiency of our
algorithms, with MT-WAE computations on the order of minutes on average. We
show the utility of our contributions in two applications adapted from previous
work on merge tree encoding [79]. First, we apply MT-WAE to merge tree
compression, by concisely representing them with their coordinates in the final
layer of our auto-encoder. Second, we document an application to dimensionality
reduction, by exploiting the latent space of our auto-encoder, for the visual
analysis of ensemble data. We illustrate the versatility of our framework by
introducing two penalty terms, to help preserve in the latent space both the
Wasserstein distances between merge trees, as well as their clusters. In both
applications, quantitative experiments assess the relevance of our framework.
Finally, we provide a C++ implementation that can be used for reproducibility.
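As a rough illustration of the metric-preservation penalty described in the abstract, the sketch below compares pairwise latent distances against precomputed Wasserstein distances between merge trees. This is not the authors' C++ implementation; the function name `metric_penalty`, its arguments, and the max-normalization are assumptions made for the example.

```python
import numpy as np

def metric_penalty(input_dists, latent, eps=1e-9):
    """Penalty encouraging pairwise latent distances to match
    precomputed Wasserstein distances between merge trees.

    input_dists: (n, n) symmetric matrix of Wasserstein distances.
    latent: (n, d) latent coordinates produced by the encoder.
    """
    n = len(latent)
    # Pairwise Euclidean distances in latent space.
    diff = latent[:, None, :] - latent[None, :, :]
    latent_dists = np.sqrt((diff ** 2).sum(-1))
    # Normalize both matrices so the penalty is scale-invariant.
    a = input_dists / (input_dists.max() + eps)
    b = latent_dists / (latent_dists.max() + eps)
    # Mean squared discrepancy over the off-diagonal pairs.
    mask = ~np.eye(n, dtype=bool)
    return float(((a - b)[mask] ** 2).mean())
```

In a training loop, a term like this (and an analogous one for cluster preservation) would be weighted and added to the reconstruction loss; the penalty is zero exactly when the latent embedding reproduces the input distance matrix up to a global scale.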
Related papers
- Rapid and Precise Topological Comparison with Merge Tree Neural Networks [7.443474354626664]
We introduce the Merge Tree Neural Network (MTNN), a learned neural network model designed for merge tree comparison.
We first demonstrate how to train graph neural networks, which emerged as effective encoders for graphs, in order to produce embeddings of merge trees in vector spaces.
We then formulate the novel MTNN model that further improves the similarity comparisons by integrating the tree and node embeddings with a new topological attention mechanism.
arXiv Detail & Related papers (2024-04-08T21:26:04Z) - Interpretable Spectral Variational AutoEncoder (ISVAE) for time series
clustering [48.0650332513417]
We introduce a novel model that incorporates an interpretable bottleneck, termed the Filter Bank (FB), at the outset of a Variational Autoencoder (VAE).
This arrangement compels the VAE to attend to the most informative segments of the input signal.
By deliberately constraining the VAE with this FB, we promote the development of an encoding that is discernible, separable, and of reduced dimensionality.
arXiv Detail & Related papers (2023-10-18T13:06:05Z) - TreeFormer: a Semi-Supervised Transformer-based Framework for Tree
Counting from a Single High Resolution Image [6.789370732159176]
Tree density estimation and counting using single aerial and satellite images is a challenging task in photogrammetry and remote sensing.
We propose the first semi-supervised transformer-based framework for tree counting, which reduces the need for expensive tree annotations for remote sensing images.
Our model was evaluated on two benchmark tree counting datasets, Jiangsu, and Yosemite, as well as a new dataset, KCL-London, created by ourselves.
arXiv Detail & Related papers (2023-07-12T12:19:36Z) - Structure-Unified M-Tree Coding Solver for MathWord Problem [57.825176412485504]
In previous work, models designed by taking into account the properties of the binary tree structure of mathematical expressions at the output side have achieved better performance.
In this paper, we propose the Structure-Unified M-Tree Coding Solver (SUMC-Solver), which applies a tree with any M branches (M-tree) to unify the output structures.
Experimental results on the widely used MAWPS and Math23K datasets have demonstrated that SUMC-Solver not only outperforms several state-of-the-art models but also performs much better under low-resource conditions.
arXiv Detail & Related papers (2022-10-22T12:20:36Z) - PointTree: Transformation-Robust Point Cloud Encoder with Relaxed K-D
Trees [27.641101804012152]
We propose PointTree, a point cloud encoder that is robust to transformations based on relaxed K-D trees.
Key to our approach is the design of the division rule in K-D trees by using principal component analysis (PCA).
In addition to this novel architecture design, we further improve performance by introducing pre-alignment.
arXiv Detail & Related papers (2022-08-11T17:59:09Z) - Principal Geodesic Analysis of Merge Trees (and Persistence Diagrams) [8.430851504111585]
We introduce an efficient, iterative algorithm which exploits shared-memory parallelism, as well as an analytic expression of the fitting energy gradient.
We show the utility of our contributions by extending to merge trees two typical PCA applications.
We present a dimensionality reduction framework exploiting the first two directions of the MT-PGA basis to generate two-dimensional layouts.
arXiv Detail & Related papers (2022-07-22T09:17:22Z) - Auto-Parsing Network for Image Captioning and Visual Question Answering [101.77688388554097]
We propose an Auto-Parsing Network (APN) to discover and exploit the input data's hidden tree structures.
Specifically, we impose a Probabilistic Graphical Model (PGM) parameterized by the attention operations on each self-attention layer to incorporate a sparsity assumption.
arXiv Detail & Related papers (2021-08-24T08:14:35Z) - Recursive Tree Grammar Autoencoders [3.791857415239352]
We propose a novel autoencoder approach that encodes trees via a bottom-up grammar and decodes trees via a tree grammar.
We show experimentally that our proposed method improves the autoencoding error, training time, and optimization score on four benchmark datasets.
arXiv Detail & Related papers (2020-12-03T17:37:25Z) - OctSqueeze: Octree-Structured Entropy Model for LiDAR Compression [77.8842824702423]
We present a novel deep compression algorithm to reduce the memory footprint of LiDAR point clouds.
Our method exploits the sparsity and structural redundancy between points to reduce the memory footprint.
Our algorithm can be used to reduce the onboard and offboard storage of LiDAR points for applications such as self-driving cars.
arXiv Detail & Related papers (2020-05-14T17:48:49Z) - Auto-Encoding Twin-Bottleneck Hashing [141.5378966676885]
This paper proposes an efficient and adaptive code-driven graph.
It is updated by decoding in the context of an auto-encoder.
Experiments on benchmarked datasets clearly show the superiority of our framework over the state-of-the-art hashing methods.
arXiv Detail & Related papers (2020-02-27T05:58:12Z) - Learning Autoencoders with Relational Regularization [89.53065887608088]
A new framework is proposed for learning autoencoders of data distributions.
We minimize the discrepancy between the model and target distributions with a relational regularization.
We implement the framework with two scalable algorithms, making it applicable for both probabilistic and deterministic autoencoders.
arXiv Detail & Related papers (2020-02-07T17:27:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.