A Domain-Oblivious Approach for Learning Concise Representations of
Filtered Topological Spaces
- URL: http://arxiv.org/abs/2105.12208v1
- Date: Tue, 25 May 2021 20:44:28 GMT
- Authors: Yu Qin, Brittany Terese Fasy, Carola Wenk, and Brian Summa
- Abstract summary: We propose a persistence diagram hashing framework that learns a binary code representation of persistence diagrams.
This framework is built upon a generative adversarial network (GAN) with a diagram distance loss function to steer the learning process.
Our proposed method is directly applicable to various datasets without retraining the model.
- Score: 7.717214217542406
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Persistence diagrams have been widely used to quantify the underlying
features of filtered topological spaces in data visualization. In many
applications, computing distances between diagrams is essential; however, these
computations have been challenging due to their high computational cost.
In this paper, we propose a persistence diagram hashing framework that learns a
binary code representation of persistence diagrams, which allows for fast
computation of distances. This framework is built upon a generative adversarial
network (GAN) with a diagram distance loss function to steer the learning
process. Instead of attempting to transform diagrams into vectorized
representations, we hash diagrams into binary codes, which have natural
advantages in large-scale tasks. The training of this model is domain-oblivious
in that it can be computed purely from synthetic, randomly created diagrams. As
a consequence, our proposed method is directly applicable to various datasets
without retraining the model. These binary codes, when compared
using fast Hamming distance, better maintain topological similarity properties
between datasets than other vectorized representations. To evaluate this
method, we apply our framework to the problem of diagram clustering and we
compare the quality and performance of our approach to the state-of-the-art. In
addition, we show the scalability of our approach on a dataset with 10k
persistence diagrams, which is not possible with current techniques. Moreover,
our experimental results demonstrate that our method is significantly faster
and uses less memory, while producing comparisons of comparable or better
quality.
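The core speed claim rests on replacing expensive diagram distances with bitwise comparisons of fixed-length binary codes. The following is a minimal sketch of that comparison step; the codes here are random stand-ins, whereas in the paper they would be produced by the trained GAN-based hashing network.

```python
import numpy as np

# Random 64-bit stand-ins for the binary codes of two persistence diagrams.
rng = np.random.default_rng(0)
code_a = rng.integers(0, 2, size=64, dtype=np.uint8)
code_b = rng.integers(0, 2, size=64, dtype=np.uint8)

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    """Number of differing bits between two binary codes."""
    return int(np.count_nonzero(a != b))

# The same distance via packed bits and XOR, closer to how large-scale
# nearest-neighbor search over binary codes is typically implemented.
packed_xor = np.packbits(code_a) ^ np.packbits(code_b)
d_packed = int(np.unpackbits(packed_xor).sum())
```

Because a Hamming distance is an XOR followed by a popcount, comparing 10k codes pairwise stays cheap, which is what enables the large-scale clustering experiment described above.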
Related papers
- Topograph: An efficient Graph-Based Framework for Strictly Topology Preserving Image Segmentation [78.54656076915565]
Topological correctness plays a critical role in many image segmentation tasks.
Most networks are trained using pixel-wise loss functions, such as Dice, neglecting topological accuracy.
We propose a novel, graph-based framework for topologically accurate image segmentation.
arXiv Detail & Related papers (2024-11-05T16:20:14Z)
- Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z)
- Discrete transforms of quantized persistence diagrams [0.5249805590164902]
We introduce Qupid, a novel and simple method for vectorizing persistence diagrams.
A key feature is the choice of log-scaled grids that emphasize the information contained near the diagonal in persistence diagrams.
We conduct an in-depth experimental analysis of Qupid, showing that the simplicity of our method results in very low computational costs.
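The entry above describes a grid-based vectorization with log-scaled bins. The snippet below is an illustrative sketch of that general idea only, not the exact Qupid construction: it bins (birth, persistence) points on a grid whose persistence axis is log-scaled, so cells near the diagonal (small persistence) are finer. The toy diagram and bin counts are made up for illustration.

```python
import numpy as np

# Toy persistence diagram as (birth, death) pairs.
diagram = np.array([[0.1, 0.15], [0.2, 0.9], [0.4, 0.45]])
births = diagram[:, 0]
pers = diagram[:, 1] - diagram[:, 0]  # persistence = death - birth

birth_bins = np.linspace(0.0, 1.0, 5)   # uniform bins on the birth axis
pers_bins = np.logspace(-3, 0, 5)       # log-scaled bins: finer near the diagonal

# 2D histogram of the diagram's points, flattened to a fixed-length vector.
hist, _, _ = np.histogram2d(births, pers, bins=[birth_bins, pers_bins])
vector = hist.flatten()
```

Any diagram, regardless of its number of points, maps to a vector of the same length, which is what makes such representations directly usable by standard classifiers and clustering algorithms.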
arXiv Detail & Related papers (2023-12-28T16:11:11Z)
- Learnable Graph Matching: A Practical Paradigm for Data Association [74.28753343714858]
We propose a general learnable graph matching method to address these issues.
Our method achieves state-of-the-art performance on several MOT datasets.
For image matching, our method outperforms state-of-the-art methods on a popular indoor dataset, ScanNet.
arXiv Detail & Related papers (2023-03-27T17:39:00Z)
- Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z)
- A simple way to learn metrics between attributed graphs [11.207372645301094]
We propose a new Simple Graph Metric Learning - SGML - model with few trainable parameters.
This model allows us to build an appropriate distance from a database of labeled (attributed) graphs to improve the performance of simple classification algorithms.
arXiv Detail & Related papers (2022-09-26T14:32:38Z)
- Condensing Graphs via One-Step Gradient Matching [50.07587238142548]
We propose a one-step gradient matching scheme, which performs gradient matching for only one single step without training the network weights.
Our theoretical analysis shows this strategy can generate synthetic graphs that lead to lower classification loss on real graphs.
In particular, we are able to reduce the dataset size by 90% while approximating up to 98% of the original performance.
arXiv Detail & Related papers (2022-06-15T18:20:01Z)
- Template based Graph Neural Network with Optimal Transport Distances [11.56532171513328]
Current Graph Neural Networks (GNN) architectures rely on two important components: node features embedding through message passing, and aggregation with a specialized form of pooling.
We propose in this work a novel point of view, which places distances to some learnable graph templates at the core of the graph representation.
This distance embedding is constructed thanks to an optimal transport distance: the Fused Gromov-Wasserstein (FGW) distance.
arXiv Detail & Related papers (2022-05-31T12:24:01Z)
- Effective and Efficient Graph Learning for Multi-view Clustering [173.8313827799077]
We propose an effective and efficient graph learning model for multi-view clustering.
Our method exploits the similarity between graphs of different views via minimization of the tensor Schatten p-norm.
Our proposed algorithm is time-economical, obtains stable results, and scales well with the data size.
arXiv Detail & Related papers (2021-08-15T13:14:28Z)
- Time-varying Graph Representation Learning via Higher-Order Skip-Gram with Negative Sampling [0.456877715768796]
We build upon the fact that the skip-gram embedding approach implicitly performs a matrix factorization.
We show that higher-order skip-gram with negative sampling is able to disentangle the role of nodes and time.
We empirically evaluate our approach using time-resolved face-to-face proximity data, showing that the learned time-varying graph representations outperform state-of-the-art methods.
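The entry above builds on the known result (Levy & Goldberg, 2014) that skip-gram with negative sampling (SGNS) implicitly factorizes a shifted pointwise mutual information (PMI) matrix. The toy snippet below illustrates that target matrix only; the co-occurrence counts are made up, and this is not the higher-order construction the paper itself proposes.

```python
import numpy as np

# Toy symmetric word-context co-occurrence counts.
co = np.array([[0., 4., 1.],
               [4., 0., 2.],
               [1., 2., 0.]])
total = co.sum()
p_wc = co / total                                # joint probabilities
p_w = co.sum(axis=1, keepdims=True) / total      # word marginals
p_c = co.sum(axis=0, keepdims=True) / total      # context marginals
k = 5                                            # number of negative samples

with np.errstate(divide="ignore"):
    pmi = np.log(p_wc / (p_w * p_c))             # log(0) -> -inf for zero counts
shifted = pmi - np.log(k)                        # SGNS implicit target: PMI - log k
sppmi = np.maximum(shifted, 0.0)                 # common sparse (clipped) variant
```

Higher-order variants of this factorization view, as in the paper above, let the factors separately capture node identity and time.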
arXiv Detail & Related papers (2020-06-25T12:04:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.