Understanding the Power of Persistence Pairing via Permutation Test
- URL: http://arxiv.org/abs/2001.06058v1
- Date: Thu, 16 Jan 2020 20:13:20 GMT
- Title: Understanding the Power of Persistence Pairing via Permutation Test
- Authors: Chen Cai, Yusu Wang
- Abstract summary: We carry out a range of experiments on both graph data and shape data, aiming to decouple and inspect the effects of different factors involved.
For graph classification tasks, we note that while persistence pairing yields consistent improvement over various benchmark datasets, most discriminative power comes from critical values.
For shape segmentation and classification, however, we note that persistence pairing shows significant power on most of the benchmark datasets.
- Score: 13.008323851750442
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently many efforts have been made to incorporate persistence diagrams, one
of the major tools in topological data analysis (TDA), into machine learning
pipelines. To better understand the power and limitation of persistence
diagrams, we carry out a range of experiments on both graph data and shape
data, aiming to decouple and inspect the effects of different factors involved.
To this end, we also propose the so-called \emph{permutation test} for
persistence diagrams to delineate critical values and pairings of critical
values. For graph classification tasks, we note that while persistence pairing
yields consistent improvement over various benchmark datasets, it appears that
for various filtration functions tested, most discriminative power comes from
critical values. For shape segmentation and classification, however, we note
that persistence pairing shows significant power on most of the benchmark
datasets, and improves over both summaries based on merely critical values, and
those based on permutation tests. Our results help provide insights on when
persistence diagram based summaries could be more suitable.
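The permutation test described above can be sketched in code. The idea, as the abstract presents it, is to separate the information carried by the *critical values* themselves from the information carried by how they are *paired* in a persistence diagram. A minimal way to do this is to pool all critical values of a diagram, shuffle them, and re-pair them at random, so any summary computed on the result sees the same critical values but an uninformative pairing. This is a hedged sketch of that baseline idea, not the authors' exact procedure; the function name and representation (a list of `(birth, death)` tuples) are illustrative assumptions.

```python
import random

def permute_pairing(diagram, rng=None):
    """Destroy the pairing information in a persistence diagram while
    keeping its multiset of critical values.

    `diagram` is a list of (birth, death) critical-value pairs. All
    critical values are pooled, shuffled, and re-paired at random, with
    each new pair ordered so that birth <= death. (Illustrative sketch
    of the permutation-test baseline, not the paper's exact algorithm.)
    """
    rng = rng or random.Random()
    values = [v for pair in diagram for v in pair]  # pool all critical values
    rng.shuffle(values)
    # Re-pair consecutive values, ordering each pair as (birth, death).
    return [tuple(sorted(values[i:i + 2])) for i in range(0, len(values), 2)]
```

Comparing a classifier's accuracy on summaries of the true diagrams against summaries of such permuted diagrams then indicates how much discriminative power comes from the pairing itself rather than from the critical values alone.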
Related papers
- Self-Training with Pseudo-Label Scorer for Aspect Sentiment Quad Prediction [54.23208041792073]
Aspect Sentiment Quad Prediction (ASQP) aims to predict all quads (aspect term, aspect category, opinion term, sentiment polarity) for a given review.
A key challenge in the ASQP task is the scarcity of labeled data, which limits the performance of existing methods.
We propose a self-training framework with a pseudo-label scorer, wherein a scorer assesses the match between reviews and their pseudo-labels.
arXiv Detail & Related papers (2024-06-26T05:30:21Z)
- A Comprehensive Graph Pooling Benchmark: Effectiveness, Robustness and Generalizability [12.156602513449663]
We have constructed a comprehensive benchmark that includes 17 graph pooling methods and 28 different graph datasets.
This benchmark systematically assesses the performance of graph pooling methods in three dimensions, i.e., effectiveness, robustness, and generalizability.
arXiv Detail & Related papers (2024-06-13T12:04:40Z)
- Challenging the Myth of Graph Collaborative Filtering: a Reasoned and Reproducibility-driven Analysis [50.972595036856035]
We present a code that successfully replicates results from six popular and recent graph recommendation models.
We compare these graph models with traditional collaborative filtering models that historically performed well in offline evaluations.
By investigating the information flow from users' neighborhoods, we aim to identify which models are influenced by intrinsic features in the dataset structure.
arXiv Detail & Related papers (2023-08-01T09:31:44Z)
- On Modality Bias Recognition and Reduction [70.69194431713825]
We study the modality bias problem in the context of multi-modal classification.
We propose a plug-and-play loss function method, whereby the feature space for each label is adaptively learned.
Our method yields remarkable performance improvements compared with the baselines.
arXiv Detail & Related papers (2022-02-25T13:47:09Z)
- Provable Guarantees for Self-Supervised Deep Learning with Spectral Contrastive Loss [72.62029620566925]
Recent works in self-supervised learning have advanced the state-of-the-art by relying on the contrastive learning paradigm.
Our work analyzes contrastive learning without assuming conditional independence of positive pairs.
We propose a loss that performs spectral decomposition on the population augmentation graph and can be succinctly written as a contrastive learning objective.
arXiv Detail & Related papers (2021-06-08T07:41:02Z)
- Smart Vectorizations for Single and Multiparameter Persistence [8.504400925390296]
We introduce two new topological summaries for single and multiparameter persistence, namely, saw functions and multi-persistence grid functions.
These new topological summaries can be regarded as the complexity measures of the evolving subspaces determined by the filtration.
We derive theoretical guarantees on the stability of the new saw and multi-persistence grid functions and illustrate their applicability for graph classification tasks.
arXiv Detail & Related papers (2021-04-10T15:09:31Z)
- Catastrophic Forgetting in Deep Graph Networks: an Introductory Benchmark for Graph Classification [12.423303337249795]
We study the phenomenon of catastrophic forgetting in the graph representation learning scenario.
We find that replay is the most effective strategy so far, and that it also benefits the most from the use of regularization.
arXiv Detail & Related papers (2021-03-22T12:07:21Z)
- Towards Understanding Sample Variance in Visually Grounded Language Generation: Evaluations and Observations [67.4375210552593]
We design experiments to understand an important but often ignored problem in visually grounded language generation.
Given that humans have different utilities and visual attention, how will the sample variance in multi-reference datasets affect the models' performance?
We show that it is of paramount importance to report variance in experiments, and that human-generated references can vary drastically across datasets/tasks, revealing the nature of each task.
arXiv Detail & Related papers (2020-10-07T20:45:14Z)
- Robust Persistence Diagrams using Reproducing Kernels [15.772439913138161]
We develop a framework for constructing robust persistence diagrams from superlevel filtrations of robust density estimators constructed using kernels.
We demonstrate the superiority of the proposed approach on benchmark datasets.
arXiv Detail & Related papers (2020-06-17T17:16:52Z)
- Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally different examples with different labels, a.k.a. counterfactual or contrastive examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.