GRIL: A $2$-parameter Persistence Based Vectorization for Machine Learning
- URL: http://arxiv.org/abs/2304.04970v2
- Date: Fri, 30 Jun 2023 16:13:00 GMT
- Title: GRIL: A $2$-parameter Persistence Based Vectorization for Machine Learning
- Authors: Cheng Xin, Soham Mukherjee, Shreyas N. Samaga, Tamal K. Dey
- Abstract summary: We introduce a novel vector representation called Generalized Rank Invariant Landscape (GRIL) for $2$-parameter persistence modules.
We show that this vector representation is $1$-Lipschitz stable and differentiable with respect to the underlying filtration functions.
We also observe an increase in performance, indicating that GRIL can capture additional features that enrich Graph Neural Networks (GNNs).
- Score: 0.49703640686206074
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: $1$-parameter persistent homology, a cornerstone in Topological Data Analysis
(TDA), studies the evolution of topological features such as connected
components and cycles hidden in data. It has been applied to enhance the
representation power of deep learning models, such as Graph Neural Networks
(GNNs). To enrich the representations of topological features, here we propose
to study $2$-parameter persistence modules induced by bi-filtration functions.
In order to incorporate these representations into machine learning models, we
introduce a novel vector representation called Generalized Rank Invariant
Landscape (GRIL) for $2$-parameter persistence modules. We show that this
vector representation is $1$-Lipschitz stable and differentiable with respect
to the underlying filtration functions, and can be easily integrated into machine
learning models to augment the encoding of topological features. We present an
algorithm to compute the vector representation efficiently. We also test our
methods on synthetic and benchmark graph datasets, and compare the results with
previous vector representations of $1$-parameter and $2$-parameter persistence
modules. Further, we augment GNNs with GRIL features and observe an increase in
performance indicating that GRIL can capture additional features enriching
GNNs. We make the complete code for the proposed method available at
https://github.com/soham0209/mpml-graph.
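As an illustration of how GRIL features might augment a GNN, here is a minimal late-fusion sketch in PyTorch. It assumes GRIL vectors have been precomputed per graph (e.g., with the code in the repository above); the dimensions, the `GrilAugmentedClassifier` name, and the classifier head are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class GrilAugmentedClassifier(nn.Module):
    """Toy late-fusion head: concatenate a pooled GNN graph embedding
    with a precomputed GRIL vector before classification.
    Hypothetical shapes; not the paper's exact architecture."""
    def __init__(self, gnn_dim: int, gril_dim: int, num_classes: int):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(gnn_dim + gril_dim, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, gnn_embedding: torch.Tensor, gril_vec: torch.Tensor):
        # gnn_embedding: (batch, gnn_dim) pooled graph embedding from any GNN
        # gril_vec:      (batch, gril_dim) precomputed GRIL vectorization
        return self.head(torch.cat([gnn_embedding, gril_vec], dim=-1))

# Usage with random stand-ins for real embeddings and GRIL vectors:
model = GrilAugmentedClassifier(gnn_dim=32, gril_dim=100, num_classes=2)
logits = model(torch.randn(8, 32), torch.randn(8, 100))
print(logits.shape)  # torch.Size([8, 2])
```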
Related papers
- Tensor-Fused Multi-View Graph Contrastive Learning [12.412040359604163]
Graph contrastive learning (GCL) has emerged as a promising approach to enhancing the ability of graph neural networks (GNNs) to learn rich representations from unlabeled graph-structured data.
Current GCL models face challenges with computational demands and limited feature utilization.
We propose TensorMV-GCL, a novel framework that integrates extended persistent homology with GCL representations and facilitates multi-scale feature extraction.
arXiv Detail & Related papers (2024-10-20T01:40:12Z)
- Language Models are Graph Learners [70.14063765424012]
Language Models (LMs) are challenging the dominance of domain-specific models, including Graph Neural Networks (GNNs) and Graph Transformers (GTs).
We propose a novel approach that empowers off-the-shelf LMs to achieve performance comparable to state-of-the-art GNNs on node classification tasks.
arXiv Detail & Related papers (2024-10-03T08:27:54Z)
- Calibrate and Boost Logical Expressiveness of GNN Over Multi-Relational and Temporal Graphs [8.095679736030146]
We investigate $\mathcal{FOC}_2$, a fragment of first-order logic with two variables and counting quantifiers.
We propose a simple graph transformation technique, akin to a preprocessing step, which can be executed in linear time.
Our results consistently demonstrate that R$^2$-GNN with the graph transformation outperforms the baseline methods on both synthetic and real-world datasets.
arXiv Detail & Related papers (2023-11-03T00:33:24Z)
- Graph Neural Networks Provably Benefit from Structural Information: A Feature Learning Perspective [53.999128831324576]
Graph neural networks (GNNs) have pioneered advancements in graph representation learning.
This study investigates the role of graph convolution within the context of feature learning theory.
arXiv Detail & Related papers (2023-06-24T10:21:11Z)
- uGLAD: Sparse graph recovery by optimizing deep unrolled networks [11.48281545083889]
We present a novel technique to perform sparse graph recovery by optimizing deep unrolled networks.
Our model, uGLAD, builds upon and extends the state-of-the-art model GLAD to the unsupervised setting.
We evaluate model results on synthetic Gaussian data, non-Gaussian data generated from Gene Regulatory Networks, and present a case study in anaerobic digestion.
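For context on the objective that GLAD-style models unroll: sparse graph recovery here means estimating a sparse precision (inverse covariance) matrix from samples. Below is a minimal sketch of the classical graphical lasso baseline using scikit-learn on synthetic Gaussian data; this is the standard estimator, not uGLAD itself, and the chain-graph precision matrix is an illustrative assumption.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Synthetic Gaussian data from a known sparse precision (inverse covariance) matrix.
rng = np.random.default_rng(0)
theta = np.array([[2.0, 0.6, 0.0],
                  [0.6, 2.0, 0.6],
                  [0.0, 0.6, 2.0]])        # sparse precision: a 3-node chain graph
X = rng.multivariate_normal(np.zeros(3), np.linalg.inv(theta), size=2000)

# Classical graphical lasso: L1-penalized maximum-likelihood precision estimate.
est = GraphicalLasso(alpha=0.05).fit(X)
print(np.round(est.precision_, 2))        # near-zero (0, 2) entry recovers the missing edge
```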
arXiv Detail & Related papers (2022-05-23T20:20:27Z)
- Simplifying approach to Node Classification in Graph Neural Networks [7.057970273958933]
We decouple the node feature aggregation step from the depth of the graph neural network, and empirically analyze how different aggregated features play a role in prediction performance.
We show that not all features generated via aggregation steps are useful, and often using these less informative features can be detrimental to the performance of the GNN model.
We present a simple and shallow model, Feature Selection Graph Neural Network (FSGNN), and show empirically that the proposed model achieves comparable or even higher accuracy than state-of-the-art GNN models.
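A minimal sketch of this decoupling idea: precompute multi-hop aggregated features under a row-normalized adjacency, then let a shallow model learn soft selection weights over the hops. The normalization and scalar softmax weights below are illustrative simplifications, not FSGNN's exact formulation.

```python
import torch

def hop_features(A: torch.Tensor, X: torch.Tensor, num_hops: int):
    """Precompute [X, A_hat X, A_hat^2 X, ...] with row-normalized adjacency."""
    A_hat = A / A.sum(dim=1, keepdim=True).clamp(min=1.0)  # row-normalize
    feats, cur = [X], X
    for _ in range(num_hops):
        cur = A_hat @ cur
        feats.append(cur)
    return torch.stack(feats)                  # (num_hops + 1, n_nodes, n_feats)

# Learn one soft selection weight per hop (simplified to scalar weights).
A = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
X = torch.randn(3, 4)
H = hop_features(A, X, num_hops=2)
w = torch.softmax(torch.randn(H.shape[0], requires_grad=True), dim=0)
combined = (w[:, None, None] * H).sum(dim=0)   # weighted mix fed to a shallow MLP
print(combined.shape)                          # torch.Size([3, 4])
```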
arXiv Detail & Related papers (2021-11-12T14:53:22Z)
- Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction [123.20238648121445]
We propose a new self-supervised learning framework, Graph Information Aided Node feature exTraction (GIANT).
GIANT makes use of the eXtreme Multi-label Classification (XMC) formalism, which is crucial for fine-tuning the language model based on graph information.
We demonstrate the superior performance of GIANT over the standard GNN pipeline on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2021-10-29T19:55:12Z)
- Dist2Cycle: A Simplicial Neural Network for Homology Localization [66.15805004725809]
Simplicial complexes can be viewed as high-dimensional generalizations of graphs that explicitly encode multi-way ordered relations.
We propose a graph convolutional model for learning functions parametrized by the $k$-homological features of simplicial complexes.
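For readers unfamiliar with the $k$-homological features mentioned above: over a field, the $k$-th Betti number of a simplicial complex can be computed from the ranks of its boundary matrices via $\beta_k = \dim C_k - \mathrm{rank}\,\partial_k - \mathrm{rank}\,\partial_{k+1}$. A small self-contained sketch for a hollow triangle (one connected component, one $1$-cycle); the toy complex below is illustrative, not the paper's pipeline.

```python
import numpy as np

# Hollow triangle: vertices {0, 1, 2}, edges {01, 02, 12}, no filled 2-simplex.
# Boundary matrix d1 maps edges to vertices (columns ordered 01, 02, 12);
# with no 2-simplices, rank d2 = 0.
d1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]])

def betti(dim_Ck, rank_dk, rank_dk1):
    # beta_k = dim C_k - rank d_k - rank d_{k+1}  (homology over the rationals)
    return dim_Ck - rank_dk - rank_dk1

r1 = np.linalg.matrix_rank(d1)   # = 2
print(betti(3, 0, r1))           # beta_0 = 1: one connected component
print(betti(3, r1, 0))           # beta_1 = 1: one independent 1-cycle
```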
arXiv Detail & Related papers (2021-10-28T14:59:41Z)
- Graph Neural Networks with Learnable Structural and Positional Representations [83.24058411666483]
A major issue with arbitrary graphs is the absence of canonical positional information of nodes.
We introduce positional encodings (PE) of nodes and inject them into the input layer, as in Transformers.
We observe a performance increase for molecular datasets, from 2.87% up to 64.14% when considering learnable PE for both GNN classes.
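One standard concrete choice of node PE in this line of work is the low-frequency eigenvectors of the graph Laplacian, concatenated with the node features at the input layer. The sketch below is generic (the paper's learnable scheme also considers other initializations, e.g., random-walk encodings); the 4-node path graph and `k = 2` are illustrative assumptions.

```python
import numpy as np

def laplacian_pe(A: np.ndarray, k: int) -> np.ndarray:
    """k-dimensional positional encoding from the smallest nontrivial
    eigenvectors of the unnormalized graph Laplacian L = D - A."""
    L = np.diag(A.sum(axis=1)) - A
    eigvals, eigvecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    return eigvecs[:, 1:k + 1]             # skip the constant eigenvector

# 4-node path graph: 0-1-2-3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.randn(4, 8)                  # original node features
X_pe = np.concatenate([X, laplacian_pe(A, k=2)], axis=1)
print(X_pe.shape)                          # (4, 10): features + PE injected at input
```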
arXiv Detail & Related papers (2021-10-15T05:59:15Z)
- For Manifold Learning, Deep Neural Networks can be Locality Sensitive Hash Functions [14.347610075713412]
We show that neural representations can be viewed as LSH-like functions that map each input to an embedding.
An important consequence of this behavior is one-shot learning for classes unseen during training.
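To make the one-shot claim concrete: if a trained network maps inputs to embeddings that cluster by class, a single labeled example per unseen class can serve as a prototype, and classification reduces to nearest-prototype lookup in embedding space. A minimal sketch with a stand-in embedding function; `embed` is a placeholder (a fixed random projection), not the paper's network.

```python
import numpy as np

def embed(x: np.ndarray) -> np.ndarray:
    """Stand-in for a trained network's embedding map (LSH-like: nearby
    inputs should land near each other). Here: a fixed random projection."""
    rng = np.random.default_rng(42)
    W = rng.standard_normal((x.shape[-1], 16))
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# One labeled example per unseen class becomes a prototype.
protos = {c: embed(x) for c, x in
          {"cat": np.ones((1, 8)), "dog": -np.ones((1, 8))}.items()}

def one_shot_classify(x: np.ndarray) -> str:
    z = embed(x)
    # Nearest prototype by cosine similarity (embeddings are unit-norm).
    return max(protos, key=lambda c: (z @ protos[c].T).item())

print(one_shot_classify(np.ones((1, 8)) + 0.1))  # -> "cat"
```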
arXiv Detail & Related papers (2021-03-11T18:57:47Z)
- Improving Robustness and Generality of NLP Models Using Disentangled Representations [62.08794500431367]
Supervised neural networks first map an input $x$ to a single representation $z$, and then map $z$ to the output label $y$.
We present methods to improve robustness and generality of NLP models from the standpoint of disentangled representation learning.
We show that models trained with the proposed criteria provide better robustness and domain adaptation ability in a wide range of supervised learning tasks.
arXiv Detail & Related papers (2020-09-21T02:48:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.