From Latent Graph to Latent Topology Inference: Differentiable Cell
Complex Module
- URL: http://arxiv.org/abs/2305.16174v2
- Date: Thu, 3 Aug 2023 13:46:09 GMT
- Title: From Latent Graph to Latent Topology Inference: Differentiable Cell
Complex Module
- Authors: Claudio Battiloro, Indro Spinelli, Lev Telyatnikov, Michael Bronstein,
Simone Scardapane, Paolo Di Lorenzo
- Abstract summary: The Differentiable Cell Complex Module (DCM) is a novel learnable function that computes cell probabilities in the complex to improve the downstream task.
We show how to integrate DCM with cell complex message-passing network layers and train it in an end-to-end fashion.
Our model is tested on several homophilic and heterophilic graph datasets and is shown to outperform other state-of-the-art techniques.
- Score: 21.383018558790674
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Latent Graph Inference (LGI) relaxed the reliance of Graph Neural Networks
(GNNs) on a given graph topology by dynamically learning it. However, most
LGI methods assume they are given a (noisy, incomplete, improvable, ...) input graph
to rewire and can only learn regular graph topologies. In the wake of the
success of Topological Deep Learning (TDL), we study Latent Topology Inference
(LTI) for learning higher-order cell complexes (with sparse and non-regular
topology) describing multi-way interactions between data points. To this aim,
we introduce the Differentiable Cell Complex Module (DCM), a novel learnable
function that computes cell probabilities in the complex to improve the
downstream task. We show how to integrate DCM with cell complex message-passing
network layers and train it in an end-to-end fashion, thanks to a two-step
inference procedure that avoids an exhaustive search across all possible cells
in the input, thus maintaining scalability. Our model is tested on several
homophilic and heterophilic graph datasets and shown to outperform other
state-of-the-art techniques, offering significant improvements especially in
cases where an input graph is not provided.
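To make the two-step inference concrete, here is a minimal sketch, not the authors' code: the Gumbel-sigmoid sampler, the helper names, and the restriction to triangles as 2-cells are illustrative assumptions. Edges are first sampled from pairwise similarities of learned node embeddings; cell probabilities are then computed only for triangles already closed by the sampled edges, which is what avoids an exhaustive search over all candidate cells.

```python
# Minimal sketch of two-step latent topology inference (illustrative
# names and a Gumbel-sigmoid sampler; the paper's DCM details differ).
import itertools
import torch

def sample_edges(z, temperature=0.5):
    """Step 1: sparse edge samples from pairwise embedding distances."""
    logits = -torch.cdist(z, z)                   # closer nodes -> higher logit
    logits = logits.masked_fill(torch.eye(len(z), dtype=torch.bool), float("-inf"))
    gumbel = -torch.log(-torch.log(torch.rand_like(logits)))
    probs = torch.sigmoid((logits + gumbel) / temperature)
    hard = (probs > 0.5).float()
    return hard + probs - probs.detach()          # straight-through gradients

def score_triangles(adj, z):
    """Step 2: score only the 2-cells (triangles) whose boundary edges
    were sampled, instead of searching all O(n^3) candidate cells."""
    cells = []
    for i, j, k in itertools.combinations(range(len(z)), 3):
        if adj[i, j] > 0 and adj[j, k] > 0 and adj[i, k] > 0:
            score = torch.sigmoid(
                -(z[i] - z[j]).norm() - (z[j] - z[k]).norm() - (z[i] - z[k]).norm()
            )
            cells.append(((i, j, k), score))
    return cells

z = torch.randn(8, 16, requires_grad=True)        # learned node embeddings
adj = sample_edges(z)
two_cells = score_triangles(adj, z)               # sparse set of 2-cells
```

In the paper the sampler is a sparse differentiable operator and the cells are general polygons; the point preserved here is that step 2 only ever touches cells whose boundary already exists, which keeps the procedure scalable.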
Related papers
- Cell Graph Transformer for Nuclei Classification [78.47566396839628]
We develop a cell graph transformer (CGT) that treats nodes and edges as input tokens to enable learnable adjacency and information exchange among all nodes.
Poorly initialized features can lead to noisy self-attention scores and inferior convergence.
We propose a novel topology-aware pretraining method that leverages a graph convolutional network (GCN) to learn a feature extractor.
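The token construction can be pictured as follows; a hedged sketch with made-up dimensions, using a stock transformer encoder rather than the CGT architecture:

```python
# Nodes and edges as one token sequence for a transformer encoder
# (hypothetical sizes; not the CGT authors' implementation).
import torch
import torch.nn as nn

d = 64
node_feats = torch.randn(100, d)       # one token per nucleus (node)
edge_feats = torch.randn(300, d)       # one token per edge
tokens = torch.cat([node_feats, edge_feats]).unsqueeze(0)   # (1, 400, d)

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True),
    num_layers=2,
)
out = encoder(tokens)                  # all-pair attention = learnable adjacency
node_logits = out[0, :100]             # node tokens feed the classifier head
```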
arXiv Detail & Related papers (2024-02-20T12:01:30Z)
- Learning Cartesian Product Graphs with Laplacian Constraints [10.15283812819547]
We study the problem of learning Cartesian product graphs under Laplacian constraints.
We establish statistical consistency for the penalized maximum likelihood estimation.
We also extend our method for efficient joint graph learning and imputation in the presence of structural missing values.
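The structure being exploited is that the Laplacian of a Cartesian product graph is the Kronecker sum of the factor Laplacians, L = L1 ⊗ I + I ⊗ L2. A small NumPy check of that identity (illustrative only, not the paper's estimator):

```python
# The Laplacian of a Cartesian product graph is the Kronecker sum of
# the factor Laplacians: L = L1 ⊗ I + I ⊗ L2.  Small check in NumPy.
import numpy as np

def laplacian(A):
    return np.diag(A.sum(axis=1)) - A

A1 = np.array([[0, 1], [1, 0]])                   # path graph on 2 nodes
A2 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])  # path graph on 3 nodes

L1, L2 = laplacian(A1), laplacian(A2)
n1, n2 = len(A1), len(A2)

# Kronecker-sum Laplacian of the product graph (a 2x3 grid)
L = np.kron(L1, np.eye(n2)) + np.kron(np.eye(n1), L2)

# Adjacency of the Cartesian product, built directly, for verification
A = np.kron(A1, np.eye(n2)) + np.kron(np.eye(n1), A2)
assert np.allclose(L, laplacian(A))
```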
arXiv Detail & Related papers (2024-02-12T22:48:30Z)
- Tensor-view Topological Graph Neural Network [16.433092191206534]
Graph neural networks (GNNs) have recently gained growing attention in graph learning.
Existing GNNs only use local information from a very limited neighborhood around each node.
We propose a novel Tensor-view Topological Graph Neural Network (TTG-NN), a simple yet effective class of topological deep learning models.
Real data experiments show that the proposed TTG-NN outperforms 20 state-of-the-art methods on various graph benchmarks.
arXiv Detail & Related papers (2024-01-22T14:55:01Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
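The scalability hinges on replacing the n x n softmax with a random-feature kernel, so all-pair propagation costs O(n) rather than O(n^2). A hedged sketch of that trick, using a Performer-style positive feature map; NodeFormer's kernelized Gumbel-Softmax operator additionally injects Gumbel noise for differentiable structure sampling:

```python
# Linear-cost all-pair message passing via random features
# (sketch; not NodeFormer's exact operator).
import torch

def feature_map(x, proj):
    # positive random features approximating the softmax kernel exp(q.k/sqrt(d))
    xp = (x @ proj) / (x.shape[-1] ** 0.25)
    sq = x.square().sum(-1, keepdim=True) / (2 * x.shape[-1] ** 0.5)
    return torch.exp(xp - sq)

n, d, m = 1000, 64, 128
q, k, v = (torch.randn(n, d) for _ in range(3))
proj = torch.randn(d, m)

phi_q, phi_k = feature_map(q, proj), feature_map(k, proj)
kv = phi_k.T @ v                        # (m, d): aggregate keys/values once
normalizer = phi_q @ phi_k.sum(0)       # (n,)
out = (phi_q @ kv) / normalizer.unsqueeze(-1)   # O(n m d), never forms n x n
```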
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- Learning Strong Graph Neural Networks with Weak Information [64.64996100343602]
We develop a principled approach to the problem of graph learning with weak information (GLWI).
We propose D$^2$PT, a dual-channel GNN framework that performs long-range information propagation not only on the input graph with incomplete structure, but also on a global graph that encodes global semantic similarities.
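A minimal sketch of the dual-channel idea, assuming a simple symmetric normalization and a feature-similarity kNN graph as the global channel; the names and the fusion rule are illustrative, not the D$^2$PT architecture:

```python
# Dual-channel propagation: input topology + global similarity graph
# (illustrative sketch).
import torch
import torch.nn.functional as F

def normalize_adj(A):
    A = A + torch.eye(len(A))                     # add self-loops
    d = A.sum(1).clamp(min=1).rsqrt()
    return d[:, None] * A * d[None, :]            # D^-1/2 A D^-1/2

def knn_graph(X, k=5):
    sim = F.normalize(X, dim=1) @ F.normalize(X, dim=1).T
    idx = sim.topk(k + 1, dim=1).indices[:, 1:]   # k neighbors, skip self
    A = torch.zeros(len(X), len(X))
    A.scatter_(1, idx, 1.0)
    return torch.maximum(A, A.T)                  # symmetrize

X = torch.randn(50, 16)
A_in = (torch.rand(50, 50) < 0.05).float()        # incomplete input graph
A_in = torch.maximum(A_in, A_in.T)

local = normalize_adj(A_in) @ X                   # channel 1: input topology
glob = normalize_adj(knn_graph(X)) @ X            # channel 2: global similarity
H = 0.5 * (local + glob)                          # fuse the two channels
```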
arXiv Detail & Related papers (2023-05-29T04:51:09Z)
- Latent Graph Inference using Product Manifolds [0.0]
We generalize the discrete Differentiable Graph Module (dDGM) for latent graph learning.
Our novel approach is tested on a wide range of datasets, and outperforms the original dDGM model.
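The generalization can be sketched as follows: latent coordinates live in a product of a Euclidean and a hyperbolic (Poincaré-ball) factor, and edge probabilities decay with the product-manifold distance. The coordinate split, ball projection, and temperature below are assumptions for illustration:

```python
# Edge probabilities from a product-manifold (Euclidean x hyperbolic)
# latent metric (sketch; not the dDGM authors' code).
import torch

def poincare_dist(u, v, eps=1e-6):
    su = u.square().sum(-1).clamp(max=1 - eps)    # keep ||u||^2 < 1 (in the ball)
    sv = v.square().sum(-1).clamp(max=1 - eps)
    sq = (u - v).square().sum(-1)
    return torch.acosh(1 + 2 * sq / ((1 - su) * (1 - sv)) + eps)

z = torch.randn(20, 8)                            # latent coordinates per node
z_e = z[:, :4]                                    # Euclidean factor
raw = z[:, 4:]
z_h = raw / (1 + raw.norm(dim=-1, keepdim=True))  # map into the unit ball

d_e = torch.cdist(z_e, z_e)
d_h = poincare_dist(z_h.unsqueeze(1), z_h.unsqueeze(0))
dist = torch.sqrt(d_e.square() + d_h.square())    # product-manifold distance
edge_probs = torch.exp(-dist / 0.5)               # temperature 0.5: assumed
```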
arXiv Detail & Related papers (2022-11-26T22:13:06Z)
- GPNet: Simplifying Graph Neural Networks via Multi-channel Geometric Polynomials [2.521781613847069]
Graph Neural Networks (GNNs) are promising approaches for circumventing real-world problems on graph-structured data.
These models usually suffer from at least one of four fundamental limitations: over-smoothing, over-fitting, difficulty of training, and a strong homophily assumption.
We identify a set of key designs, including (D1) dilated convolution, (D2) multi-channel learning, (D3) self-attention score, and (D4) sign factor, to boost learning from different graph types.
We theoretically analyze the model and show that it can approximate various graph filters by adjusting the self-attention score.
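As a sketch of the kind of filter being approximated: a multi-channel polynomial of the normalized adjacency, where the per-hop coefficients stand in for the learnable self-attention scores and a sign flip gives a heterophily-friendly channel. All names here are illustrative, not GPNet's implementation:

```python
# Polynomial graph filter H = sum_k theta[k] * A_norm^k @ X  (sketch).
import torch

def polynomial_filter(A_norm, X, theta):
    H, out = X, torch.zeros_like(X)
    for t in theta:             # theta[k] weights the k-hop channel
        out = out + t * H
        H = A_norm @ H          # next power of A_norm: one more hop
    return out

A_norm = torch.rand(50, 50)
A_norm = A_norm / A_norm.sum(1, keepdim=True)    # row-stochastic stand-in
X = torch.randn(50, 8)
low_pass = polynomial_filter(A_norm, X, [1.0, 0.5, 0.25])    # homophily
high_pass = polynomial_filter(A_norm, X, [1.0, -0.5, 0.25])  # sign factor (D4)
```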
arXiv Detail & Related papers (2022-09-30T13:03:57Z)
- Discovering the Representation Bottleneck of Graph Neural Networks from Multi-order Interactions [51.597480162777074]
Graph neural networks (GNNs) rely on the message passing paradigm to propagate node features and build interactions.
Recent works point out that different graph learning tasks require different ranges of interactions between nodes.
We study two common graph construction methods in scientific domains, i.e., K-nearest neighbor (KNN) graphs and fully-connected (FC) graphs.
arXiv Detail & Related papers (2022-05-15T11:38:14Z)
- Multilayer Clustered Graph Learning [66.94201299553336]
We use a contrastive loss as a data fidelity term, in order to properly aggregate the observed layers into a representative graph.
Experiments show that our method leads to a clustered structure with respect to the aggregated layers, and that the learned representative graph can be used for solving clustering problems.
arXiv Detail & Related papers (2020-10-29T09:58:02Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data into the desired structure.
We propose a novel multi-level graph neural network framework that captures interactions at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
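A toy two-level sketch of the multipole idea; random cluster assignments stand in for a real coarsening, and this is not the paper's operator. Short-range interactions are handled on the sparse fine graph, long-range ones through a small coarse level, so the overall cost stays linear in the number of nodes:

```python
# Two-level multipole-style message passing (illustrative sketch).
import torch

n, d, n_coarse = 200, 16, 10
X = torch.randn(n, d)
A = (torch.rand(n, n) < 0.03).float()            # sparse fine-level graph

assign = torch.zeros(n, n_coarse)                # random node -> cluster map
assign[torch.arange(n), torch.randint(n_coarse, (n,))] = 1.0

short = A @ X                                    # fine level: O(#edges)
pooled = assign.T @ X                            # restrict to coarse nodes
attn = torch.softmax(pooled @ pooled.T / d ** 0.5, dim=-1)
long = assign @ (attn @ pooled)                  # dense only at O(n_coarse^2)
H = short + long                                 # combines all ranges
```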
arXiv Detail & Related papers (2020-06-16T21:56:22Z)