GRAFENNE: Learning on Graphs with Heterogeneous and Dynamic Feature Sets
- URL: http://arxiv.org/abs/2306.03447v1
- Date: Tue, 6 Jun 2023 07:00:24 GMT
- Title: GRAFENNE: Learning on Graphs with Heterogeneous and Dynamic Feature Sets
- Authors: Shubham Gupta, Sahil Manchanda, Sayan Ranu, Srikanta Bedathur
- Abstract summary: Graph neural networks (GNNs) are built on the assumption of a static set of features characterizing each node in a graph.
In this work, we address these limitations through a novel GNN framework called GRAFENNE.
We prove that GRAFENNE is at least as expressive as any of the existing message-passing GNNs in terms of Weisfeiler-Leman tests.
- Score: 19.71442902979904
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs), in general, are built on the assumption of a
static set of features characterizing each node in a graph. This assumption is
often violated in practice. Existing methods partly address this issue through
feature imputation. However, these techniques (i) assume uniformity of feature
set across nodes, (ii) are transductive by nature, and (iii) fail to work when
features are added or removed over time. In this work, we address these
limitations through a novel GNN framework called GRAFENNE. GRAFENNE performs a
novel allotropic transformation on the original graph, wherein the nodes and
features are decoupled through a bipartite encoding. Through a carefully chosen
message passing framework on the allotropic transformation, we make the model
parameter size independent of the number of features and thereby inductive to
both unseen nodes and features. We prove that GRAFENNE is at least as
expressive as any of the existing message-passing GNNs in terms of
Weisfeiler-Leman tests, and therefore, the additional inductivity to unseen
features does not come at the cost of expressivity. In addition, as
demonstrated over four real-world graphs, GRAFENNE empowers the underlying GNN
with high empirical efficacy and the ability to learn in a continual fashion over
streaming feature sets.
Related papers
- Graph neural networks and non-commuting operators [4.912318087940015]
We develop a limit theory of graphon-tuple neural networks and use it to prove a universal transferability theorem.
Our theoretical results extend well-known transferability theorems for GNNs to the case of several simultaneous graphs.
We derive a training procedure that provably enforces the stability of the resulting model.
arXiv Detail & Related papers (2024-11-06T21:17:14Z)
- Contextualized Messages Boost Graph Representations [1.5178009359320295]
This paper investigates the ability of graph neural networks (GNNs) to process data that may be represented as graphs.
It shows that only a few GNNs are investigated across all levels of capability.
A mathematical discussion on the relationship between SIRGCN and widely used GNNs is laid out to put the contribution into context.
arXiv Detail & Related papers (2024-03-19T08:05:49Z)
- TouchUp-G: Improving Feature Representation through Graph-Centric Finetuning [37.318961625795204]
Graph Neural Networks (GNNs) have become the state-of-the-art approach for many high-impact, real-world graph applications.
For feature-rich graphs, a prevalent practice involves utilizing a pretrained model (PM) directly to generate features.
This practice is suboptimal because the node features extracted from PM are graph-agnostic and prevent GNNs from fully utilizing the potential correlations between the graph structure and node features.
arXiv Detail & Related papers (2023-09-25T05:44:40Z)
- Break the Wall Between Homophily and Heterophily for Graph Representation Learning [25.445073413243925]
Homophily and heterophily are intrinsic properties of graphs that describe whether two linked nodes share similar properties.
This work identifies three graph features, including the ego node feature, the aggregated node feature, and the graph structure feature, that are essential for graph representation learning.
It proposes a new GNN model called OGNN that extracts all three graph features and adaptively fuses them to achieve generalizability across the whole spectrum of homophily.
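The three-view fusion described above can be sketched in a few lines. This is a toy illustration of the idea, not the paper's OGNN: the gate here is a plain softmax over three scores, whereas the real model learns its fusion adaptively; all names and the degree-based structure signal are assumptions made for the example.

```python
import numpy as np

# Toy sketch of fusing the three feature views named above: the ego (own)
# feature, the aggregated neighbour feature, and a structure signal.
# Illustrative only; the real OGNN learns the fusion weights adaptively.

def fuse_views(X, A, gate_scores):
    """X: (n, d) node features; A: (n, n) adjacency; gate_scores: (3,) logits."""
    deg = A.sum(axis=1, keepdims=True)
    ego = X                                    # view 1: the node's own feature
    agg = (A @ X) / np.maximum(deg, 1.0)       # view 2: mean-aggregated neighbours
    struct = np.tile(deg / deg.max(), (1, X.shape[1]))  # view 3: degree-based structure
    w = np.exp(gate_scores) / np.exp(gate_scores).sum() # softmax gate over the views
    return w[0] * ego + w[1] * agg + w[2] * struct

A = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])
X = np.eye(3)
Z = fuse_views(X, A, np.array([0.0, 0.0, 0.0]))  # equal weight on all three views
print(Z.shape)  # (3, 3)
```

Keeping the ego view separate from the aggregated view is what lets such a model cover both ends of the homophily spectrum: on heterophilic graphs the gate can downweight neighbour aggregation rather than being forced to mix dissimilar neighbours in.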
arXiv Detail & Related papers (2022-10-08T19:37:03Z)
- The Exact Class of Graph Functions Generated by Graph Neural Networks [43.25172578943894]
Given a graph function, does there exist a Graph Neural Network (GNN) whose output is identical to the graph function?
In this paper, we fully answer this question and characterize the class of graph problems that can be represented by GNNs.
We show that this condition can be efficiently verified by checking quadratically many constraints.
arXiv Detail & Related papers (2022-02-17T18:54:27Z)
- Graph-adaptive Rectified Linear Unit for Graph Neural Networks [64.92221119723048]
Graph Neural Networks (GNNs) have achieved remarkable success by extending traditional convolution to learning on non-Euclidean data.
We propose Graph-adaptive Rectified Linear Unit (GReLU) which is a new parametric activation function incorporating the neighborhood information in a novel and efficient way.
We conduct comprehensive experiments to show that our plug-and-play GReLU method is efficient and effective given different GNN backbones and various downstream tasks.
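The idea of an activation that incorporates neighborhood information can be sketched as follows. This is a simplification in the spirit of GReLU, not the paper's exact parameterization: here the negative-side slope of a leaky-ReLU is derived per node from an aggregate of its neighbours' states, and the gating weights are an invented stand-in for whatever the real model learns.

```python
import numpy as np

# Illustrative sketch of a neighbourhood-aware parametric activation:
# each node's negative-side slope comes from its neighbours' mean state
# rather than being a fixed constant. Not GReLU's actual formulation.

def graph_adaptive_relu(H, A, W_gate):
    """H: (n, d) hidden states; A: (n, n) adjacency; W_gate: (d,) gating weights."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    neigh = (A @ H) / deg                            # mean of neighbour states
    slope = 1.0 / (1.0 + np.exp(-(neigh @ W_gate)))  # per-node slope in (0, 1)
    slope = slope[:, None]
    return np.where(H > 0, H, slope * H)             # leaky-ReLU, adaptive slope

A = np.array([[0., 1.], [1., 0.]])
H = np.array([[-1.0, 2.0], [0.5, -3.0]])
out = graph_adaptive_relu(H, A, np.zeros(2))  # zero gate -> slope 0.5 everywhere
print(out)  # [[-0.5  2. ] [ 0.5 -1.5]]
```

Because the nonlinearity itself is a function of the graph, it can be dropped into different GNN backbones without changing the message-passing layers, which matches the "plug-and-play" framing in the summary.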
arXiv Detail & Related papers (2022-02-13T10:54:59Z)
- Uniting Heterogeneity, Inductiveness, and Efficiency for Graph Representation Learning [68.97378785686723]
Graph neural networks (GNNs) have greatly advanced the performance of node representation learning on graphs.
A majority of GNNs are designed only for homogeneous graphs, leading to inferior adaptivity to the more informative heterogeneous graphs.
We propose a novel inductive, meta path-free message passing scheme that packs up heterogeneous node features with their associated edges from both low- and high-order neighbor nodes.
arXiv Detail & Related papers (2021-04-04T23:31:39Z)
- My Body is a Cage: the Role of Morphology in Graph-Based Incompatible Control [65.77164390203396]
We present a series of ablations on existing methods that show that morphological information encoded in the graph does not improve their performance.
Motivated by the hypothesis that any benefits GNNs extract from the graph structure are outweighed by difficulties they create for message passing, we also propose Amorpheus.
arXiv Detail & Related papers (2020-10-05T08:37:11Z)
- A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes.
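The denoising view summarized above has a standard concrete form that is easy to verify on a toy graph: minimizing $J(F) = \|F - X\|_F^2 + c\,\mathrm{tr}(F^\top L F)$, where $L$ is the graph Laplacian, trades fidelity to the input signal $X$ against smoothness over edges, and a single gradient step started at $F = X$ gives $F = X - 2c\eta\,LX$, an aggregation-like Laplacian smoothing. The step size and graph below are illustrative choices, not values from the paper.

```python
import numpy as np

# One gradient step on the denoising objective J(F) = ||F - X||^2 + c*tr(F'LF),
# started at F = X, reduces to Laplacian smoothing: F = X - 2*c*eta*L @ X.
# Toy 3-node path graph with a "noisy" signal on the middle node.

A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
L = np.diag(A.sum(axis=1)) - A          # unnormalized graph Laplacian
X = np.array([[1.0], [0.0], [1.0]])     # node signal; middle value disagrees
c, eta = 1.0, 0.1

grad_at_X = 2 * c * (L @ X)             # gradient of the smoothness term at F = X
F = X - eta * grad_at_X                 # one denoising step = neighbour smoothing
print(F.ravel())  # [0.8 0.4 0.8] -- each node pulled toward its neighbours
```

This is the sense in which a GNN aggregation layer "solves" a denoising problem; models like ADA-UGNN vary how strongly the smoothness term is enforced across different nodes.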
arXiv Detail & Related papers (2020-10-05T04:57:18Z)
- CatGCN: Graph Convolutional Networks with Categorical Node Features [99.555850712725]
CatGCN is tailored for graph learning when the node features are categorical.
We train CatGCN in an end-to-end fashion and demonstrate it on semi-supervised node classification.
arXiv Detail & Related papers (2020-09-11T09:25:17Z)
- Permutation-equivariant and Proximity-aware Graph Neural Networks with Stochastic Message Passing [88.30867628592112]
Graph neural networks (GNNs) are emerging machine learning models on graphs.
Permutation-equivariance and proximity-awareness are two important properties highly desirable for GNNs.
We show that existing GNNs, mostly based on the message-passing mechanism, cannot simultaneously preserve the two properties.
In order to preserve node proximities, we augment existing GNNs with stochastic node representations.
arXiv Detail & Related papers (2020-09-05T16:46:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.