Memory-based Message Passing: Decoupling the Message for Propagation
from Discrimination
- URL: http://arxiv.org/abs/2202.00423v1
- Date: Tue, 1 Feb 2022 14:15:32 GMT
- Title: Memory-based Message Passing: Decoupling the Message for Propagation
from Discrimination
- Authors: Jie Chen, Weiqi Liu, Jian Pu
- Abstract summary: Message passing is a fundamental procedure for graph neural networks (GNNs).
We propose a Memory-based Message Passing (MMP) method that decouples each node's message into a self-embedding part for discrimination and a memory part for propagation.
MMP is a general technique that can work as an additional layer to improve the performance of traditional GNNs.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Message passing is a fundamental procedure for graph neural networks in the
field of graph representation learning. Based on the homophily assumption, the
current message passing always aggregates features of connected nodes, such as
the graph Laplacian smoothing process. However, real-world graphs tend to be
noisy and/or non-smooth. The homophily assumption does not always hold, leading
to sub-optimal results. A revised message passing method needs to maintain each
node's discriminative ability when aggregating the message from neighbors. To
this end, we propose a Memory-based Message Passing (MMP) method to decouple
the message of each node into a self-embedding part for discrimination and a
memory part for propagation. Furthermore, we develop a control mechanism and a
decoupling regularization to control the ratio of absorbing and excluding the
message in the memory for each node. More importantly, our MMP is a general
technique that can work as an additional layer to improve the performance of
traditional GNNs. Extensive experiments on various datasets with different homophily
ratios demonstrate the effectiveness and robustness of the proposed method.
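The decoupled update described in the abstract can be sketched as follows. This is a minimal NumPy illustration of the idea only: it assumes a simple mean aggregator and a fixed scalar gate `alpha`, whereas the paper's actual control mechanism and decoupling regularization are learned; the function name `mmp_layer` is hypothetical.

```python
import numpy as np

def mmp_layer(h, adj, alpha=0.5):
    """One memory-based message-passing step (illustrative sketch).

    h:     (N, 2d) node messages; the first d dims are a self-embedding
           kept for discrimination, the last d dims are a memory part
           that is the only thing propagated to neighbors.
    adj:   (N, N) binary adjacency matrix.
    alpha: gate controlling how much neighbor memory each node absorbs.
    """
    d = h.shape[1] // 2
    self_part, memory = h[:, :d], h[:, d:]

    # Row-normalized neighbor aggregation of the memory part only.
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    neighbor_mem = (adj @ memory) / deg

    # Gated update: absorb a fraction of the neighbors' memory.
    new_memory = (1 - alpha) * memory + alpha * neighbor_mem

    # The self-embedding is untouched by propagation, so each node's
    # discriminative features survive even under noisy neighborhoods.
    return np.concatenate([self_part, new_memory], axis=1)

# Tiny example: a path graph 0-1-2 with d=1 self dim and d=1 memory dim.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
h = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 2.0]])
out = mmp_layer(h, adj, alpha=0.5)
# self column (out[:, 0]) is unchanged; only the memory column is smoothed
```

Note how this recovers standard Laplacian-style smoothing when the self part has zero width, which is the homophily-assuming baseline the paper revises.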
Related papers
- Partitioning Message Passing for Graph Fraud Detection [57.928658584067556]
Label imbalance and homophily-heterophily mixture are the fundamental problems encountered when applying Graph Neural Networks (GNNs) to Graph Fraud Detection (GFD) tasks.
Existing GNN-based GFD models are designed to augment graph structure to accommodate the inductive bias of GNNs towards homophily.
In our work, we argue that the key to applying GNNs for GFD is not to exclude but to distinguish neighbors with different labels.
arXiv Detail & Related papers (2024-11-16T11:30:53Z) - Towards Dynamic Message Passing on Graphs [104.06474765596687]
We propose a novel dynamic message-passing mechanism for graph neural networks (GNNs)
It projects graph nodes and learnable pseudo nodes into a common space with measurable spatial relations between them.
With nodes moving in the space, their evolving relations facilitate flexible pathway construction for a dynamic message-passing process.
arXiv Detail & Related papers (2024-10-31T07:20:40Z) - Preventing Representational Rank Collapse in MPNNs by Splitting the Computational Graph [9.498398257062641]
We show that operating on multiple directed acyclic graphs always satisfies our condition and propose to obtain these by defining a strict partial ordering of the nodes.
We conduct comprehensive experiments that confirm the benefits of operating on multi-relational graphs to achieve more informative node representations.
arXiv Detail & Related papers (2024-09-17T19:16:03Z) - SF-GNN: Self Filter for Message Lossless Propagation in Deep Graph Neural Network [38.669815079957566]
Graph Neural Networks (GNNs), whose main idea is to encode graph structure information via propagation and aggregation, have developed rapidly.
They have achieved excellent performance in representation learning on multiple types of graphs, such as homogeneous graphs, heterogeneous graphs, and more complex graphs like knowledge graphs.
We propose a new perspective on the phenomenon of performance degradation in deep GNNs.
arXiv Detail & Related papers (2024-07-03T02:40:39Z) - Half-Hop: A graph upsampling approach for slowing down message passing [31.26080679115766]
We introduce a framework for improving learning in message passing neural networks.
Our approach essentially upsamples edges in the original graph by adding "slow nodes" at each edge.
Our method only modifies the input graph, making it plug-and-play and easy to use with existing models.
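The edge-upsampling step this summary describes can be sketched as follows, assuming an edge-list graph representation. The real Half-Hop method also interpolates features for the slow nodes and can insert them probabilistically; this hypothetical helper only shows the structural rewiring.

```python
def half_hop(edges, num_nodes):
    """Upsample a graph by inserting a 'slow node' on every edge
    (structural sketch of the Half-Hop idea).

    edges: list of (u, v) pairs; num_nodes: count of original nodes.
    Returns the rewired edge list and the new node count.
    """
    new_edges = []
    next_id = num_nodes
    for u, v in edges:
        s = next_id            # slow node placed on edge (u, v)
        next_id += 1
        # A message from u to v now takes two hops, slowing propagation.
        new_edges += [(u, s), (s, v)]
    return new_edges, next_id

# Path graph 0-1-2: each edge gains an intermediate slow node.
edges, n = half_hop([(0, 1), (1, 2)], num_nodes=3)
# edges -> [(0, 3), (3, 1), (1, 4), (4, 2)], n -> 5
```

Because only the input graph changes, any existing message-passing model can consume the rewired graph unchanged, which is what makes the approach plug-and-play.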
arXiv Detail & Related papers (2023-08-17T22:24:15Z) - NodeFormer: A Scalable Graph Structure Learning Transformer for Node
Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
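The operator underlying that step is the Gumbel-Softmax relaxation, which makes discrete edge sampling differentiable. Below is a plain NumPy version for illustration; it is the generic relaxation, not NodeFormer's kernelized, linear-complexity variant.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Relaxed categorical sampling: adds Gumbel noise to logits, then
    applies a temperature-scaled softmax, yielding a differentiable
    approximation of a one-hot sample (generic sketch).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    # Gumbel(0, 1) noise via the inverse-CDF trick.
    gumbel = -np.log(-np.log(rng.uniform(1e-9, 1.0, size=logits.shape)))
    y = (logits + gumbel) / tau
    # Numerically stable softmax over the last axis.
    y = np.exp(y - y.max(axis=-1, keepdims=True))
    return y / y.sum(axis=-1, keepdims=True)

# Relaxed weights from one node to three candidate message sources;
# lower tau pushes the output closer to a hard one-hot selection.
weights = gumbel_softmax(np.array([2.0, 0.5, -1.0]), tau=0.5)
```

As tau approaches zero the output concentrates on a single neighbor, recovering discrete pathway sampling while keeping gradients available during training.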
arXiv Detail & Related papers (2023-06-14T09:21:15Z) - Framelet Message Passing [2.479720095773358]
We propose a new message passing scheme based on multiscale framelet transforms, called Framelet Message Passing.
It integrates framelet representation of neighbor nodes from multiple hops away in node message update.
We also propose a continuous message passing using neural ODE solvers.
arXiv Detail & Related papers (2023-02-28T17:56:19Z) - Ordered GNN: Ordering Message Passing to Deal with Heterophily and
Over-smoothing [24.86998128873837]
We propose to order the messages passed into the node representation, with specific blocks of neurons targeted for message passing within specific hops.
Experimental results on an extensive set of datasets show that our model can simultaneously achieve the state-of-the-art in both homophily and heterophily settings.
arXiv Detail & Related papers (2023-02-03T03:38:50Z) - Dynamic Graph Message Passing Networks for Visual Recognition [112.49513303433606]
Modelling long-range dependencies is critical for scene understanding tasks in computer vision.
A fully-connected graph is beneficial for such modelling, but its computational overhead is prohibitive.
We propose a dynamic graph message passing network, that significantly reduces the computational complexity.
arXiv Detail & Related papers (2022-09-20T14:41:37Z) - Rethinking Space-Time Networks with Improved Memory Coverage for
Efficient Video Object Segmentation [68.45737688496654]
We establish correspondences directly between frames without re-encoding the mask features for every object.
With the correspondences, every node in the current query frame is inferred by aggregating features from the past in an associative fashion.
We validated that every memory node now has a chance to contribute, and experimentally showed that such diversified voting is beneficial to both memory efficiency and inference accuracy.
arXiv Detail & Related papers (2021-06-09T16:50:57Z) - Higher-Order Attribute-Enhancing Heterogeneous Graph Neural Networks [67.25782890241496]
We propose a higher-order Attribute-Enhancing Graph Neural Network (HAEGNN) for heterogeneous network representation learning.
HAEGNN simultaneously incorporates meta-paths and meta-graphs for rich, heterogeneous semantics.
It shows superior performance against the state-of-the-art methods in node classification, node clustering, and visualization.
arXiv Detail & Related papers (2021-04-16T04:56:38Z) - Uniting Heterogeneity, Inductiveness, and Efficiency for Graph
Representation Learning [68.97378785686723]
Graph neural networks (GNNs) have greatly advanced the performance of node representation learning on graphs.
A majority of GNNs are designed only for homogeneous graphs, leading to inferior adaptivity to the more informative heterogeneous graphs.
We propose a novel inductive, meta-path-free message passing scheme that packs up heterogeneous node features with their associated edges from both low- and high-order neighbor nodes.
arXiv Detail & Related papers (2021-04-04T23:31:39Z)