Soft-mask: Adaptive Substructure Extractions for Graph Neural Networks
- URL: http://arxiv.org/abs/2206.05499v1
- Date: Sat, 11 Jun 2022 11:04:23 GMT
- Title: Soft-mask: Adaptive Substructure Extractions for Graph Neural Networks
- Authors: Mingqi Yang, Yanming Shen, Heng Qi, Baocai Yin
- Abstract summary: A graph neural network should be able to efficiently extract task-relevant structures and be invariant to irrelevant parts.
In this work, we propose to learn graph representations from a sequence of subgraphs of the original graph to better capture task-relevant substructures or hierarchical structures and skip *noisy* parts.
The soft-mask GNN layer is not limited by a fixed sampling or drop ratio and is therefore more flexible, extracting subgraphs of arbitrary size.
- Score: 40.64326531965043
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For learning graph representations, not all detailed structures within a
graph are relevant to the given graph tasks. Task-relevant structures can be
*localized* or *sparse*: involved only in subgraphs, or characterized by the
interactions of subgraphs (a hierarchical perspective). A graph neural
network should be able to efficiently extract task-relevant structures and be
invariant to irrelevant parts, which is challenging for general message passing
GNNs. In this work, we propose to learn graph representations from a sequence
of subgraphs of the original graph to better capture task-relevant
substructures or hierarchical structures and skip *noisy* parts. To this end,
we design a soft-mask GNN layer that extracts the desired subgraphs through a
mask mechanism. The soft-mask is defined in a continuous space to maintain
differentiability and to characterize the weights of different parts. Compared
with existing subgraph or hierarchical representation learning methods and
graph pooling operations, the soft-mask GNN layer is not limited by a fixed
sampling or drop ratio and is therefore more flexible, extracting subgraphs of
arbitrary size. Extensive experiments on public graph benchmarks show that the
soft-mask mechanism brings performance improvements. It also provides
interpretability: visualizing the mask values in each layer offers insight into
the structures the model has learned.
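The mask mechanism described above lends itself to a compact sketch. Below is a minimal PyTorch illustration, assuming a dense adjacency matrix: a differentiable per-node mask in (0, 1) scales each node's contribution to aggregation, so no fixed sample or drop ratio is needed. The layer structure and all names (`SoftMaskGNNLayer`, `mask_scorer`) are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a soft-mask message-passing layer (illustrative, not the paper's code).
import torch
import torch.nn as nn

class SoftMaskGNNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.mask_scorer = nn.Linear(in_dim, 1)   # continuous mask logits per node
        self.update = nn.Linear(in_dim, out_dim)  # standard feature transform

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) adjacency matrix.
        mask = torch.sigmoid(self.mask_scorer(x))  # (N, 1) soft mask in (0, 1), differentiable
        h = adj @ (mask * x)                       # down-weight "dropped" nodes in aggregation
        return torch.relu(self.update(h)), mask    # return mask for visualization

x, adj = torch.randn(5, 16), torch.eye(5)
h, mask = SoftMaskGNNLayer(16, 32)(x, adj)  # stacking layers yields a sequence of soft subgraphs
```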
Related papers
- A Flexible, Equivariant Framework for Subgraph GNNs via Graph Products and Graph Coarsening [18.688057947275112]
Subgraph Graph Neural Networks (Subgraph GNNs) enhance the expressivity of message-passing GNNs by representing graphs as sets of subgraphs.
Previous approaches suggested processing only subsets of subgraphs, selected either randomly or via learnable sampling.
This paper introduces a new Subgraph GNN framework, built on graph products and graph coarsening, to address the limitations of such sampling-based approaches.
arXiv Detail & Related papers (2024-06-13T16:29:06Z)
- SPGNN: Recognizing Salient Subgraph Patterns via Enhanced Graph Convolution and Pooling [25.555741218526464]
Graph neural networks (GNNs) have revolutionized the field of machine learning on non-Euclidean data such as graphs and networks.
We propose a concatenation-based graph convolution mechanism that injectively updates node representations.
We also design a novel graph pooling module, called WL-SortPool, to learn important subgraph patterns in a deep-learning manner.
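The "concatenation-based" update admits a small sketch: concatenating a node's own representation with a sum over its neighbors keeps the two sources separable, the usual route to an injective update. This is one plausible reading, not SPGNN's exact layer; names are assumptions.

```python
# Generic concatenation-based graph convolution sketch (not SPGNN's exact layer).
import torch
import torch.nn as nn

class ConcatConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU())

    def forward(self, x, adj):
        neigh = adj @ x                                 # sum aggregation preserves multisets
        return self.mlp(torch.cat([x, neigh], dim=-1))  # [self || neighbors] -> update
```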
arXiv Detail & Related papers (2024-04-21T13:11:59Z)
- SSHPool: The Separated Subgraph-based Hierarchical Pooling [47.78745802682822]
We develop a novel local graph pooling method, namely the Separated Subgraph-based Hierarchical Pooling (SSHPool), for graph classification.
Local graph convolution units are applied individually within each subgraph, further compressing each subgraph into a coarsened node.
We develop an end-to-end GNN framework associated with the SSHPool module for graph classification.
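A rough sketch of the separated-subgraph idea under stated assumptions: keep only within-subgraph edges, convolve locally, then compress every subgraph into one coarsened node by mean pooling. `assign` (a node-to-subgraph id vector) and all names are hypothetical.

```python
# Separated-subgraph pooling sketch (hypothetical, not the SSHPool implementation).
import torch

def ssh_pool(x, adj, assign, num_subgraphs):
    same = (assign.unsqueeze(0) == assign.unsqueeze(1)).float()  # 1 iff same subgraph
    h = (adj * same) @ x                       # local convolution over separated structures
    coarse = torch.zeros(num_subgraphs, x.size(1))
    coarse.index_add_(0, assign, h)            # sum node features per subgraph
    counts = torch.bincount(assign, minlength=num_subgraphs).clamp(min=1)
    return coarse / counts.unsqueeze(1)        # one coarsened node per subgraph
```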
arXiv Detail & Related papers (2024-03-24T13:03:35Z)
- Stochastic Subgraph Neighborhood Pooling for Subgraph Classification [2.1270496914042996]
Subgraph Neighborhood Pooling (SSNP) jointly aggregates the subgraph and its neighborhood information without any computationally expensive operations such as labeling tricks.
Our experiments demonstrate that our models outperform current state-of-the-art methods (with a margin of up to 2%) while being up to 3X faster in training.
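A guess at the shape of the joint aggregation, assuming a dense adjacency and a boolean membership mask: read out the subgraph's nodes together with their 1-hop neighbors in one cheap pooling step, with no node-labeling preprocessing. This is only an illustration, not the paper's exact pooling.

```python
# Joint subgraph + neighborhood readout sketch (names and shape are assumptions).
import torch

def ssnp_readout(x, adj, sub_mask):
    # sub_mask: boolean (N,) marking the subgraph's nodes; adj: (N, N).
    hood = (adj[sub_mask].sum(0) > 0) | sub_mask   # subgraph plus its 1-hop neighborhood
    return x[hood].mean(0)                         # single pooled representation
```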
arXiv Detail & Related papers (2023-04-17T18:49:18Z)
- Graph Kernel Neural Networks [53.91024360329517]
We propose to use graph kernels, i.e. kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain.
This allows us to define an entirely structural model that does not require computing the embedding of the input graph.
Our architecture allows plugging in any type of graph kernel and has the added benefit of providing some interpretability.
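For concreteness, here is a toy graph kernel, i.e. an inner product on graphs: the vertex-histogram kernel compares node-label histograms directly, with no learned graph embedding. This is a standard kernel shown only to make the notion concrete, not the paper's architecture.

```python
# Vertex-histogram graph kernel: <hist(G1), hist(G2)> over node labels.
from collections import Counter

def vertex_histogram_kernel(labels_g1, labels_g2):
    h1, h2 = Counter(labels_g1), Counter(labels_g2)
    return sum(c * h2[label] for label, c in h1.items())

print(vertex_histogram_kernel(["C", "C", "O"], ["C", "O", "O"]))  # 2*1 + 1*2 = 4
```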
arXiv Detail & Related papers (2021-12-14T14:48:08Z) - Towards Efficient Scene Understanding via Squeeze Reasoning [71.1139549949694]
We propose a novel framework called Squeeze Reasoning.
Instead of propagating information on the spatial map, we first learn to squeeze the input feature into a channel-wise global vector.
We show that our approach can be modularized as an end-to-end trained block and can be easily plugged into existing networks.
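The squeeze step can be sketched in a few lines, loosely squeeze-and-excitation shaped: collapse the spatial map to one channel-wise global vector, reason over channels, and rescale the features, avoiding pairwise spatial propagation. The reasoning module and all names are assumptions.

```python
# Channel-wise squeeze block sketch (loosely SE-shaped; not the paper's exact module).
import torch
import torch.nn as nn

class SqueezeBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.reason = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())

    def forward(self, x):
        # x: (B, C, H, W) -> squeeze spatial dims away instead of propagating over them.
        v = x.mean(dim=(2, 3))                        # (B, C) channel-wise global vector
        return x * self.reason(v)[:, :, None, None]   # plug-in channel rescaling
```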
arXiv Detail & Related papers (2020-11-06T12:17:01Z) - Improving Graph Neural Network Expressivity via Subgraph Isomorphism
Counting [63.04999833264299]
"Graph Substructure Networks" (GSN) is a topologically-aware message passing scheme based on substructure encoding.
We show that it is strictly more expressive than the Weisfeiler-Leman (WL) graph isomorphism test.
We perform an extensive evaluation on graph classification and regression tasks and obtain state-of-the-art results in diverse real-world settings.
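Substructure encoding is easy to illustrate with triangles: the diagonal of $A^3$ counts closed 3-walks, i.e. twice the number of triangles through each node, and appending these counts gives message passing structural information that the 1-WL test alone cannot recover. A minimal sketch, not GSN's general substructure-counting machinery:

```python
# Triangle-count substructure encoding sketch (one instance of the GSN idea).
import torch

def add_triangle_counts(x, adj):
    # x: (N, F) features; adj: (N, N) dense 0/1 adjacency.
    tri = torch.diagonal(adj @ adj @ adj) / 2.0
    return torch.cat([x, tri.unsqueeze(1)], dim=1)  # structurally enriched features
```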
arXiv Detail & Related papers (2020-06-16T15:30:31Z) - Structural Temporal Graph Neural Networks for Anomaly Detection in
Dynamic Graphs [54.13919050090926]
We propose an end-to-end structural temporal Graph Neural Network model for detecting anomalous edges in dynamic graphs.
In particular, we first extract the $h$-hop enclosing subgraph centered on the target edge and propose a node labeling function to identify the role of each node in the subgraph.
Based on the extracted features, we utilize gated recurrent units (GRUs) to capture the temporal information for anomaly detection.
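The pipeline's skeleton can be sketched as below: extract the $h$-hop enclosing subgraph around a target edge in each snapshot, embed it, and run a GRU over the sequence. The subgraph embedding here is a placeholder mean and the node labeling function is omitted, so this is only the shape of the method, not the paper's model; all names are assumptions.

```python
# Skeleton sketch: h-hop enclosing-subgraph extraction + GRU over snapshots.
import torch
import torch.nn as nn

def h_hop_nodes(adj, u, v, h):
    reach = torch.zeros(adj.size(0), dtype=torch.bool)
    reach[u] = reach[v] = True                   # start from the target edge's endpoints
    for _ in range(h):
        reach = reach | (adj[reach].sum(0) > 0)  # expand the frontier one hop
    return reach

gru = nn.GRU(input_size=16, hidden_size=32, batch_first=True)
snapshots = [torch.randn(10, 16) for _ in range(5)]  # toy node features per snapshot
adjs = [torch.eye(10) for _ in range(5)]             # toy adjacency per snapshot
embs = [x[h_hop_nodes(a, 0, 1, h=2)].mean(0) for x, a in zip(snapshots, adjs)]
_, state = gru(torch.stack(embs).unsqueeze(0))       # temporal summary for edge scoring
```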
arXiv Detail & Related papers (2020-05-15T09:17:08Z)