Grain: Improving Data Efficiency of Graph Neural Networks via
Diversified Influence Maximization
- URL: http://arxiv.org/abs/2108.00219v1
- Date: Sat, 31 Jul 2021 11:39:00 GMT
- Title: Grain: Improving Data Efficiency of Graph Neural Networks via
Diversified Influence Maximization
- Authors: Wentao Zhang, Zhi Yang, Yexin Wang, Yu Shen, Yang Li, Liang Wang, Bin
Cui
- Abstract summary: Graph Neural Networks (GNNs) go beyond the models that existing data selection methods are designed for.
We present Grain, an efficient framework that opens up a new perspective by connecting data selection in GNNs with social influence maximization.
Empirical studies on public datasets demonstrate that Grain significantly improves both the performance and efficiency of data selection for GNNs.
- Score: 24.25156825467544
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data selection methods, such as active learning and core-set selection, are
useful tools for improving the data efficiency of deep learning models on
large-scale datasets. However, recent deep learning models have moved forward
from independent and identically distributed data to graph-structured data,
such as social networks, e-commerce user-item graphs, and knowledge graphs.
This evolution has led to the emergence of Graph Neural Networks (GNNs) that go
beyond the models that existing data selection methods are designed for. Therefore,
we present Grain, an efficient framework that opens up a new perspective
through connecting data selection in GNNs with social influence maximization.
By exploiting the common patterns of GNNs, Grain introduces a novel feature
propagation concept, a diversified influence maximization objective with novel
influence and diversity functions, and a greedy algorithm with an approximation
guarantee into a unified framework. Empirical studies on public datasets
demonstrate that Grain significantly improves both the performance and
efficiency of data selection (including active learning and core-set selection)
for GNNs. To the best of our knowledge, this is the first attempt to bridge two
largely parallel threads of research, data selection and social influence
maximization, in the setting of GNNs, paving new ways for improving data
efficiency.
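To make the selection loop concrete, here is a minimal sketch of a greedy, diversity-aware selection routine in the spirit of Grain. It is not the authors' implementation: the k-hop reachability proxy for influence, the propagated-feature distance used as the diversity bonus, and the weight lam are illustrative assumptions; only the coverage term carries the standard (1 - 1/e) greedy guarantee for monotone submodular objectives.

```python
import torch

def k_hop_reach(adj: torch.Tensor, k: int = 2) -> torch.Tensor:
    # Boolean reachability within k hops (self-loops added): a crude proxy
    # for "node i influences node j" under k rounds of feature propagation.
    a = (adj + torch.eye(adj.size(0))) > 0
    reach = a.clone()
    for _ in range(k - 1):
        reach = (reach.float() @ a.float()) > 0
    return reach

def greedy_diversified_selection(adj, feats, budget, k=2, lam=0.5):
    # Greedy selection: at each step pick the node with the largest marginal
    # coverage gain plus a diversity bonus in the propagated feature space.
    n = adj.size(0)
    reach = k_hop_reach(adj, k).float()
    a = adj + torch.eye(n)
    a = a / a.sum(1, keepdim=True)
    h = feats
    for _ in range(k):                   # SGC-style feature propagation, no weights
        h = a @ h
    covered = torch.zeros(n)
    selected = []
    for _ in range(budget):
        gain = (reach * (1 - covered)).sum(1)        # newly influenced nodes
        if selected:
            d = torch.cdist(h, h[selected]).min(1).values
            gain = gain + lam * d                    # reward distance from chosen set
        gain[selected] = float("-inf")               # never re-pick a node
        best = int(gain.argmax())
        selected.append(best)
        covered = torch.clamp(covered + reach[best], max=1)
    return selected

# Example: pick 5 training nodes from a random 100-node graph.
adj = (torch.rand(100, 100) < 0.05).float()
print(greedy_diversified_selection(adj, torch.randn(100, 16), budget=5))
```

The coverage term alone is monotone submodular, which is what licenses the greedy approximation guarantee; the diversity bonus trades a little of that guarantee for better spread over the feature space.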
Related papers
- Neighborhood-Order Learning Graph Attention Network for Fake News Detection [2.34863357088666]
We propose a novel model called Neighborhood-Order Learning Graph Attention Network (NOL-GAT) for fake news detection.
This model allows each node in each layer to independently learn its optimal neighborhood order.
To evaluate the model's performance, experiments are conducted on various fake news datasets.
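The core idea here, letting each node decide how many hops to aggregate over, can be prototyped compactly. The sketch below is a simplified stand-in, not the NOL-GAT layer itself: it mixes K mean-aggregated hop views with per-node softmax weights, whereas the paper works with graph attention layers; the module and parameter names are invented for illustration.

```python
import torch
import torch.nn as nn

class PerNodeHopMixer(nn.Module):
    # Each node learns a soft distribution over neighborhood orders 1..K and
    # mixes the corresponding multi-hop aggregations of its features.
    def __init__(self, in_dim: int, out_dim: int, max_hops: int = 3):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        self.order_logits = nn.Linear(in_dim, max_hops)   # per-node order scores
        self.max_hops = max_hops

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        a = adj + torch.eye(adj.size(0))
        a = a / a.sum(1, keepdim=True)                    # row-normalized adjacency
        hop_views, h = [], x
        for _ in range(self.max_hops):
            h = a @ h                                     # one more hop of aggregation
            hop_views.append(self.lin(h))
        w = torch.softmax(self.order_logits(x), dim=1)    # (n, K) hop weights per node
        return sum(w[:, k:k + 1] * hop_views[k] for k in range(self.max_hops))
```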
arXiv Detail & Related papers (2025-02-10T18:51:57Z)
- TANGNN: a Concise, Scalable and Effective Graph Neural Networks with Top-m Attention Mechanism for Graph Representation Learning [7.879217146851148]
We propose an innovative Graph Neural Network (GNN) architecture that integrates a Top-m attention mechanism aggregation component and a neighborhood aggregation component.
To assess the effectiveness of our proposed model, we have applied it to citation sentiment prediction, a novel task previously unexplored in the GNN field.
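A Top-m attention aggregation can be sketched in a few lines: score all neighbors, keep only the m best per node, and softmax-weight their features. This is a plausible reading of the mechanism, not TANGNN's actual layer; the scaled dot-product scoring and the class name are assumptions.

```python
import torch
import torch.nn as nn

class TopMAttention(nn.Module):
    # Aggregate only each node's m highest-scoring neighbors.
    def __init__(self, dim: int, m: int = 5):
        super().__init__()
        self.q, self.k = nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.m = m

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        scores = (self.q(x) @ self.k(x).T) / x.size(1) ** 0.5
        scores = scores.masked_fill(adj == 0, float("-inf"))   # neighbors only
        top, idx = scores.topk(min(self.m, x.size(0)), dim=1)  # m best per node
        w = torch.nan_to_num(torch.softmax(top, dim=1))        # isolated nodes -> 0
        return torch.einsum("nm,nmd->nd", w, x[idx])
```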
arXiv Detail & Related papers (2024-11-23T05:31:25Z)
- Loss-aware Curriculum Learning for Heterogeneous Graph Neural Networks [30.333265803394998]
This paper investigates the application of curriculum learning techniques to improve the performance of Heterogeneous Graph Neural Networks (HGNNs).
To better assess the quality of the data, we design a loss-aware training schedule, named LTS, that measures the quality of every node in the data.
Our findings demonstrate the efficacy of curriculum learning in enhancing HGNNs' capabilities for analyzing complex graph-structured data.
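A loss-aware schedule of this kind can be sketched as follows: score every training node by its current loss and train on the easiest fraction, growing that fraction over epochs. The 30%-to-100% linear pacing and the function signature are illustrative assumptions, not the exact LTS schedule.

```python
import torch
import torch.nn.functional as F

def loss_aware_subset(model, x, adj, y, train_idx, epoch, total_epochs):
    # Rank training nodes by current per-node loss and keep the easiest
    # fraction, which grows linearly from 30% to 100% over training.
    with torch.no_grad():
        logits = model(x, adj)
        node_loss = F.cross_entropy(logits[train_idx], y[train_idx],
                                    reduction="none")
    frac = 0.3 + 0.7 * epoch / max(total_epochs - 1, 1)
    keep = max(1, int(frac * len(train_idx)))
    easiest = node_loss.argsort()[:keep]     # lowest-loss (easiest) nodes first
    return train_idx[easiest]                # subset to train on this epoch
```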
arXiv Detail & Related papers (2024-02-29T05:44:41Z)
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
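The pre-computation idea can be illustrated with a short sketch: propagate features once per relation, then compress each hop's result with a fixed Gaussian random projection so the cached tensors have a regular shape. This is a simplified, homogeneous-feature rendition of the random-projection idea; RpHGNN's actual hybrid scheme differs in its details.

```python
import torch

def random_projection_precompute(rel_adjs, feats, out_dim=64, hops=2):
    # One-time, parameter-free pre-computation: propagate along each relation
    # and sketch every hop with a fixed random projection to width out_dim.
    cached = []
    for adj in rel_adjs:                               # one adjacency per edge type
        a = adj / adj.sum(1, keepdim=True).clamp(min=1)
        h = feats
        for _ in range(hops):
            h = a @ h                                  # one message-passing step
            proj = torch.randn(h.size(1), out_dim) / h.size(1) ** 0.5
            cached.append(h @ proj)                    # fixed-width sketch, cached
    return torch.cat(cached, dim=1)                    # regular tensor for an MLP
```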
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
- Ensemble Learning for Graph Neural Networks [28.3650473174488]
Graph Neural Networks (GNNs) have shown success in various fields for learning from graph-structured data.
This paper investigates the application of ensemble learning techniques to improve the performance and robustness of GNNs.
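For node classification, the simplest instance of this idea is soft voting, sketched below under the assumption that each model maps (features, adjacency) to class logits; the paper explores richer ensembling strategies.

```python
import torch

def ensemble_predict(models, x, adj):
    # Soft-voting ensemble: average class probabilities across independently
    # trained GNNs, then take the consensus label per node.
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x, adj), dim=1) for m in models])
    return probs.mean(0).argmax(1)
```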
arXiv Detail & Related papers (2023-10-22T03:55:13Z)
- Challenging the Myth of Graph Collaborative Filtering: a Reasoned and Reproducibility-driven Analysis [50.972595036856035]
We present code that successfully replicates results from six popular and recent graph recommendation models.
We compare these graph models with traditional collaborative filtering models that historically performed well in offline evaluations.
By investigating the information flow from users' neighborhoods, we aim to identify which models are influenced by intrinsic features in the dataset structure.
arXiv Detail & Related papers (2023-08-01T09:31:44Z)
- Learning Strong Graph Neural Networks with Weak Information [64.64996100343602]
We develop a principled approach to the problem of graph learning with weak information (GLWI).
We propose D$^2$PT, a dual-channel GNN framework that performs long-range information propagation not only on the input graph with incomplete structure but also on a global graph that encodes global semantic similarities.
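The dual-channel idea can be sketched as two APPNP-style diffusions, one over the given (possibly incomplete) adjacency and one over a kNN graph built from feature similarity as the global channel. Averaging the two channels and the values of k and alpha are illustrative choices here, not D$^2$PT's exact design.

```python
import torch
import torch.nn.functional as F

def dual_channel_propagate(adj, feats, k=10, steps=5, alpha=0.1):
    # Channel 1: the input graph. Channel 2: a kNN graph over feature
    # similarity, standing in for global semantic structure.
    n = feats.size(0)
    sim = F.normalize(feats, dim=1)
    sim = sim @ sim.T
    knn_idx = sim.topk(k + 1, dim=1).indices[:, 1:]       # drop self-match
    knn = torch.zeros(n, n).scatter_(1, knn_idx, 1.0)
    outs = []
    for graph in (adj, knn):
        a = graph + torch.eye(n)
        a = a / a.sum(1, keepdim=True)
        h = feats
        for _ in range(steps):                            # APPNP-style diffusion
            h = (1 - alpha) * (a @ h) + alpha * feats
        outs.append(h)
    return 0.5 * (outs[0] + outs[1])                      # merge the two channels
```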
arXiv Detail & Related papers (2023-05-29T04:51:09Z)
- GIF: A General Graph Unlearning Strategy via Influence Function [63.52038638220563]
Graph Influence Function (GIF) is a model-agnostic unlearning method that can efficiently and accurately estimate parameter changes in response to an $\epsilon$-mass perturbation in deleted data.
We conduct extensive experiments on four representative GNN models and three benchmark datasets to justify GIF's superiority in terms of unlearning efficacy, model utility, and unlearning efficiency.
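The general influence-function recipe behind such methods can be sketched as follows: the parameter change from deleting data is approximated by $H^{-1} g$, where $H$ is the Hessian of the training loss and $g$ the gradient of the deleted points' loss, solved here with a damped truncated Neumann series. This is the classical estimator, not GIF's graph-aware correction; the damping and iteration count are illustrative.

```python
import torch

def influence_update(loss_full, loss_deleted, params, damping=0.01, iters=100):
    # Approximately solve (H + damping*I) v = g with a truncated Neumann
    # series: v <- g + (I - H - damping*I) v.  Requires loss_full built with
    # create_graph so Hessian-vector products are available.
    params = list(params)
    g = torch.autograd.grad(loss_deleted, params, retain_graph=True)
    g = torch.cat([t.reshape(-1) for t in g])
    grads = torch.autograd.grad(loss_full, params, create_graph=True)
    flat = torch.cat([t.reshape(-1) for t in grads])
    v = g.clone()
    for _ in range(iters):
        hv = torch.autograd.grad(flat @ v, params, retain_graph=True)  # H v
        hv = torch.cat([t.reshape(-1) for t in hv])
        v = g + (1.0 - damping) * v - hv
    return v          # flattened parameter change; reshape and add to weights
```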
arXiv Detail & Related papers (2023-04-06T03:02:54Z)
- Revisiting Embeddings for Graph Neural Networks [0.0]
We explore different embedding extraction techniques for both images and texts.
We find that the choice of embedding biases the performance of different GNN architectures.
We propose Graph-connected Network (GraNet) layers which use GNN message passing within large models to allow neighborhood aggregation.
arXiv Detail & Related papers (2022-09-19T20:37:55Z)
- Deep Reinforcement Learning of Graph Matching [63.469961545293756]
Graph matching (GM) under node and pairwise constraints has been a building block in areas from optimization to computer vision.
We present a reinforcement learning solver for GM, i.e., RGM, that seeks the node correspondence between pairwise graphs.
Our method differs from previous deep graph matching models in the sense that they focus on front-end feature extraction and affinity function learning.
arXiv Detail & Related papers (2020-12-16T13:48:48Z)
- Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data, which universally works in node classification, link prediction, and graph classification tasks.
arXiv Detail & Related papers (2020-10-19T21:51:47Z)
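FLAG's inner loop is easy to sketch: keep a perturbation on the node features, take a few signed gradient-ascent steps on it, and accumulate the averaged loss gradients into the model. The sketch below assumes a model(x, adj) signature and standard cross-entropy node classification; the step size and step count are illustrative.

```python
import torch
import torch.nn.functional as F

def flag_training_step(model, x, adj, y, train_idx, optimizer,
                       step_size=1e-3, ascent_steps=3):
    # Adversarial feature augmentation: ascend on a perturbation of the node
    # features while accumulating averaged loss gradients into the model.
    optimizer.zero_grad()
    perturb = torch.zeros_like(x).uniform_(-step_size, step_size)
    perturb.requires_grad_()
    for step in range(ascent_steps):
        loss = F.cross_entropy(model(x + perturb, adj)[train_idx], y[train_idx])
        (loss / ascent_steps).backward()       # model grads accumulate each step
        if step < ascent_steps - 1:
            with torch.no_grad():              # signed gradient-ascent step
                perturb += step_size * perturb.grad.sign()
                perturb.grad.zero_()
    optimizer.step()                           # update on the averaged gradient
```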