Maximizing Influence with Graph Neural Networks
- URL: http://arxiv.org/abs/2108.04623v7
- Date: Sat, 14 Oct 2023 12:05:31 GMT
- Title: Maximizing Influence with Graph Neural Networks
- Authors: George Panagopoulos, Nikolaos Tziortziotis, Michalis Vazirgiannis,
Fragkiskos D. Malliaros
- Abstract summary: Glie is a graph neural network that learns to estimate the influence spread of the independent cascade.
Glie relies on a theoretical upper bound that is tightened through supervised training.
We develop a provably submodular influence spread based on Glie's representations to rank nodes while building the seed set adaptively.
- Score: 23.896176168370996
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Finding the seed set that maximizes the influence spread over a network is a
well-known NP-hard problem. Though a greedy algorithm can provide near-optimal
solutions, the subproblem of influence estimation renders the solutions
inefficient. In this work, we propose \textsc{Glie}, a graph neural network
that learns how to estimate the influence spread of the independent cascade.
\textsc{Glie} relies on a theoretical upper bound that is tightened through
supervised training. Experiments indicate that it provides accurate influence
estimation for real graphs up to 10 times larger than the train set.
Subsequently, we incorporate it into two influence maximization techniques. We
first utilize Cost Effective Lazy Forward optimization substituting Monte Carlo
simulations with \textsc{Glie}, surpassing the benchmarks albeit with a
computational overhead. To improve computational efficiency, we develop a
provably submodular influence spread based on \textsc{Glie}'s representations
to rank nodes while building the seed set adaptively. The proposed algorithms
are inductive, meaning they are trained on graphs with fewer than 300 nodes and
up to 5 seeds, and tested on graphs with millions of nodes and up to 200 seeds.
The final method exhibits the most promising combination of time efficiency and
influence quality, outperforming several baselines.
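
To make the two-stage pipeline concrete, the sketch below implements lazy greedy seed selection (CELF, Cost Effective Lazy Forward) with a pluggable influence estimator. A Monte Carlo simulator of the independent cascade serves as the default estimator; in the paper's setting, a trained \textsc{Glie} network would be swapped in at exactly that point. The function names (monte_carlo_ic, celf), the activation probability p, and the toy graph are illustrative assumptions, not the authors' code.

```python
import heapq
import random

def monte_carlo_ic(graph, seeds, p=0.1, runs=1000):
    """Estimate the expected influence spread of `seeds` under the
    independent cascade model via Monte Carlo simulation.
    `graph` maps each node to a list of out-neighbors."""
    total = 0
    for _ in range(runs):
        active = set(seeds)
        frontier = list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.get(u, []):
                    # Each live edge fires independently with probability p.
                    if v not in active and random.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / runs

def celf(graph, k, estimate=monte_carlo_ic):
    """CELF greedy: exploit submodularity to lazily re-evaluate marginal
    gains. `estimate` can be any spread oracle, e.g. a GNN such as GLIE
    in place of Monte Carlo simulation."""
    seeds, spread = [], 0.0
    # Initial pass: marginal gain of every singleton seed set.
    heap = [(-estimate(graph, [v]), v, 0) for v in graph]
    heapq.heapify(heap)
    while len(seeds) < k and heap:
        neg_gain, v, last = heapq.heappop(heap)
        if last == len(seeds):
            # Gain is up to date w.r.t. the current seed set: take it.
            seeds.append(v)
            spread += -neg_gain
        else:
            # Stale gain: recompute lazily and push back.
            gain = estimate(graph, seeds + [v]) - spread
            heapq.heappush(heap, (-gain, v, len(seeds)))
    return seeds, spread

# Toy usage on a small directed graph.
g = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
print(celf(g, k=2, estimate=lambda g_, s: monte_carlo_ic(g_, s, runs=200)))
```

The point of the substitution is that the estimator dominates the running time: one GNN forward pass replaces thousands of cascade simulations per candidate seed, which is what makes a learned estimator attractive despite its approximation error.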
Related papers
- Graph-Dependent Regret Bounds in Multi-Armed Bandits with Interference [18.101059380671582]
We study multi-armed bandits under network interference. This induces an exponentially large action space. We propose a novel algorithm that uses the local graph structure to minimize regret.
arXiv Detail & Related papers (2025-03-10T17:25:24Z)
- The Heterophilic Snowflake Hypothesis: Training and Empowering GNNs for Heterophilic Graphs [59.03660013787925]
We introduce the Heterophily Snowflake Hypothesis and provide an effective solution to guide and facilitate research on heterophilic graphs.
Our observations show that our framework acts as a versatile operator for diverse tasks.
It can be integrated into various GNN frameworks, boosting performance in depth and offering an explainable approach to choosing the optimal network depth.
arXiv Detail & Related papers (2024-06-18T12:16:00Z)
- Unifews: Unified Entry-Wise Sparsification for Efficient Graph Neural Network [10.556366638048384]
Graph Neural Networks (GNNs) have shown promising performance in various graph learning tasks, but at the cost of resource-intensive computations.
Previous studies attempt to reduce the computational budget by leveraging graph-level or network-level sparsification techniques.
We propose Unifews, which unifies the two operations in an entry-wise manner considering individual matrix elements.
arXiv Detail & Related papers (2024-03-20T03:07:30Z)
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- DAG Matters! GFlowNets Enhanced Explainer For Graph Neural Networks [30.19635147123557]
We propose a generative structure, the GFlowNets-based GNN Explainer (GFlowExplainer).
Our GFlowExplainer aims to learn a policy that generates a distribution over subgraphs in which the probability of a subgraph is proportional to its reward.
We conduct extensive experiments on both synthetic and real datasets, and both qualitative and quantitative results show the superiority of our GFlowExplainer.
arXiv Detail & Related papers (2023-03-04T16:15:25Z)
- Influence Maximization (IM) in Complex Networks with Limited Visibility Using Statistical Methods [2.320417845168326]
The influence maximization (IM) problem is known to be NP-complete and has no polynomial-time solution.
Most of the methods proposed to improve this complexity are based on the assumption that the entire graph is visible.
This study extends current methods with link prediction techniques to pseudo-visibility graphs.
arXiv Detail & Related papers (2022-08-28T07:55:54Z)
- SCARA: Scalable Graph Neural Networks with Feature-Oriented Optimization [23.609017952951454]
We propose SCARA, a scalable Graph Neural Network (GNN) with feature-oriented optimization for graph computation.
SCARA efficiently computes graph embedding from node features, and further selects and reuses feature results to reduce overhead.
It completes precomputation on the largest available billion-scale GNN dataset, Papers100M (111M nodes, 1.6B edges), within 100 seconds.
arXiv Detail & Related papers (2022-07-19T10:32:11Z)
- GraMeR: Graph Meta Reinforcement Learning for Multi-Objective Influence Maximization [1.7311053765541482]
Influence Maximization (IM) is the problem of identifying a subset of nodes, called seed nodes, in a network (graph).
IM has numerous applications such as viral marketing, epidemic control, sensor placement and other network-related tasks.
We formulate a generic IM problem as a Markov decision process that handles both intrinsic and influence activations.
arXiv Detail & Related papers (2022-05-30T03:48:51Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Neighbor2Seq: Deep Learning on Massive Graphs by Transforming Neighbors to Sequences [55.329402218608365]
We propose Neighbor2Seq to transform the hierarchical neighborhood of each node into a sequence.
We evaluate our method on a massive graph with more than 111 million nodes and 1.6 billion edges.
Results show that our proposed method is scalable to massive graphs and achieves superior performance across massive and medium-scale graphs.
arXiv Detail & Related papers (2022-02-07T16:38:36Z)
- Fast Graph Attention Networks Using Effective Resistance Based Graph Sparsification [70.50751397870972]
FastGAT is a method to make attention-based GNNs lightweight by using spectral sparsification to generate an optimal pruning of the input graph.
We experimentally evaluate FastGAT on several large real world graph datasets for node classification tasks.
arXiv Detail & Related papers (2020-06-15T22:07:54Z)
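
As a concrete illustration of the idea behind the last entry, the following is a minimal sketch of effective-resistance edge sampling, the spectral sparsification underpinning FastGAT's pruning (in the spirit of Spielman and Srivastava). It is a small-graph illustration that uses a dense pseudoinverse, not the authors' implementation, and all names and parameters are assumptions for the sketch.

```python
import numpy as np

def effective_resistance_sparsify(n, edges, keep, seed=0):
    """Sparsify an undirected graph by sampling edges with probability
    proportional to (weight x effective resistance), then reweighting so
    the sampled graph approximates the original Laplacian.
    `edges` is a list of (u, v, w); returns a dict {(u, v): new_weight}."""
    rng = np.random.default_rng(seed)
    # Build the graph Laplacian L = D - A.
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w
        L[v, v] += w
        L[u, v] -= w
        L[v, u] -= w
    # Effective resistance R(u,v) = (e_u - e_v)^T L^+ (e_u - e_v).
    Lp = np.linalg.pinv(L)
    r = np.array([Lp[u, u] + Lp[v, v] - 2 * Lp[u, v] for u, v, _ in edges])
    w = np.array([w for _, _, w in edges])
    p = (w * r) / (w * r).sum()  # leverage-score sampling distribution
    sparse = {}
    for _ in range(keep):
        i = rng.choice(len(edges), p=p)
        u, v, _ = edges[i]
        # Reweight by 1/(keep * p_i) so edge weights stay unbiased.
        sparse[(u, v)] = sparse.get((u, v), 0.0) + w[i] / (keep * p[i])
    return sparse

# Toy usage: sparsify a weighted 5-node graph down to ~6 sampled edges.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 4, 1.0),
         (4, 0, 1.0), (0, 2, 0.5), (1, 3, 0.5)]
print(effective_resistance_sparsify(5, edges, keep=6))
```

The intended payoff is that the attention layers then operate only over the sampled edge set, which is where the reported speedup comes from.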
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.