FastCover: An Unsupervised Learning Framework for Multi-Hop Influence
Maximization in Social Networks
- URL: http://arxiv.org/abs/2111.00463v1
- Date: Sun, 31 Oct 2021 10:49:21 GMT
- Title: FastCover: An Unsupervised Learning Framework for Multi-Hop Influence
Maximization in Social Networks
- Authors: Runbo Ni, Xueyan Li, Fangqi Li, Xiaofeng Gao, Guihai Chen
- Abstract summary: Finding influential users in social networks is a fundamental problem with many possible useful applications.
In this paper, we reduce the problem of influence maximization (IM) to a budget-constrained d-hop dominating set problem (kdDSP).
We propose a unified machine learning (ML) framework, FastCover, to solve kdDSP by learning an efficient greedy strategy in an unsupervised way.
FastCover determines the entire seed set from the nodes' scores computed with only one forward propagation of the GNN and has a time complexity quasi-linear in the graph size.
- Score: 39.86798194955807
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Finding influential users in social networks is a fundamental problem with
many possible useful applications. Viewing the social network as a graph, the
influence of a set of users can be measured by the number of neighbors located
within a given number of hops in the network, where each hop marks a step of
influence diffusion. In this paper, we reduce the problem of influence
maximization (IM) to a budget-constrained d-hop dominating set problem (kdDSP).
We propose a unified
machine learning (ML) framework, FastCover, to solve kdDSP by learning an
efficient greedy strategy in an unsupervised way. As one critical component of
the framework, we devise a novel graph neural network (GNN) architecture, graph
reversed attention network (GRAT), that captures the diffusion process among
neighbors. Unlike most heuristic algorithms and concurrent ML frameworks for
combinatorial optimization problems, FastCover determines the entire seed set
from the nodes' scores computed with only one forward propagation of the GNN
and has a time complexity quasi-linear in the graph size. Experiments on
synthetic graphs and real-world social networks demonstrate that FastCover
finds solutions of better or comparable quality relative to competing
algorithms while achieving a speedup of over 1000x.
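The objective described above, counting the nodes within d hops of a seed set, and FastCover's one-shot, score-based seed selection can be sketched in Python. The graph, scores, and function names below are illustrative assumptions, not the paper's implementation; FastCover's actual scores come from a forward pass of its GRAT network:

```python
from collections import deque

def d_hop_coverage(adj, seeds, d):
    """Size of the d-hop neighborhood of `seeds` (the kdDSP objective)."""
    dist = {s: 0 for s in seeds}
    q = deque(seeds)
    while q:
        u = q.popleft()
        if dist[u] == d:          # influence stops diffusing after d hops
            continue
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return len(dist)

def select_seeds(scores, k):
    """One-shot selection: take the k highest-scoring nodes.
    FastCover obtains such scores from a single GNN forward pass."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Toy path graph 0 - 1 - 2 - 3 (adjacency lists; illustrative only).
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
scores = {0: 0.2, 1: 0.9, 2: 0.8, 3: 0.1}   # stand-in for GNN outputs
seeds = select_seeds(scores, k=1)            # -> [1]
print(d_hop_coverage(adj, seeds, d=2))       # nodes within 2 hops of node 1 -> 4
```

Because selection is a single sort over precomputed scores followed by one BFS truncated at depth d, the end-to-end cost stays quasi-linear in the graph size, matching the complexity claim in the abstract.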
Related papers
- Towards a General GNN Framework for Combinatorial Optimization [14.257210124854863]
We introduce a novel GNN architecture which leverages a complex filter bank and localized attention mechanisms designed to solve CO problems on graphs.
We show how our method differentiates itself from prior GNN-based CO solvers and how it can be effectively applied to the maximum clique, minimum dominating set, and maximum cut problems.
arXiv Detail & Related papers (2024-05-31T00:02:07Z)
- Decentralized Optimization in Time-Varying Networks with Arbitrary Delays [22.40154714677385]
We consider a decentralized optimization problem for networks affected by communication delays.
Examples of such networks include collaborative machine learning, sensor networks, and multi-agent systems.
To mimic communication delays, we add virtual non-computing nodes to the network, resulting in directed graphs.
arXiv Detail & Related papers (2024-05-29T20:51:38Z)
- Learning Cooperative Beamforming with Edge-Update Empowered Graph Neural Networks [29.23937571816269]
We propose an edge-graph-neural-network (Edge-GNN) to learn the cooperative beamforming on the graph edges.
The proposed Edge-GNN achieves a higher sum rate with much shorter computation time than state-of-the-art approaches.
arXiv Detail & Related papers (2022-11-23T02:05:06Z)
- GraMeR: Graph Meta Reinforcement Learning for Multi-Objective Influence Maximization [1.7311053765541482]
Influence Maximization (IM) is the problem of identifying a subset of nodes, called seed nodes, in a network (graph) that maximizes the spread of influence.
IM has numerous applications such as viral marketing, epidemic control, sensor placement and other network-related tasks.
We formulate a generic IM problem as a Markov decision process that handles both intrinsic and influence activations.
arXiv Detail & Related papers (2022-05-30T03:48:51Z)
- Learning to Solve Combinatorial Graph Partitioning Problems via Efficient Exploration [72.15369769265398]
Experimentally, ECORD achieves a new SOTA for RL algorithms on the Maximum Cut problem.
Compared to the nearest competitor, ECORD reduces the optimality gap by up to 73%.
arXiv Detail & Related papers (2022-05-27T17:13:10Z)
- VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using Vector Quantization [70.8567058758375]
VQ-GNN is a universal framework to scale up any convolution-based GNNs using Vector Quantization (VQ) without compromising the performance.
Our framework avoids the "neighbor explosion" problem of GNNs using quantized representations combined with a low-rank version of the graph convolution matrix.
arXiv Detail & Related papers (2021-10-27T11:48:50Z)
- Scaling Graph Neural Networks with Approximate PageRank [64.92311737049054]
We present the PPRGo model which utilizes an efficient approximation of information diffusion in GNNs.
In addition to being faster, PPRGo is inherently scalable, and can be trivially parallelized for large datasets like those found in industry settings.
We show that training PPRGo and predicting labels for all nodes in this graph takes under 2 minutes on a single machine, far outpacing other baselines on the same graph.
arXiv Detail & Related papers (2020-07-03T09:30:07Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization for large-scale data with a deep neural network model.
Our method requires a much smaller number of communication rounds while retaining theoretical guarantees.
Experiments on several datasets demonstrate the effectiveness of our method and corroborate the theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
- Fitting the Search Space of Weight-sharing NAS with Graph Convolutional Networks [100.14670789581811]
We train a graph convolutional network to fit the performance of sampled sub-networks.
With this strategy, we achieve a higher rank correlation coefficient in the selected set of candidates.
arXiv Detail & Related papers (2020-04-17T19:12:39Z)
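Among the related papers above, PPRGo relies on an efficient approximation of personalized PageRank to model information diffusion in GNNs. A minimal push-style approximation can convey the idea; the function name, teleport probability `alpha`, and tolerance `eps` below are illustrative assumptions, not PPRGo's implementation:

```python
def approx_ppr(adj, source, alpha=0.15, eps=1e-4):
    """Push-based approximation of personalized PageRank from `source`.
    `p` holds settled probability mass; `r` holds residual mass to push."""
    p, r = {}, {source: 1.0}
    while True:
        # find a node whose residual exceeds the tolerance (scaled by degree)
        u = next((v for v, res in r.items()
                  if res > eps * max(len(adj.get(v, ())), 1)), None)
        if u is None:
            break
        res = r.pop(u)
        p[u] = p.get(u, 0.0) + alpha * res   # settle a fraction alpha locally
        nbrs = adj.get(u, ())
        if nbrs:
            push = (1 - alpha) * res / len(nbrs)
            for v in nbrs:                   # spread the rest over neighbors
                r[v] = r.get(v, 0.0) + push
    return p

scores = approx_ppr({0: [1], 1: [0]}, source=0)
```

Each push settles a fraction `alpha` of a node's residual and spreads the remainder over its out-neighbors, so probability mass is conserved up to the tolerance; the total residual shrinks geometrically, which is what makes this kind of approximation fast and, as the PPRGo summary notes, easy to parallelize.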
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.