Deep Graph Representation Learning and Optimization for Influence
Maximization
- URL: http://arxiv.org/abs/2305.02200v2
- Date: Sat, 6 May 2023 15:02:48 GMT
- Title: Deep Graph Representation Learning and Optimization for Influence
Maximization
- Authors: Chen Ling, Junji Jiang, Junxiang Wang, My Thai, Lukas Xue, James Song,
Meikang Qiu, Liang Zhao
- Abstract summary: Influence maximization (IM) is formulated as selecting a set of initial users from a social network to maximize the expected number of influenced users.
We propose a novel framework DeepIM to generatively characterize the latent representation of seed sets.
We also design a novel objective function to infer optimal seed sets under flexible node-centrality-based budget constraints.
- Score: 10.90744025490539
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Influence maximization (IM) is formulated as selecting a set of initial users
from a social network to maximize the expected number of influenced users.
Researchers have made great progress in designing various traditional methods,
and their theoretical design and performance gain are close to a limit. In the
past few years, learning-based IM methods have emerged to achieve stronger
generalization ability to unknown graphs than traditional ones. However, the
development of learning-based IM methods is still limited by fundamental
obstacles, including 1) the difficulty of effectively solving the objective
function; 2) the difficulty of characterizing the diversified underlying
diffusion patterns; and 3) the difficulty of adapting the solution under
various node-centrality-constrained IM variants. To cope with the above
challenges, we design a novel framework DeepIM to generatively characterize the
latent representation of seed sets, and we propose to learn the diversified
information diffusion pattern in a data-driven and end-to-end manner. Finally,
we design a novel objective function to infer optimal seed sets under flexible
node-centrality-based budget constraints. Extensive analyses are conducted over
both synthetic and real-world datasets to demonstrate the overall performance
of DeepIM. The code and data are available at:
https://github.com/triplej0079/DeepIM.
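The IM formulation above (select an initial seed set to maximize the expected number of influenced users) can be illustrated with the classical greedy baseline under the Independent Cascade model. This is a minimal sketch of the problem setting, not the DeepIM method itself; the adjacency-list graph, activation probability `p`, and Monte Carlo run count are illustrative assumptions.

```python
import random

def independent_cascade(graph, seeds, p=0.1, rng=None):
    """Simulate one Independent Cascade run; return the set of activated nodes."""
    rng = rng or random.Random()
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                # Each newly active node gets one chance to activate each neighbor.
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def greedy_im(graph, k, p=0.1, runs=200, seed=0):
    """Greedily pick k seeds, estimating expected spread by Monte Carlo simulation."""
    rng = random.Random(seed)
    seeds = set()
    for _ in range(k):
        best, best_spread = None, -1.0
        for cand in graph:
            if cand in seeds:
                continue
            spread = sum(
                len(independent_cascade(graph, seeds | {cand}, p, rng))
                for _ in range(runs)
            ) / runs
            if spread > best_spread:
                best, best_spread = cand, spread
        seeds.add(best)
    return seeds
```

The greedy algorithm gives a (1 - 1/e) approximation for this submodular objective but needs many simulations per candidate, which is the scalability bottleneck that learning-based methods such as DeepIM aim to avoid.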
Related papers
- Influence Maximization via Graph Neural Bandits [54.45552721334886]
We set the IM problem in a multi-round diffusion campaign, aiming to maximize the number of distinct users that are influenced.
We propose the framework IM-GNB (Influence Maximization with Graph Neural Bandits), where we provide an estimate of the users' probabilities of being influenced.
arXiv Detail & Related papers (2024-06-18T17:54:33Z)
- Intuition-aware Mixture-of-Rank-1-Experts for Parameter Efficient Finetuning [50.73666458313015]
Large Language Models (LLMs) have demonstrated significant potential in performing multiple tasks in multimedia applications.
MoE has emerged as a promising solution, with its sparse architecture enabling effective task decoupling.
Intuition-MoR1E achieves superior efficiency and 2.15% overall accuracy improvement across 14 public datasets.
arXiv Detail & Related papers (2024-04-13T12:14:58Z)
- Many-Objective Evolutionary Influence Maximization: Balancing Spread, Budget, Fairness, and Time [3.195234044113248]
The Influence Maximization (IM) problem seeks to discover the set of nodes in a graph that maximizes information spread.
This problem is known to be NP-hard, and it is usually studied by maximizing the influence (spread) and, optionally, optimizing a second objective.
In this work, we propose a first case study where several IM-specific objective functions, namely budget, fairness, communities, and time, are optimized on top of influence maximization and minimization of the seed set size.
arXiv Detail & Related papers (2024-03-27T16:54:45Z)
- Diffusion-Based Neural Network Weights Generation [80.89706112736353]
D2NWG is a diffusion-based neural network weights generation technique that efficiently produces high-performing weights for transfer learning.
Our method extends generative hyper-representation learning to recast the latent diffusion paradigm for neural network weights generation.
Our approach is scalable to large architectures such as large language models (LLMs), overcoming the limitations of current parameter generation techniques.
arXiv Detail & Related papers (2024-02-28T08:34:23Z)
- Predicting Infant Brain Connectivity with Federated Multi-Trajectory GNNs using Scarce Data [54.55126643084341]
Existing deep learning solutions suffer from three major limitations.
We introduce FedGmTE-Net++, a federated graph-based multi-trajectory evolution network.
Using the power of federation, we aggregate local learnings among diverse hospitals with limited datasets.
arXiv Detail & Related papers (2024-01-01T10:20:01Z)
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
- ToupleGDD: A Fine-Designed Solution of Influence Maximization by Deep Reinforcement Learning [4.266866385061998]
We propose a novel end-to-end DRL framework, ToupleGDD, to address the Influence Maximization (IM) problem.
Our model is trained on several small randomly generated graphs with a small budget, and tested on completely different networks under various large budgets.
arXiv Detail & Related papers (2022-10-14T03:56:53Z)
- GraMeR: Graph Meta Reinforcement Learning for Multi-Objective Influence Maximization [1.7311053765541482]
Influence Maximization (IM) is the problem of identifying a subset of nodes, called the seed nodes, in a network (graph).
IM has numerous applications such as viral marketing, epidemic control, sensor placement and other network-related tasks.
We develop a generic IM problem as a Markov decision process that handles both intrinsic and influence activations.
arXiv Detail & Related papers (2022-05-30T03:48:51Z)
- Influence Estimation and Maximization via Neural Mean-Field Dynamics [60.91291234832546]
We propose a novel learning framework using neural mean-field (NMF) dynamics for inference and estimation problems.
Our framework can simultaneously learn the structure of the diffusion network and the evolution of node infection probabilities.
arXiv Detail & Related papers (2021-06-03T00:02:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.