ToupleGDD: A Fine-Designed Solution of Influence Maximization by Deep
Reinforcement Learning
- URL: http://arxiv.org/abs/2210.07500v3
- Date: Fri, 28 Apr 2023 19:58:16 GMT
- Title: ToupleGDD: A Fine-Designed Solution of Influence Maximization by Deep
Reinforcement Learning
- Authors: Tiantian Chen, Siwen Yan, Jianxiong Guo, Weili Wu
- Abstract summary: We propose a novel end-to-end DRL framework, ToupleGDD, to address the Influence Maximization (IM) problem.
Our model is trained on several small randomly generated graphs with a small budget, and tested on completely different networks under various large budgets.
- Score: 4.266866385061998
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Aiming at selecting a small subset of nodes with maximum influence on
networks, the Influence Maximization (IM) problem has been studied extensively.
Since computing the influence spread of a given seed set is #P-hard,
state-of-the-art methods, including heuristic and approximation algorithms,
face great difficulties with theoretical guarantees, time efficiency, and
generalization, which prevents them from adapting to large-scale networks and
more complex applications. On the other hand, motivated by the recent
achievements of Deep Reinforcement Learning (DRL) in artificial intelligence
and other fields, many works have focused on exploiting DRL to solve
combinatorial optimization problems. Inspired by this, we propose ToupleGDD, a
novel end-to-end DRL framework for the IM problem, which incorporates three
coupled graph neural networks for network embedding and double deep Q-networks
for parameter learning. Previous efforts to solve the IM problem with DRL
trained their models on subgraphs of the whole network and then tested on the
whole graph, which makes their performance unstable across different networks.
In contrast, our model is trained on several small randomly generated graphs
with a small budget and tested on completely different networks under various
large budgets; it obtains results very close to IMM, outperforms OPIM-C on
several datasets, and shows strong generalization ability. Finally, extensive
experiments on synthetic and realistic datasets demonstrate the effectiveness
and superiority of our model.
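To make the setting concrete, below is a minimal illustrative sketch, not the authors' released code: it estimates influence spread under the Independent Cascade (IC) model by Monte Carlo simulation (exact computation is the #P-hard step the abstract mentions), and greedily builds a seed set from a learned scorer. The diffusion model choice and the `q_network` interface are assumptions; ToupleGDD's actual scorer is the three coupled GNNs feeding double deep Q-networks.

```python
# Hedged sketch of the two computations behind DRL-based Influence Maximization.
# Assumptions: IC diffusion model; `q_network(G, seeds, v)` is a hypothetical
# stand-in for a trained GNN-embedding + DQN scorer.
import random
import networkx as nx

def ic_spread(G: nx.DiGraph, seeds: set, p: float = 0.1, rounds: int = 1000) -> float:
    """Monte Carlo estimate of expected influence spread under the IC model.

    Exact spread computation is #P-hard, which is why simulation or
    sketch-based estimators (as in IMM / OPIM-C) are used in practice.
    """
    total = 0
    for _ in range(rounds):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            u = frontier.pop()
            for v in G.successors(u):
                # Each newly activated node gets one chance to activate
                # each still-inactive out-neighbor with probability p.
                if v not in active and random.random() < p:
                    active.add(v)
                    frontier.append(v)
        total += len(active)
    return total / rounds

def greedy_seed_selection(G: nx.DiGraph, q_network, budget: int) -> set:
    """Build a seed set node by node, taking the argmax Q-value at each step."""
    seeds = set()
    for _ in range(budget):
        best = max((v for v in G.nodes if v not in seeds),
                   key=lambda v: q_network(G, seeds, v))
        seeds.add(best)
    return seeds
```

A trained model replaces the expensive inner simulation at decision time: the Q-network amortizes spread estimation into a single forward pass per candidate node, which is what lets a model trained on small random graphs be deployed on much larger, unseen networks.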
Related papers
- Diffusion Models as Network Optimizers: Explorations and Analysis [71.69869025878856]
Generative diffusion models (GDMs) have emerged as a promising new approach to network optimization.
In this study, we first explore the intrinsic characteristics of generative models.
We provide a concise theoretical and intuitive demonstration of the advantages of generative models over discriminative network optimization.
arXiv Detail & Related papers (2024-11-01T09:05:47Z) - Towards Efficient Pareto Set Approximation via Mixture of Experts Based Model Fusion [53.33473557562837]
Solving multi-objective optimization problems for large deep neural networks is a challenging task due to the complexity of the loss landscape and the expensive computational cost.
We propose a practical and scalable approach to solve this problem via mixture of experts (MoE) based model fusion.
By ensembling the weights of specialized single-task models, the MoE module can effectively capture the trade-offs between multiple objectives.
arXiv Detail & Related papers (2024-06-14T07:16:18Z) - Optimizing cnn-Bigru performance: Mish activation and comparative analysis with Relu [0.0]
Activation functions (AF) are fundamental components within neural networks, enabling them to capture complex patterns and relationships in the data.
This study illuminates the effectiveness of AF in elevating the performance of intrusion detection systems.
arXiv Detail & Related papers (2024-05-30T21:48:56Z) - Differentiable Tree Search Network [14.972768001402898]
Differentiable Tree Search Network (D-TSN) is a novel neural network architecture that significantly strengthens the inductive bias.
D-TSN employs a learned world model to conduct a fully differentiable online search.
We demonstrate that D-TSN outperforms popular model-free and model-based baselines.
arXiv Detail & Related papers (2024-01-22T02:33:38Z) - Finding Influencers in Complex Networks: An Effective Deep Reinforcement
Learning Approach [13.439099770154952]
We propose an effective reinforcement learning model that achieves superior performance over traditional influence-maximization algorithms.
Specifically, we design an end-to-end learning framework that combines graph neural network algorithms as the encoder and reinforcement learning as the decoder, named DREIM.
arXiv Detail & Related papers (2023-09-09T14:19:00Z) - A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical
Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both inference accuracy and mean squared error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z) - Deep Graph Representation Learning and Optimization for Influence
Maximization [10.90744025490539]
Influence Maximization (IM) is formulated as selecting a set of initial users from a social network to maximize the expected number of influenced users.
We propose a novel framework DeepIM to generatively characterize the latent representation of seed sets.
We also design a novel objective function to infer optimal seed sets under flexible node-centrality-based budget constraints.
arXiv Detail & Related papers (2023-05-01T15:45:01Z) - Multiobjective Evolutionary Pruning of Deep Neural Networks with
Transfer Learning for improving their Performance and Robustness [15.29595828816055]
This work proposes MO-EvoPruneDeepTL, a multi-objective evolutionary pruning algorithm.
We use Transfer Learning to adapt the last layers of Deep Neural Networks, by replacing them with sparse layers evolved by a genetic algorithm.
Experiments show that our proposal achieves promising results in all the objectives, and direct relations among them are presented.
arXiv Detail & Related papers (2023-02-20T19:33:38Z) - Personalized Decentralized Multi-Task Learning Over Dynamic
Communication Graphs [59.96266198512243]
We propose a decentralized and federated learning algorithm for tasks that are positively and negatively correlated.
Our algorithm uses gradients to calculate the correlations among tasks automatically, and dynamically adjusts the communication graph to connect mutually beneficial tasks and isolate those that may negatively impact each other.
We conduct experiments on a synthetic Gaussian dataset and a large-scale celebrity attributes (CelebA) dataset.
arXiv Detail & Related papers (2022-12-21T18:58:24Z) - Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization for large-scale data with a deep neural network as the predictive model.
Our method requires far fewer communication rounds than naive parallel training while retaining theoretical guarantees.
Our experiments on several datasets demonstrate its effectiveness and corroborate the theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z) - Fitting the Search Space of Weight-sharing NAS with Graph Convolutional
Networks [100.14670789581811]
We train a graph convolutional network to fit the performance of sampled sub-networks.
With this strategy, we achieve a higher rank correlation coefficient in the selected set of candidates.
arXiv Detail & Related papers (2020-04-17T19:12:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.