Finding Influencers in Complex Networks: An Effective Deep Reinforcement
Learning Approach
- URL: http://arxiv.org/abs/2309.07153v1
- Date: Sat, 9 Sep 2023 14:19:00 GMT
- Title: Finding Influencers in Complex Networks: An Effective Deep Reinforcement
Learning Approach
- Authors: Changan Liu, Changjun Fan, and Zhongzhi Zhang
- Abstract summary: We propose an effective deep reinforcement learning model that achieves superior performance over the best traditional influence maximization algorithms.
Specifically, we design an end-to-end learning framework, named DREIM, that combines a graph neural network as the encoder and reinforcement learning as the decoder.
- Score: 13.439099770154952
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Maximizing influence in complex networks is a practically important but
computationally challenging task for social network analysis, due to its
NP-hard nature. Most current approximation or heuristic methods either require
tremendous human design effort or achieve an unsatisfactory balance between
effectiveness and efficiency. Recent machine learning attempts focus only on
speed and offer no gains in solution quality. In this paper, unlike previous
attempts, we propose an effective deep reinforcement learning model that
outperforms the best traditional influence maximization algorithms.
Specifically, we design an end-to-end learning framework, named DREIM, that
combines a graph neural network as the encoder and reinforcement learning as
the decoder. Through extensive training on small synthetic graphs, DREIM
outperforms state-of-the-art baseline methods in solution quality on very
large synthetic and real-world networks. We also empirically show its linear
scalability with respect to network size, which demonstrates its superiority
in solving this problem.
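The encoder-decoder split can be made concrete with a minimal, untrained sketch (a hypothetical illustration, not the authors' implementation): a few rounds of neighbor aggregation produce node embeddings, and a learned scoring head acts as a greedy Q-style decoder that selects seed nodes one at a time.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(adj, feats, weights):
    """Toy GNN encoder: a few rounds of mean-neighbor aggregation."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    h = feats
    for W in weights:
        h = np.tanh((adj @ h / deg + h) @ W)  # aggregate neighbors, transform
    return h

def greedy_decode(adj, feats, weights, w_q, k):
    """Toy Q-style decoder: greedily add the highest-scoring node."""
    seeds = []
    for _ in range(k):
        q = encode(adj, feats, weights) @ w_q  # one score per node
        q[seeds] = -np.inf                     # mask already-chosen seeds
        v = int(np.argmax(q))
        seeds.append(v)
        feats[v, 0] = 1.0                      # mark node as selected
    return seeds

# Tiny demo on a random graph; in DREIM the weights would be learned
# end-to-end from rewards on small synthetic graphs, not left random.
n, d, k = 30, 8, 3
adj = (rng.random((n, n)) < 0.1).astype(float)
adj = np.maximum(adj, adj.T)
feats = np.zeros((n, d)); feats[:, 1] = adj.sum(axis=1)  # degree feature
weights = [rng.normal(scale=0.1, size=(d, d)) for _ in range(2)]
w_q = rng.normal(scale=0.1, size=d)
print(greedy_decode(adj, feats, weights, w_q, k))
```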
Related papers
- Beyond Pruning Criteria: The Dominant Role of Fine-Tuning and Adaptive Ratios in Neural Network Robustness [7.742297876120561]
Deep neural networks (DNNs) excel in tasks like image recognition and natural language processing.
Traditional pruning methods compromise the network's ability to withstand subtle perturbations.
This paper challenges the conventional emphasis on weight importance scoring as the primary determinant of a pruned network's performance.
arXiv Detail & Related papers (2024-10-19T18:35:52Z)
- Component-based Sketching for Deep ReLU Nets [55.404661149594375]
We develop a sketching scheme based on deep net components for various tasks.
We transform deep net training into a linear empirical risk minimization problem.
We show that the proposed component-based sketching provides almost optimal rates in approximating saturated functions.
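One loose way to see how training can reduce to a linear problem (an assumption-laden illustration, not the paper's construction): freeze randomly drawn hidden components and fit only a linear head, which is plain least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_relu_features(X, width=256):
    """Frozen random hidden layer; only the linear head gets trained."""
    W = rng.normal(size=(X.shape[1], width)) / np.sqrt(X.shape[1])
    return np.maximum(X @ W, 0.0)

# With the features fixed, fitting the head is linear empirical risk
# minimization, solvable in closed form by least squares.
X = rng.normal(size=(500, 10))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)
Phi = random_relu_features(X)
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(np.mean((Phi @ theta - y) ** 2))  # training MSE of the linear fit
```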
arXiv Detail & Related papers (2024-09-21T15:30:43Z)
- The Simpler The Better: An Entropy-Based Importance Metric To Reduce Neural Networks' Depth [5.869633234882029]
We propose an efficiency strategy that leverages prior knowledge transferred by large models.
We propose a simple but effective method relying on an Entropy-bASed Importance mEtRic (EASIER) to reduce the depth of over-parametrized deep neural networks.
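One plausible reading of an entropy-based importance metric (a hypothetical sketch; the exact metric is defined in the paper): measure how varied a layer's ReLU on/off states are over a batch, and treat low-entropy layers, which behave almost linearly, as candidates for removal.

```python
import numpy as np

def layer_entropy(preacts, eps=1e-12):
    """Mean binary entropy of a layer's ReLU on/off states.

    preacts: (batch, units) pre-activations. A unit that is almost
    always on or always off carries little information, so a layer
    with low mean entropy is a candidate for removal.
    """
    p_on = np.clip((preacts > 0).mean(axis=0), eps, 1 - eps)
    h = -(p_on * np.log2(p_on) + (1 - p_on) * np.log2(1 - p_on))
    return float(h.mean())

# Rank layers by entropy and drop the least informative one, then
# fine-tune the shallower network to recover accuracy.
batch_preacts = {"layer1": np.random.randn(64, 128),
                 "layer2": np.abs(np.random.randn(64, 128))}  # mostly 'on'
print(min(batch_preacts, key=lambda k: layer_entropy(batch_preacts[k])))
```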
arXiv Detail & Related papers (2024-04-27T08:28:25Z)
- Computation-efficient Deep Learning for Computer Vision: A Survey [121.84121397440337]
Deep learning models have reached or even exceeded human-level performance in a range of visual perception tasks.
Deep learning models usually demand significant computational resources, leading to impractical power consumption, latency, or carbon emissions in real-world scenarios.
A new research focus is computationally efficient deep learning, which strives to achieve satisfactory performance while minimizing computational cost at inference time.
arXiv Detail & Related papers (2023-08-27T03:55:28Z)
- Online Network Source Optimization with Graph-Kernel MAB [62.6067511147939]
We propose Grab-UCB, a graph-kernel multi-armed bandit algorithm that learns the optimal source placement in large-scale networks online.
We describe the network processes with an adaptive graph dictionary model, which typically leads to sparse spectral representations.
We derive the performance guarantees that depend on network parameters, which further influence the learning curve of the sequential decision strategy.
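Setting aside the graph-kernel machinery, the sequential core is a UCB-style bandit; here is a generic UCB1 sketch over candidate source nodes (illustrative, not Grab-UCB itself):

```python
import math

def ucb1(n_arms, reward_fn, horizon, c=2.0):
    """Generic UCB1: play the arm with the best optimistic estimate."""
    counts, means = [0] * n_arms, [0.0] * n_arms
    for t in range(1, horizon + 1):
        if t <= n_arms:
            a = t - 1  # play every arm once to initialize estimates
        else:
            a = max(range(n_arms), key=lambda i:
                    means[i] + math.sqrt(c * math.log(t) / counts[i]))
        r = reward_fn(a)  # e.g., observed payoff of placing the source at node a
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]
    return max(range(n_arms), key=lambda i: means[i])
```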
arXiv Detail & Related papers (2023-07-07T15:03:42Z)
- Deep Fusion: Efficient Network Training via Pre-trained Initializations [3.9146761527401424]
We present Deep Fusion, an efficient approach to network training that leverages pre-trained initializations of smaller networks.
Our experiments show how Deep Fusion is a practical and effective approach that not only accelerates the training process but also reduces computational requirements.
We validate our theoretical framework, which guides the optimal use of Deep Fusion, showing that it significantly reduces both training time and resource consumption.
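One natural way to initialize a wider layer from smaller pre-trained ones is a block-diagonal combination (a hypothetical sketch, not necessarily the paper's exact fusion operator):

```python
import numpy as np

def fuse_linear(W_a, W_b):
    """Initialize a wider weight matrix from two pre-trained smaller
    ones: each block keeps its learned weights, while cross-blocks
    start at zero and are learned during subsequent training."""
    rows = W_a.shape[0] + W_b.shape[0]
    cols = W_a.shape[1] + W_b.shape[1]
    W = np.zeros((rows, cols))
    W[:W_a.shape[0], :W_a.shape[1]] = W_a
    W[W_a.shape[0]:, W_a.shape[1]:] = W_b
    return W
```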
arXiv Detail & Related papers (2023-06-20T21:30:54Z)
- ToupleGDD: A Fine-Designed Solution of Influence Maximization by Deep Reinforcement Learning [4.266866385061998]
We propose a novel end-to-end DRL framework, ToupleGDD, to address the Influence Maximization (IM) problem.
Our model is trained on several small randomly generated graphs with a small budget, and tested on completely different networks under various large budgets.
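Both ToupleGDD and DREIM-style agents need a way to score candidate seed sets; the standard approach is Monte-Carlo simulation of the independent cascade model (a generic sketch, with the edge activation probability p as an assumed parameter):

```python
import random

def ic_spread(adj_list, seeds, p=0.1, trials=1000):
    """Monte-Carlo estimate of influence spread under the independent
    cascade model: each newly activated node gets one chance to
    activate each inactive neighbor, succeeding with probability p."""
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj_list[u]:
                    if v not in active and random.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / trials
```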
arXiv Detail & Related papers (2022-10-14T03:56:53Z)
- DDCNet: Deep Dilated Convolutional Neural Network for Dense Prediction [0.0]
A large effective receptive field (ERF) and a high spatial resolution of features within a network are essential for providing high-resolution dense estimates.
We present a systematic approach to designing network architectures that provide a larger receptive field while maintaining higher spatial feature resolution.
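The core trick can be illustrated generically (not DDCNet's exact architecture): stacking 3x3 convolutions with growing dilation enlarges the receptive field roughly exponentially, while stride-1 'same' padding preserves the spatial resolution.

```python
import torch.nn as nn

def dilated_stack(channels, dilations=(1, 2, 4, 8)):
    """Stride-1 dilated 3x3 convolutions: with padding == dilation the
    output keeps the input's spatial size, while each extra layer
    widens the receptive field by 2 * dilation pixels."""
    layers = []
    for d in dilations:
        layers += [nn.Conv2d(channels, channels, kernel_size=3,
                             padding=d, dilation=d),
                   nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)
```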
arXiv Detail & Related papers (2021-07-09T23:15:34Z)
- A Simple Fine-tuning Is All You Need: Towards Robust Deep Learning Via Adversarial Fine-tuning [90.44219200633286]
We propose a simple yet very effective adversarial fine-tuning approach based on a "slow start, fast decay" learning rate scheduling strategy.
Experimental results show that the proposed adversarial fine-tuning approach outperforms the state-of-the-art methods on CIFAR-10, CIFAR-100 and ImageNet datasets.
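The named schedule can be instantiated in many ways; one plausible sketch (the warmup length and decay base here are assumptions, not the paper's values):

```python
def slow_start_fast_decay(step, warmup, total, lr_max, lr_min=0.0):
    """'Slow start, fast decay': ramp the learning rate up linearly
    over a warmup phase, then decay it quickly (here, exponentially
    down to about 1% of lr_max) over the remaining steps."""
    if step < warmup:
        return lr_max * (step + 1) / warmup            # slow start
    frac = (step - warmup) / max(1, total - warmup)
    return lr_min + (lr_max - lr_min) * 0.01 ** frac   # fast decay
```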
arXiv Detail & Related papers (2020-12-25T20:50:15Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges, reflecting the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
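A toy PyTorch version of the idea (illustrative; the module choice and sigmoid gating are assumptions): connect every pair of node modules in a complete DAG and gate each edge with a learnable scalar, so connectivity is optimized by gradient descent alongside the weights.

```python
import torch
import torch.nn as nn

class GatedCompleteGraph(nn.Module):
    """Nodes are small modules; every earlier output feeds every later
    node through a learnable, sigmoid-squashed edge weight."""
    def __init__(self, n_nodes, dim):
        super().__init__()
        self.ops = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_nodes))
        self.edges = nn.Parameter(torch.zeros(n_nodes, n_nodes))  # edge logits

    def forward(self, x):
        outs = [x]
        for j, op in enumerate(self.ops):
            gates = torch.sigmoid(self.edges[: j + 1, j])
            agg = sum(g * h for g, h in zip(gates, outs))  # weighted fan-in
            outs.append(torch.relu(op(agg)))
        return outs[-1]
```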
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Fitting the Search Space of Weight-sharing NAS with Graph Convolutional Networks [100.14670789581811]
We train a graph convolutional network to fit the performance of sampled sub-networks.
With this strategy, we achieve a higher rank correlation coefficient in the selected set of candidates.
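A minimal sketch of such a performance predictor (the encoding is an assumption: adjacency matrix A over the sub-network's ops and one-hot node features X; not the paper's exact model):

```python
import torch
import torch.nn as nn

class PerfGCN(nn.Module):
    """Two-layer GCN that regresses a sampled sub-network's accuracy
    from its architecture graph."""
    def __init__(self, in_dim, hid=64):
        super().__init__()
        self.w1, self.w2 = nn.Linear(in_dim, hid), nn.Linear(hid, hid)
        self.out = nn.Linear(hid, 1)

    def forward(self, A, X):                  # A: (n, n), X: (n, in_dim)
        A_hat = A + torch.eye(A.size(0))      # add self-loops
        A_norm = A_hat / A_hat.sum(1, keepdim=True)  # row-normalize
        h = torch.relu(self.w1(A_norm @ X))
        h = torch.relu(self.w2(A_norm @ h))
        return self.out(h.mean(dim=0))        # graph readout -> scalar score
```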
arXiv Detail & Related papers (2020-04-17T19:12:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.