Contingency-Aware Influence Maximization: A Reinforcement Learning
Approach
- URL: http://arxiv.org/abs/2106.07039v1
- Date: Sun, 13 Jun 2021 16:42:22 GMT
- Title: Contingency-Aware Influence Maximization: A Reinforcement Learning
Approach
- Authors: Haipeng Chen, Wei Qiu, Han-Ching Ou, Bo An, Milind Tambe
- Abstract summary: The influence maximization (IM) problem aims at finding a subset of seed nodes in a social network that maximizes the spread of influence.
In this study, we focus on a sub-class of IM problems, called contingency-aware IM, in which it is uncertain whether a node invited to be a seed will agree to participate.
Despite the initial success, a major practical obstacle in promoting the solutions to more communities is the tremendous runtime of the greedy algorithms.
- Score: 52.109536198330126
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The influence maximization (IM) problem aims at finding a subset of seed nodes in a social network that maximizes the spread of influence. In this study, we focus on a sub-class of IM problems, called contingency-aware IM, in which it is uncertain whether a node invited to be a seed will actually agree to participate. Such contingency-aware IM is critical for applications of non-profit organizations in low-resource communities (e.g., spreading awareness of disease prevention). Despite the initial success, a major practical obstacle to bringing the solutions to more communities is the tremendous runtime of the greedy algorithms and the lack of high-performance computing (HPC) resources for the non-profits in the field: whenever a new social network arrives, the non-profits usually do not have the HPC resources to recompute the solutions. Motivated by this, and inspired by the line of work that uses reinforcement learning (RL) to address combinatorial optimization on graphs, we formalize the problem as a Markov Decision Process (MDP) and use RL to learn an IM policy over historically seen networks that generalizes to unseen networks with negligible runtime at the test phase. To fully exploit the properties of our targeted problem, we propose two technical innovations that improve on existing methods: state abstraction and theoretically grounded reward shaping. Empirical results show that our method achieves influence as high as state-of-the-art methods for contingency-aware IM, while incurring negligible runtime at the test phase.
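To make the formulation concrete, below is a minimal sketch of how contingency-aware IM can be cast as an MDP. It is an illustrative toy, not the authors' implementation: the environment class, the single acceptance probability accept_prob, and the independent cascade diffusion model are assumptions made for the example, and the paper's state abstraction and reward shaping are omitted.

```python
import random
import networkx as nx

def simulate_spread(graph, seeds, p=0.1, trials=200):
    """Monte Carlo estimate of influence spread under the independent
    cascade model: each newly activated node gets one chance to
    activate each neighbor with probability p."""
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            node = frontier.pop()
            for nbr in graph.neighbors(node):
                if nbr not in active and random.random() < p:
                    active.add(nbr)
                    frontier.append(nbr)
        total += len(active)
    return total / trials

class ContingencyAwareIMEnv:
    """Hypothetical MDP for contingency-aware IM: the state is the set
    of nodes that accepted so far, an action invites one more node, and
    each invitee accepts only with probability accept_prob (the
    'contingency'). Assumes the agent proposes not-yet-invited nodes."""
    def __init__(self, graph, budget, accept_prob=0.5):
        self.graph, self.budget, self.q = graph, budget, accept_prob

    def reset(self):
        self.invited, self.accepted = set(), set()
        return frozenset(self.accepted)

    def step(self, node):
        self.invited.add(node)
        if random.random() < self.q:  # the invited node agrees to seed
            self.accepted.add(node)
        done = len(self.invited) >= self.budget
        # Terminal reward only: the realized influence spread. The paper
        # instead adds theoretically grounded intermediate reward shaping.
        reward = simulate_spread(self.graph, self.accepted) if done else 0.0
        return frozenset(self.accepted), reward, done

# e.g.: env = ContingencyAwareIMEnv(nx.karate_club_graph(), budget=5)
```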
Related papers
- Joint Admission Control and Resource Allocation of Virtual Network Embedding via Hierarchical Deep Reinforcement Learning [69.00997996453842]
We propose a deep Reinforcement Learning approach to learn a joint Admission Control and Resource Allocation policy for virtual network embedding.
We show that HRL-ACRA outperforms state-of-the-art baselines in terms of both the acceptance ratio and long-term average revenue.
arXiv Detail & Related papers (2024-06-25T07:42:30Z)
- Influence Maximization via Graph Neural Bandits [54.45552721334886]
We set the IM problem in a multi-round diffusion campaign, aiming to maximize the number of distinct users that are influenced.
We propose the framework IM-GNB (Influence Maximization with Graph Neural Bandits), where we provide an estimate of the users' probabilities of being influenced.
arXiv Detail & Related papers (2024-06-18T17:54:33Z)
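As a hedged illustration of the bandit framing in the IM-GNB entry above, the sketch below runs a plain UCB loop with tabular per-user estimates. IM-GNB itself derives these estimates with graph neural networks, and the influence_oracle callback standing in for observed diffusion feedback is hypothetical.

```python
import math

def ucb_seed_selection(n_users, n_rounds, budget, influence_oracle):
    """Minimal UCB loop for multi-round IM: each round, pick the
    `budget` users with the highest upper confidence bounds on their
    (unknown) influence value, then update estimates from feedback."""
    counts, means = [0] * n_users, [0.0] * n_users
    for t in range(1, n_rounds + 1):
        def ucb(u):
            if counts[u] == 0:
                return float("inf")  # force one exploratory pull per user
            return means[u] + math.sqrt(2 * math.log(t) / counts[u])
        seeds = sorted(range(n_users), key=ucb, reverse=True)[:budget]
        for u in seeds:
            reward = influence_oracle(u)  # spread credited to seed u
            counts[u] += 1
            means[u] += (reward - means[u]) / counts[u]
    return means
```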
- A Survey on Influence Maximization: From an ML-Based Combinatorial Optimization [2.9882027965916413]
Influence Maximization (IM) is a classical optimization problem with wide applications in mobile networks, social computing, and recommendation systems.
The main challenge comes from the NP-hardness of the IM problem and the #P-hardness of estimating the influence spread.
We focus on summarizing the relevant background knowledge, basic principles, common methods, and applied research.
arXiv Detail & Related papers (2022-11-06T10:13:42Z)
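These hardness results are why the greedy algorithm of Kempe et al. (2003) is the usual baseline: it hill-climbs on Monte Carlo spread estimates and, because the spread function is monotone submodular, attains a (1 - 1/e) approximation guarantee. A minimal sketch, reusing simulate_spread from the first example:

```python
def greedy_im(graph, budget, spread_fn):
    """Greedy hill climbing for IM: repeatedly add the node with the
    largest estimated marginal gain. spread_fn(seeds) returns a Monte
    Carlo spread estimate -- the #P-hard part and runtime bottleneck."""
    seeds = set()
    for _ in range(budget):
        base = spread_fn(seeds)
        best_node, best_gain = None, -1.0
        for node in graph.nodes():
            if node in seeds:
                continue
            gain = spread_fn(seeds | {node}) - base
            if gain > best_gain:
                best_node, best_gain = node, gain
        if best_node is None:
            break
        seeds.add(best_node)
    return seeds

# e.g.: greedy_im(g, 5, lambda s: simulate_spread(g, s))
```

The inner loop re-estimates the spread for every remaining node per seed, which is exactly the runtime bottleneck the main paper's RL approach is designed to avoid at test time.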
- Offline Reinforcement Learning with Instrumental Variables in Confounded Markov Decision Processes [93.61202366677526]
We study offline reinforcement learning (RL) in the face of unmeasured confounders.
We propose various policy learning methods with finite-sample suboptimality guarantees for finding the optimal in-class policy.
arXiv Detail & Related papers (2022-09-18T22:03:55Z)
- A Community-Aware Framework for Social Influence Maximization [40.842329580198324]
We consider the Influence Maximization (IM) problem: 'if we can try to convince a subset of individuals in a social network to adopt a new product or innovation, and the goal is to trigger a large cascade of further adoptions, which set of individuals should we target?'
We propose a community-aware divide-and-conquer strategy that involves (i) learning the inherent community structure of the social network, (ii) generating candidate solutions by solving the influence problem for each community, and (iii) selecting the final set of individuals from the candidate solutions using a novel progressive budgeting scheme.
arXiv Detail & Related papers (2022-07-18T20:43:17Z)
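The three steps in the community-aware entry above map naturally onto a short sketch. Louvain community detection (nx.community.louvain_communities, available in recent networkx releases) and a proportional budget split are stand-ins chosen for the example; the paper's progressive budgeting scheme is more involved. This reuses greedy_im and simulate_spread from the earlier sketches.

```python
import networkx as nx

def community_aware_im(graph, budget):
    """Sketch of the divide-and-conquer idea: (i) detect communities,
    (ii) solve IM inside each community, (iii) merge candidates under
    the global budget (proportional split used here for simplicity)."""
    communities = nx.community.louvain_communities(graph)
    candidates = []
    for com in communities:
        sub = graph.subgraph(com)
        k = max(1, round(budget * len(com) / graph.number_of_nodes()))
        candidates.extend(greedy_im(sub, k, lambda s: simulate_spread(sub, s)))
    # Keep the budget-many candidates with the best estimated solo spread.
    candidates.sort(key=lambda v: simulate_spread(graph, {v}), reverse=True)
    return set(candidates[:budget])
```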
- GraMeR: Graph Meta Reinforcement Learning for Multi-Objective Influence Maximization [1.7311053765541482]
Influence Maximization (IM) is the problem of identifying a subset of nodes, called seed nodes, in a network (graph) so as to maximize the spread of influence.
IM has numerous applications such as viral marketing, epidemic control, sensor placement and other network-related tasks.
We formulate a generic IM problem as a Markov decision process that handles both intrinsic and influence activations.
arXiv Detail & Related papers (2022-05-30T03:48:51Z)
- Sequential Information Design: Markov Persuasion Process and Its Efficient Reinforcement Learning [156.5667417159582]
This paper proposes a novel model of sequential information design, namely Markov persuasion processes (MPPs).
Planning in MPPs faces the unique challenge of finding a signaling policy that is simultaneously persuasive to the myopic receivers and induces optimal long-term cumulative utility for the sender.
We design a provably efficient no-regret learning algorithm, the Optimism-Pessimism Principle for Persuasion Process (OP4), which features a novel combination of both optimism and pessimism principles.
arXiv Detail & Related papers (2022-02-22T05:41:43Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for mean-field control (MFC).
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- CLAIM: Curriculum Learning Policy for Influence Maximization in Unknown Social Networks [14.695979686066062]
We propose CLAIM - Curriculum LeArning Policy for Influence Maximization to improve the sample efficiency of RL methods.
We conduct experiments on real-world datasets and show that our approach can outperform the current best approach.
arXiv Detail & Related papers (2021-07-08T04:52:50Z)
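A hedged sketch of the curriculum idea in the CLAIM entry above: order training networks from easy to hard, with graph size as a stand-in difficulty measure (CLAIM defines its curriculum differently). The agent's act/update interface is hypothetical, and the environment is the one from the first sketch.

```python
def curriculum_train(agent, train_graphs, episodes_per_stage=50, budget=5):
    """Generic curriculum loop, not CLAIM's exact schedule: present
    smaller graphs first so early episodes give a denser learning
    signal, then progress to larger, harder networks."""
    for graph in sorted(train_graphs, key=lambda g: g.number_of_nodes()):
        env = ContingencyAwareIMEnv(graph, budget)  # from the first sketch
        for _ in range(episodes_per_stage):
            state, done = env.reset(), False
            while not done:
                action = agent.act(state)           # hypothetical agent API
                next_state, reward, done = env.step(action)
                agent.update(state, action, reward, next_state)
                state = next_state
```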
- Maximizing Social Welfare in a Competitive Diffusion Model [19.18481967353127]
Influence Maximization (IM) has garnered a lot of attention owing to applications such as viral marketing and infection containment.
It aims to select a small number of seed users to adopt an item such that adoption propagates to a large number of users in the network.
Existing works on competitive IM have several limitations.
arXiv Detail & Related papers (2020-12-06T19:09:12Z)