Online Competitive Influence Maximization
- URL: http://arxiv.org/abs/2006.13411v4
- Date: Wed, 2 Mar 2022 06:37:24 GMT
- Title: Online Competitive Influence Maximization
- Authors: Jinhang Zuo, Xutong Liu, Carlee Joe-Wong, John C.S. Lui, Wei Chen
- Abstract summary: We introduce a new Online Competitive Influence Maximization (OCIM) problem, where two competing items propagate in the same network and influence probabilities on edges are unknown.
We adopt a combinatorial multi-armed bandit (CMAB) framework for OCIM, but unlike the non-competitive setting, the important monotonicity property (influence spread increases when influence probabilities on edges increase) no longer holds due to the competitive nature of propagation.
- Score: 46.08146352395603
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online influence maximization has attracted much attention as a way to
maximize influence spread through a social network while learning the values of
unknown network parameters. Most previous works focus on single-item diffusion.
In this paper, we introduce a new Online Competitive Influence Maximization
(OCIM) problem, where two competing items (e.g., products, news stories)
propagate in the same network and influence probabilities on edges are unknown.
We adopt a combinatorial multi-armed bandit (CMAB) framework for OCIM, but
unlike the non-competitive setting, the important monotonicity property
(influence spread increases when influence probabilities on edges increase) no
longer holds due to the competitive nature of propagation, which brings a
significant new challenge to the problem. We provide a nontrivial proof showing
that the Triggering Probability Modulated (TPM) condition for CMAB still holds
in OCIM, which is instrumental for our proposed algorithms OCIM-TS and OCIM-OFU
to achieve sublinear Bayesian and frequentist regret, respectively. We also
design an OCIM-ETC algorithm that requires less feedback and easier offline
computation, at the expense of a worse frequentist regret bound. Experimental
evaluations demonstrate the effectiveness of our algorithms.
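The loss of monotonicity described in the abstract can be seen in a toy example (an illustrative sketch, not taken from the paper): in a three-node network where items A and B compete under an independent-cascade-style model, raising the probability of an edge next to B's seed can only *lower* A's expected spread. The node names, the two-edge graph, and the B-wins tie-breaking rule below are all assumptions made for illustration.

```python
import itertools

def expected_spread_A(p_bv: float, q_av: float) -> float:
    """Expected number of A-adopting nodes in a toy competitive diffusion.

    Nodes: a (seed of item A), b (seed of item B), v (undecided).
    Edges: b->v is live with prob p_bv, a->v is live with prob q_av.
    Tie-break: if both seeds reach v, B wins (an assumed rule for
    illustration; the paper's model may differ).
    """
    spread = 0.0
    for e_bv, e_av in itertools.product([0, 1], repeat=2):
        # Probability of this live-edge realization.
        prob = (p_bv if e_bv else 1 - p_bv) * (q_av if e_av else 1 - q_av)
        adopters = 1  # seed a always holds item A
        if e_av and not e_bv:  # v hears only from A, so v adopts A
            adopters += 1
        spread += prob * adopters
    return spread

# Raising the B-side edge probability from 0.1 to 0.9 *decreases* A's spread:
low = expected_spread_A(0.1, 0.5)   # 1 + 0.5 * 0.9 = 1.45
high = expected_spread_A(0.9, 0.5)  # 1 + 0.5 * 0.1 = 1.05
assert high < low
```

Because increasing one edge probability can shrink the objective, standard CMAB analyses that rely on monotonicity do not apply directly, which is why the paper's TPM-condition proof is nontrivial.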
Related papers
- Influence Maximization via Graph Neural Bandits [54.45552721334886]
We set the IM problem in a multi-round diffusion campaign, aiming to maximize the number of distinct users that are influenced.
We propose the framework IM-GNB (Influence Maximization with Graph Neural Bandits), where we provide an estimate of the users' probabilities of being influenced.
arXiv Detail & Related papers (2024-06-18T17:54:33Z)
- Weighted Sum-Rate Maximization With Causal Inference for Latent Interference Estimation [9.569049935824227]
The paper extends the well-known alternating-optimization weighted minimum mean square error (WMMSE) algorithm under a causal framework to tackle WSRM under latent interference.
Numerical results suggest the superiority of the SC-WMMSE over the original in both convergence and objective.
arXiv Detail & Related papers (2022-11-15T17:27:45Z)
- Provably Efficient Reinforcement Learning for Online Adaptive Influence Maximization [53.11458949694947]
We consider an adaptive version of the content-dependent online influence maximization problem, where seed nodes are sequentially activated based on real-time feedback.
Our algorithm maintains a network model estimate and selects seeds adaptively, exploring the social network while optimistically improving the optimal policy.
arXiv Detail & Related papers (2022-06-29T18:17:28Z)
- Distributed Statistical Min-Max Learning in the Presence of Byzantine Agents [34.46660729815201]
We consider a multi-agent min-max learning problem, and focus on the emerging challenge of contending with Byzantine adversarial agents.
Our main contribution is to provide a crisp analysis of the proposed robust extra-gradient algorithm for smooth convex-concave and smooth strongly convex-strongly concave functions.
Our rates are near-optimal, and reveal both the effect of adversarial corruption and the benefit of collaboration among the non-faulty agents.
arXiv Detail & Related papers (2022-04-07T03:36:28Z)
- Low-Latency Federated Learning over Wireless Channels with Differential Privacy [142.5983499872664]
In federated learning (FL), model training is distributed over clients and local models are aggregated by a central server.
In this paper, we aim to minimize FL training delay over wireless channels, constrained by overall training performance as well as each client's differential privacy (DP) requirement.
arXiv Detail & Related papers (2021-06-20T13:51:18Z)
- Contingency-Aware Influence Maximization: A Reinforcement Learning Approach [52.109536198330126]
The influence maximization (IM) problem aims to find a subset of seed nodes in a social network that maximizes the spread of influence.
In this study, we focus on a sub-class of IM problems, where whether the nodes are willing to be the seeds when being invited is uncertain, called contingency-aware IM.
Despite the initial success, a major practical obstacle in promoting the solutions to more communities is the tremendous runtime of the greedy algorithms.
arXiv Detail & Related papers (2021-06-13T16:42:22Z)
- Softmax with Regularization: Better Value Estimation in Multi-Agent Reinforcement Learning [72.28520951105207]
Overestimation in $Q$-learning is an important problem that has been extensively studied in single-agent reinforcement learning.
We propose a novel regularization-based update scheme that penalizes large joint action-values deviating from a baseline.
We show that our method provides a consistent performance improvement on a set of challenging StarCraft II micromanagement tasks.
arXiv Detail & Related papers (2021-03-22T14:18:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.