Fractional Budget Allocation for Influence Maximization under General Marketing Strategies
- URL: http://arxiv.org/abs/2407.05669v1
- Date: Mon, 8 Jul 2024 07:09:11 GMT
- Title: Fractional Budget Allocation for Influence Maximization under General Marketing Strategies
- Authors: Akhil Bhimaraju, Eliot W. Robson, Lav R. Varshney, Abhishek K. Umrawal
- Abstract summary: We consider the fractional influence problem, i.e., identifying users on a social network to be incentivized with potentially partial discounts.
The larger the discount given to a user, the higher the likelihood of their activation; an activated user then attempts to activate its neighboring users.
Our goal is to devise efficient algorithms that assign initial discounts to the network's users to maximize the total number of activated users.
- Score: 15.985300892420137
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We consider the fractional influence maximization problem, i.e., identifying users on a social network to be incentivized with potentially partial discounts to maximize the influence on the network. The larger the discount given to a user, the higher the likelihood of their activation (adopting a new product or innovation); an activated user then attempts to activate its neighboring users, causing a cascade of influence through the network. Our goal is to devise efficient algorithms that assign initial discounts to the network's users to maximize the total number of activated users at the end of the cascade, subject to a constraint on the total sum of discounts given. In general, the activation likelihood could be any non-decreasing function of the discount, whereas our focus lies on the case when the activation likelihood is an affine function of the discount, potentially varying across different users. As this problem is shown to be NP-hard, we propose and analyze an efficient (1-1/e)-approximation algorithm. Furthermore, we run experiments on real-world social networks to show the performance and scalability of our method.
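The budgeted greedy strategy described in the abstract can be sketched in a few lines: discretize the budget into small increments and repeatedly give the next increment to the user with the largest estimated marginal gain in expected spread, where each user's activation probability is affine in its discount (here simply p_u(x) = slope_u · x, capped at 1). This is a minimal illustrative sketch, not the paper's actual algorithm: the function names, the Monte Carlo spread estimator, and the fixed discretization step are all assumptions.

```python
import random
from collections import defaultdict

def simulate_cascade(graph, seed_probs, rng, trials=200):
    """Estimate the expected number of activated users: each user u is
    initially activated with probability seed_probs[u], then influence
    spreads along edges (v, p) independent-cascade style."""
    nodes = list(graph)
    total = 0
    for _ in range(trials):
        active = {u for u in nodes if rng.random() < seed_probs.get(u, 0.0)}
        frontier = list(active)
        while frontier:
            u = frontier.pop()
            for v, p in graph[u]:
                if v not in active and rng.random() < p:
                    active.add(v)
                    frontier.append(v)
        total += len(active)
    return total / trials

def greedy_fractional_allocation(graph, budget, slope, step=0.25,
                                 trials=200, seed=0):
    """Greedily assign discounts in increments of `step`: each round, the
    next increment goes to the user whose estimated marginal gain in
    expected spread is largest. Activation probability is affine in the
    discount: p_u(x) = slope[u] * x, capped at 1."""
    discounts = defaultdict(float)
    remaining = budget
    while remaining > 1e-9:
        inc = min(step, remaining)
        base_probs = {u: min(1.0, slope[u] * discounts[u]) for u in graph}
        # Common random numbers (a fresh RNG with the same seed per
        # estimate) reduce noise when comparing candidates.
        base = simulate_cascade(graph, base_probs, random.Random(seed), trials)
        best_u, best_gain = None, float("-inf")
        for u in graph:
            probs = dict(base_probs)
            probs[u] = min(1.0, slope[u] * (discounts[u] + inc))
            gain = simulate_cascade(graph, probs, random.Random(seed), trials) - base
            if gain > best_gain:
                best_u, best_gain = u, gain
        discounts[best_u] += inc
        remaining -= inc
    return dict(discounts)
```

For example, on a three-node path graph `{"a": [("b", 0.5)], "b": [("c", 0.5)], "c": []}` with unit slopes and budget 1.0, the allocation sums exactly to the budget; the (1-1/e) guarantee analyzed in the paper relies on the submodularity of the expected spread, which this sketch does not prove.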
Related papers
- Sparsing Law: Towards Large Language Models with Greater Activation Sparsity [62.09617609556697]
Activation sparsity denotes the existence of substantial weakly-contributed elements within activation outputs that can be eliminated.
We propose PPL-$p%$ sparsity, a precise and performance-aware activation sparsity metric.
We show that ReLU is more efficient as the activation function than SiLU and can leverage more training data to improve activation sparsity.
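The raw quantity behind activation sparsity, the fraction of near-zero entries in a layer's output, is easy to measure; the PPL-p% metric proposed in the paper additionally ties that fraction to perplexity degradation, which the sketch below does not attempt. The function name and threshold convention here are illustrative assumptions.

```python
import random

def activation_sparsity(activations, threshold=0.0):
    """Fraction of activation entries whose magnitude is at or below
    `threshold`; for ReLU outputs a threshold of exactly zero suffices."""
    acts = list(activations)
    return sum(1 for a in acts if abs(a) <= threshold) / len(acts)

# ReLU zeroes out every negative pre-activation, so a symmetric input
# distribution yields roughly 50% sparsity -- one intuition for why
# ReLU is friendlier to sparsity than smooth functions like SiLU.
rng = random.Random(0)
pre = [rng.gauss(0.0, 1.0) for _ in range(10_000)]
post = [max(x, 0.0) for x in pre]  # ReLU
```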
arXiv Detail & Related papers (2024-11-04T17:59:04Z) - Influence Maximization via Graph Neural Bandits [54.45552721334886]
We set the IM problem in a multi-round diffusion campaign, aiming to maximize the number of distinct users that are influenced.
We propose the framework IM-GNB (Influence Maximization with Graph Neural Bandits), where we provide an estimate of the users' probabilities of being influenced.
arXiv Detail & Related papers (2024-06-18T17:54:33Z) - GLANCE: Global Actions in a Nutshell for Counterfactual Explainability [10.25011737760687]
We introduce GLANCE, a versatile and adaptive framework, comprising two algorithms.
C-GLANCE employs a clustering approach that considers both the feature space and the space of counterfactual actions.
T-GLANCE provides additional features to enhance flexibility.
arXiv Detail & Related papers (2024-05-29T09:24:25Z) - Cost-Sensitive Multi-Fidelity Bayesian Optimization with Transfer of Learning Curve Extrapolation [55.75188191403343]
We introduce utility, which is a function predefined by each user and describes the trade-off between cost and performance of BO.
We validate our algorithm on various LC datasets and find that it outperforms all the previous multi-fidelity BO and transfer-BO baselines we consider.
arXiv Detail & Related papers (2024-05-28T07:38:39Z) - Shaving Weights with Occam's Razor: Bayesian Sparsification for Neural Networks Using the Marginal Likelihood [86.70040320522432]
Neural network sparsification is a promising avenue to save computational time and memory costs.
We present Sparsifiability via the Marginal likelihood (SpaM), a pruning framework.
We demonstrate the effectiveness of our framework, especially at high sparsity levels.
arXiv Detail & Related papers (2024-02-25T03:48:13Z) - Influence Maximization with Unknown Individual Effect on General Network [3.4049427793086324]
The identification of a seed set to maximize information spread in a network is crucial, a concept known as Influence Maximization (IM).
IM algorithms naturally extend to cases where each node is equipped with a specific weight, referred to as its individual effect, to measure the node's importance.
In our paper, we address this through the development of a Causal Influence Maximization (CauIM) algorithm.
arXiv Detail & Related papers (2023-01-28T15:34:03Z) - Efficient Neural Network Analysis with Sum-of-Infeasibilities [64.31536828511021]
Inspired by sum-of-infeasibilities methods in convex optimization, we propose a novel procedure for analyzing verification queries on networks with extensive branching functions.
An extension to a canonical case-analysis-based complete search procedure can be achieved by replacing the convex procedure executed at each search state with DeepSoI.
arXiv Detail & Related papers (2022-03-19T15:05:09Z) - GAC: A Deep Reinforcement Learning Model Toward User Incentivization in Unknown Social Networks [3.3946853660795884]
We propose an end-to-end reinforcement learning-based framework, named Geometric Actor-Critic (GAC), to discover effective incentive allocation policies.
We use three real-world social network datasets to evaluate the performance of the proposed GAC.
arXiv Detail & Related papers (2022-03-17T19:41:49Z) - Identifying Influential Users in Unknown Social Networks for Adaptive Incentive Allocation Under Budget Restriction [24.793013471521924]
Incentivization has been shown to be a proactive way to influence users' behaviors.
We propose a novel algorithm for exploring influential users in unknown networks.
We design an adaptive incentive allocation approach that determines incentive values based on users' preferences and their influence ability.
arXiv Detail & Related papers (2021-07-13T11:20:10Z) - Dynamic Knapsack Optimization Towards Efficient Multi-Channel Sequential Advertising [52.3825928886714]
We formulate the sequential advertising strategy optimization as a dynamic knapsack problem.
We propose a theoretically guaranteed bilevel optimization framework, which significantly reduces the solution space of the original optimization problem.
To improve the exploration efficiency of reinforcement learning, we also devise an effective action space reduction approach.
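The combinatorial core of that formulation is the knapsack problem; a minimal 0/1 knapsack dynamic program conveys the base structure, though it is only illustrative — the paper's dynamic, bilevel variant with reinforcement learning is considerably more involved, and the function name here is an assumption.

```python
def knapsack_max_value(values, costs, budget):
    """0/1 knapsack DP: dp[b] holds the best total value achievable with
    total (integer) cost at most b, considering items one at a time."""
    dp = [0] * (budget + 1)
    for value, cost in zip(values, costs):
        # Iterate budgets in descending order so each item is used at most once.
        for b in range(budget, cost - 1, -1):
            dp[b] = max(dp[b], dp[b - cost] + value)
    return dp[budget]
```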
arXiv Detail & Related papers (2020-06-29T18:50:35Z) - Maximizing Cumulative User Engagement in Sequential Recommendation: An Online Optimization Perspective [26.18096797120916]
One often needs to trade off two potentially conflicting objectives: pursuing higher immediate user engagement and encouraging longer user browsing.
We propose a flexible and practical framework to explicitly trade off longer user browsing length against high immediate user engagement.
This approach, deployed at a large e-commerce platform, achieved an over-7% improvement in cumulative clicks.
arXiv Detail & Related papers (2020-06-02T09:02:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.