Online Influence Maximization under Decreasing Cascade Model
- URL: http://arxiv.org/abs/2305.15428v1
- Date: Fri, 19 May 2023 07:38:36 GMT
- Title: Online Influence Maximization under Decreasing Cascade Model
- Authors: Fang Kong, Jize Xie, Baoxiang Wang, Tao Yao, Shuai Li
- Abstract summary: We study online influence maximization (OIM) under a new model of decreasing cascade (DC).
In DC, the chance of an influence attempt being successful reduces with previous failures.
We propose the DC-UCB algorithm, which achieves a regret bound of the same order as the state-of-the-art works on the IC model.
- Score: 18.536030474361723
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study online influence maximization (OIM) under a new model of decreasing
cascade (DC). This model is a generalization of the independent cascade (IC)
model by considering the common phenomenon of market saturation. In DC, the
chance of an influence attempt being successful reduces with previous failures.
This effect is neglected by previous OIM works under the IC and linear threshold
models. We propose the DC-UCB algorithm to solve this problem, which achieves a
regret bound of the same order as the state-of-the-art works on the IC model.
Extensive experiments on both synthetic and real datasets show the
effectiveness of our algorithm.
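The abstract describes the DC mechanism only at a high level. Below is a minimal sketch of one DC diffusion, assuming a multiplicative decay of the per-edge success probability with each failed attempt on a node; the function names, data layout, and decay parameter are illustrative choices rather than the paper's specification, and the DC-UCB algorithm itself (which layers UCB-style exploration over the unknown probabilities) is not reproduced here.
```python
import random

def attempt_probability(base_prob, num_failures, decay=0.5):
    # The DC model only requires that this be non-increasing in
    # num_failures; the multiplicative form is an illustrative choice.
    return base_prob * (decay ** num_failures)

def simulate_dc_cascade(neighbors, base_prob, seeds, decay=0.5):
    """One diffusion under a decreasing cascade (DC): newly activated
    nodes attempt to activate their inactive out-neighbors, and every
    failed attempt on a node lowers the success chance of later
    attempts on that node (market saturation). Under the IC model,
    `failures` would simply be ignored."""
    active = set(seeds)
    frontier = list(seeds)
    failures = {}  # node -> number of failed influence attempts so far
    while frontier:
        next_frontier = []
        for u in frontier:
            for v in neighbors.get(u, ()):
                if v in active:
                    continue
                p = attempt_probability(base_prob[(u, v)],
                                        failures.get(v, 0), decay)
                if random.random() < p:
                    active.add(v)
                    next_frontier.append(v)
                else:
                    failures[v] = failures.get(v, 0) + 1
        frontier = next_frontier
    return active  # influenced set; OIM picks seeds to maximize its expected size
```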
Related papers
- Revisiting Catastrophic Forgetting in Large Language Model Tuning [79.70722658190097]
Catastrophic forgetting (CF) refers to models forgetting previously acquired knowledge when learning new data.
This paper takes the first step to reveal the direct link between the flatness of the model loss landscape and the extent of CF in the field of large language models.
Experiments on three widely-used fine-tuning datasets, spanning different model scales, demonstrate the effectiveness of our method in alleviating CF.
arXiv Detail & Related papers (2024-06-07T11:09:13Z)
- Any-step Dynamics Model Improves Future Predictions for Online and Offline Reinforcement Learning [11.679095516650593]
We propose the Any-step Dynamics Model (ADM) to mitigate the compounding error by reducing bootstrapping prediction to direct prediction.
ADM allows for the use of variable-length plans as inputs for predicting future states without frequent bootstrapping.
We design two algorithms, ADMPO-ON and ADMPO-OFF, which apply ADM in online and offline model-based frameworks.
arXiv Detail & Related papers (2024-05-27T10:33:53Z)
- A PAC-Bayesian Perspective on the Interpolating Information Criterion [54.548058449535155]
We show how a PAC-Bayes bound is obtained for a general class of models, characterizing factors which influence performance in the interpolating regime.
We quantify how the test error for overparameterized models achieving effectively zero training error depends on the quality of the implicit regularization imposed by, e.g., the combination of model and parameter-initialization scheme (a generic form of such a bound is recalled after this list).
arXiv Detail & Related papers (2023-11-13T01:48:08Z)
- Model-based Causal Bayesian Optimization [74.78486244786083]
We introduce the first algorithm for Causal Bayesian Optimization with Multiplicative Weights (CBO-MW).
We derive regret bounds for CBO-MW that naturally depend on graph-related quantities.
Our experiments include a realistic demonstration of how CBO-MW can be used to learn users' demand patterns in a shared mobility system.
arXiv Detail & Related papers (2023-07-31T13:02:36Z)
- Jointly Complementary&Competitive Influence Maximization with Concurrent Ally-Boosting and Rival-Preventing [12.270411279495097]
The C$^2$IC model considers both complementary and competitive influence spread comprehensively in a multi-agent environment.
We show that the problem is NP-hard and generalizes both the influence boosting problem and the influence blocking problem.
We conduct extensive experiments on real social networks and the experimental results demonstrate the effectiveness of the proposed algorithms.
arXiv Detail & Related papers (2023-02-19T16:41:53Z)
- Stochastic Methods for AUC Optimization subject to AUC-based Fairness Constraints [51.12047280149546]
A direct approach to obtaining a fair predictive model is to train it by optimizing prediction performance subject to fairness constraints.
We formulate the training problem of a fairness-aware machine learning model as an AUC optimization problem subject to a class of AUC-based fairness constraints (a generic instance of this formulation is sketched after this list).
We demonstrate the effectiveness of our approach on real-world data under different fairness metrics.
arXiv Detail & Related papers (2022-12-23T22:29:08Z)
- Principled Pruning of Bayesian Neural Networks through Variational Free Energy Minimization [2.3999111269325266]
We formulate and apply Bayesian model reduction to perform principled pruning of Bayesian neural networks.
A novel iterative pruning algorithm is presented to alleviate the problems arising with naive Bayesian model reduction.
Our experiments indicate better model performance in comparison to state-of-the-art pruning schemes.
arXiv Detail & Related papers (2022-10-17T14:34:42Z)
- When to Update Your Model: Constrained Model-based Reinforcement Learning [50.74369835934703]
We propose a novel and general theoretical scheme for a non-decreasing performance guarantee of model-based RL (MBRL).
Our follow-up derived bounds reveal the relationship between model shifts and performance improvement.
A further example demonstrates that learning models from a dynamically-varying number of explorations benefits the eventual returns.
arXiv Detail & Related papers (2022-10-15T17:57:43Z)
- FOSTER: Feature Boosting and Compression for Class-Incremental Learning [52.603520403933985]
Deep neural networks suffer from catastrophic forgetting when learning new categories.
We propose a novel two-stage learning paradigm FOSTER, empowering the model to learn new categories adaptively.
arXiv Detail & Related papers (2022-04-10T11:38:33Z)
- Counterfactual fairness: removing direct effects through regularization [0.0]
We propose a new definition of fairness that incorporates causality through the Controlled Direct Effect (CDE).
We develop regularizations to tackle classical fairness measures and present a causal regularization that satisfies our new fairness definition.
Our approach was found to mitigate unfairness in the predictions with only small reductions in model performance (a minimal sketch of such a causal penalty follows this list).
arXiv Detail & Related papers (2020-02-25T10:13:55Z)
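For reference on the PAC-Bayesian entry above: the classical bound (a standard McAllester/Maurer form for an i.i.d. sample, not the paper's interpolating-regime result) reads

$$ L(Q) \;\le\; \widehat{L}(Q) + \sqrt{\frac{\mathrm{KL}(Q\,\|\,P) + \ln\!\big(2\sqrt{n}/\delta\big)}{2n}}, $$

where $P$ is a prior fixed before seeing the $n$ samples, $Q$ is any posterior over models, $\widehat{L}$ is the empirical risk, and $L$ is the population risk; the bound holds with probability at least $1-\delta$ simultaneously for all $Q$.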
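For the AUC-fairness entry, a generic instance of the constrained formulation (the exact constraint family used in the paper may differ) is

$$ \max_{w}\ \Pr\big(f_w(x^+) > f_w(x^-)\big) \quad \text{s.t.} \quad \big|\mathrm{AUC}_{g}(w) - \mathrm{AUC}_{g'}(w)\big| \le \kappa \ \ \text{for all groups } g \neq g', $$

where $\mathrm{AUC}_g$ is the AUC computed with examples restricted to demographic group $g$ and $\kappa \ge 0$ is a tolerance; stochastic methods then optimize this problem, typically through a surrogate or Lagrangian form.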
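For the counterfactual-fairness entry, here is a minimal sketch of a CDE-style penalty, assuming a binary sensitive attribute and a differentiable plug-in estimate of the controlled direct effect; the loss shape, flipping intervention, and PyTorch framing are illustrative, not the paper's implementation.
```python
import torch
import torch.nn.functional as F

def cde_regularized_loss(model, x, a, y, lam=1.0):
    """Task loss plus a penalty on a plug-in Controlled Direct Effect (CDE):
    the average change in the model's prediction when the binary sensitive
    attribute `a` is flipped while all other features `x` are held fixed.
    Assumes `model` outputs probabilities in (0, 1)."""
    pred = model(torch.cat([x, a], dim=1))
    task_loss = F.binary_cross_entropy(pred, y)

    # Intervene on the sensitive attribute only, holding x fixed.
    pred_flipped = model(torch.cat([x, 1.0 - a], dim=1))
    cde = (pred - pred_flipped).mean()

    return task_loss + lam * cde.abs()
```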