Correlation Robust Influence Maximization
- URL: http://arxiv.org/abs/2010.14620v2
- Date: Tue, 22 Feb 2022 05:51:28 GMT
- Title: Correlation Robust Influence Maximization
- Authors: Louis Chen, Divya Padmanabhan, Chee Chin Lim, Karthik Natarajan
- Abstract summary: We propose a distributionally robust model for the influence maximization problem.
We seek a seed set whose expected influence under the worst correlation is maximized.
We show that this worst-case influence can be efficiently computed.
- Score: 5.508091917582913
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a distributionally robust model for the influence maximization
problem. Unlike the classic independent cascade model (Kempe et al., 2003),
this model's diffusion process is adversarially
adapted to the choice of seed set. Hence, instead of optimizing under the
assumption that all influence relationships in the network are independent, we
seek a seed set whose expected influence under the worst correlation, i.e. the
"worst-case, expected influence", is maximized. We show that this worst-case
influence can be efficiently computed, and though the optimization is NP-hard,
a ($1 - 1/e$) approximation guarantee holds. We also analyze the structure of
the adversary's choice of diffusion process and contrast it with established
models. Beyond the key computational advantages, we also highlight the extent
to which the independence assumption may cost optimality, and provide insights
from numerical experiments comparing the adversarial and independent cascade
model.
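The ($1 - 1/e$) guarantee cited above is the hallmark of greedy maximization of a monotone submodular set function. As a rough illustration, the sketch below runs that greedy procedure with a Monte Carlo independent cascade estimator standing in for the influence oracle; the paper's efficiently computable worst-case expected influence is not reproduced here, and `greedy_seed_selection`, `ic_influence`, and all parameter values are hypothetical names chosen for this example.

```python
import random

def ic_influence(graph, seeds, p=0.1, trials=200, rng=random):
    """Monte Carlo estimate of expected spread under the classic
    independent cascade model: each newly activated node gets one
    chance to activate each out-neighbor with probability p."""
    total = 0
    for _ in range(trials):
        active = set(seeds)
        frontier = list(seeds)
        while frontier:
            u = frontier.pop()
            for v in graph.get(u, ()):
                if v not in active and rng.random() < p:
                    active.add(v)
                    frontier.append(v)
        total += len(active)
    return total / trials

def greedy_seed_selection(graph, k, influence_fn):
    """Greedy set-function maximization: repeatedly add the node with
    the largest marginal gain. For a monotone submodular objective
    this attains the (1 - 1/e) approximation guarantee."""
    seeds = set()
    for _ in range(k):
        base = influence_fn(graph, seeds)
        best, best_gain = None, float("-inf")
        for v in graph:
            if v in seeds:
                continue
            gain = influence_fn(graph, seeds | {v}) - base
            if gain > best_gain:
                best, best_gain = v, gain
        seeds.add(best)
    return seeds

if __name__ == "__main__":
    # Tiny directed toy graph: node -> list of out-neighbors.
    toy = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []}
    print(greedy_seed_selection(toy, 2, ic_influence))
```

Swapping `ic_influence` for the paper's worst-case expected influence oracle would give the correlation robust variant the abstract describes; only the oracle changes, not the greedy outer loop.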
Related papers
- Soft Preference Optimization: Aligning Language Models to Expert Distributions [40.84391304598521]
SPO is a method for aligning generative models, such as Large Language Models (LLMs), with human preferences.
SPO integrates preference loss with a regularization term across the model's entire output distribution.
We showcase SPO's methodology, its theoretical foundation, and its comparative advantages in simplicity, computational efficiency, and alignment precision.
arXiv Detail & Related papers (2024-04-30T19:48:55Z)
- The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Out-of-Distribution Robustness [50.52507648690234]
Federated learning risks skewing fine-tuning features and compromising the robustness of the model.
We introduce three robustness indicators and conduct experiments across diverse robustness datasets.
Our approach markedly enhances the robustness across diverse scenarios, encompassing various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z)
- Model-based Causal Bayesian Optimization [74.78486244786083]
We introduce the first algorithm for Causal Bayesian Optimization with Multiplicative Weights (CBO-MW).
We derive regret bounds for CBO-MW that naturally depend on graph-related quantities.
Our experiments include a realistic demonstration of how CBO-MW can be used to learn users' demand patterns in a shared mobility system.
arXiv Detail & Related papers (2023-07-31T13:02:36Z)
- Jointly Complementary&Competitive Influence Maximization with Concurrent Ally-Boosting and Rival-Preventing [12.270411279495097]
The C$^2$IC model comprehensively considers both complementary and competitive influence spread in a multi-agent environment.
We show that the problem is NP-hard and that it generalizes both the influence boosting problem and the influence blocking problem.
We conduct extensive experiments on real social networks and the experimental results demonstrate the effectiveness of the proposed algorithms.
arXiv Detail & Related papers (2023-02-19T16:41:53Z)
- Model-based Causal Bayesian Optimization [78.120734120667]
We propose model-based causal Bayesian optimization (MCBO).
MCBO learns a full system model instead of only modeling intervention-reward pairs.
Unlike in standard Bayesian optimization, our acquisition function cannot be evaluated in closed form.
arXiv Detail & Related papers (2022-11-18T14:28:21Z)
- Break The Spell Of Total Correlation In betaTCVAE [4.38301148531795]
This paper proposes a new iterative decomposition path for total correlation and explains the disentangled representation ability of the VAE.
The model enables the VAE to adjust its parameter capacity to flexibly separate dependent and independent data features.
arXiv Detail & Related papers (2022-10-17T07:16:53Z)
- Pseudo-Spherical Contrastive Divergence [119.28384561517292]
We propose pseudo-spherical contrastive divergence (PS-CD) to generalize maximum likelihood learning of energy-based models.
PS-CD avoids the intractable partition function and provides a generalized family of learning objectives.
arXiv Detail & Related papers (2021-11-01T09:17:15Z)
- Latent Causal Invariant Model [128.7508609492542]
Current supervised learning can pick up spurious correlations during the data-fitting process.
We propose a Latent Causal Invariance Model (LaCIM) which pursues causal prediction.
arXiv Detail & Related papers (2020-11-04T10:00:27Z)
- Robust Bayesian Inference for Discrete Outcomes with the Total Variation Distance [5.139874302398955]
Models of discrete-valued outcomes are easily misspecified if the data exhibit zero-inflation, overdispersion or contamination.
Here, we introduce a robust discrepancy-based Bayesian approach using the Total Variation Distance (TVD).
We empirically demonstrate that our approach is robust and significantly improves predictive performance on a range of simulated and real world data.
arXiv Detail & Related papers (2020-10-26T09:53:06Z)
- Decomposed Adversarial Learned Inference [118.27187231452852]
We propose a novel approach, Decomposed Adversarial Learned Inference (DALI).
DALI explicitly matches prior and conditional distributions in both data and code spaces.
We validate the effectiveness of DALI on the MNIST, CIFAR-10, and CelebA datasets.
arXiv Detail & Related papers (2020-04-21T20:00:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.