Network Inference and Influence Maximization from Samples
- URL: http://arxiv.org/abs/2106.03403v1
- Date: Mon, 7 Jun 2021 08:06:36 GMT
- Title: Network Inference and Influence Maximization from Samples
- Authors: Wei Chen, Xiaoming Sun, Jialin Zhang, Zhijie Zhang
- Abstract summary: We study the task of selecting a small number of seed nodes in a social network to maximize the spread of the influence from these seeds.
We also provide a novel solution to the network inference problem, that is, learning diffusion parameters and the network structure from the cascade data.
Our IMS algorithms enhance the learning-and-then-optimization approach by allowing a constant approximation ratio even when the diffusion parameters are hard to learn.
- Score: 20.916163957596577
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Influence maximization is the task of selecting a small number of seed nodes
in a social network to maximize the spread of the influence from these seeds,
and it has been widely investigated in the past two decades. In the canonical
setting, the whole social network as well as its diffusion parameters is given
as input. In this paper, we consider the more realistic sampling setting where
the network is unknown and we only have a set of passively observed cascades
that record the set of activated nodes at each diffusion step. We study the
task of influence maximization from these cascade samples (IMS), and present
constant approximation algorithms for this task under mild conditions on the
seed set distribution. To achieve the optimization goal, we also provide a
novel solution to the network inference problem, that is, learning diffusion
parameters and the network structure from the cascade data. Compared with
prior solutions, our network inference algorithm requires weaker assumptions
and does not rely on maximum-likelihood estimation or convex programming. Our
IMS algorithms enhance the learning-and-then-optimization approach by allowing
a constant approximation ratio even when the diffusion parameters are hard to
learn, and we do not need any assumption related to the network structure or
diffusion parameters.
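
To make the learning-and-then-optimization baseline concrete, here is a minimal sketch (in Python) of the two stages under the independent cascade model: estimating edge activation probabilities from cascade samples by simple frequency counting, and then selecting seeds with the standard greedy rule using Monte Carlo spread estimates. This is a generic illustration rather than the algorithms of this paper; the function names and the naive frequency estimator are assumptions made for exposition.

```python
import random
from collections import defaultdict
from itertools import product

def estimate_edge_probs(cascades, nodes):
    """Naive frequency estimator of independent-cascade edge probabilities.

    `nodes` is the set of all node ids; each cascade is a list of sets,
    where cascade[t] holds the nodes newly activated at diffusion step t.
    For an ordered pair (u, v), we count how often v becomes active one
    step after u does, among steps where u is newly active and v is not
    yet active.
    """
    trials, hits = defaultdict(int), defaultdict(int)
    for cascade in cascades:
        active = set()
        for t, newly in enumerate(cascade):
            nxt = cascade[t + 1] if t + 1 < len(cascade) else set()
            inactive = nodes - active - newly
            for u, v in product(newly, inactive):
                trials[(u, v)] += 1
                if v in nxt:
                    hits[(u, v)] += 1
            active |= newly
    return {e: hits[e] / trials[e] for e in trials}

def simulate_spread(probs, seeds, rounds=200):
    """Monte Carlo estimate of the expected IC spread of a seed set."""
    total = 0
    for _ in range(rounds):
        active, frontier = set(seeds), set(seeds)
        while frontier:
            nxt = {v for (u, v), p in probs.items()
                   if u in frontier and v not in active and random.random() < p}
            active |= nxt
            frontier = nxt
        total += len(active)
    return total / rounds

def greedy_seeds(probs, nodes, k):
    """Standard greedy seed selection on the inferred edge probabilities."""
    seeds = set()
    for _ in range(k):
        best = max(nodes - seeds,
                   key=lambda v: simulate_spread(probs, seeds | {v}))
        seeds.add(best)
    return seeds
```

The guarantee of this baseline degrades whenever the per-edge estimates are poor; the point of the abstract is that the proposed IMS algorithms retain a constant approximation ratio even when the diffusion parameters are hard to learn.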
Related papers
- Diffusion Models as Network Optimizers: Explorations and Analysis [71.69869025878856]
Generative diffusion models (GDMs) have emerged as a promising new approach to network optimization.
In this study, we first explore the intrinsic characteristics of generative models.
We provide a concise theoretical and intuitive demonstration of the advantages of generative models over discriminative network optimization.
arXiv Detail & Related papers (2024-11-01T09:05:47Z)
- DSCom: A Data-Driven Self-Adaptive Community-Based Framework for Influence Maximization in Social Networks [3.97535858363999]
We reformulate the problem on the attributed network and leverage the node attributes to estimate the closeness between connected nodes.
Specifically, we propose a machine learning-based framework, named DSCom, to address this problem.
Compared to previous theoretical works, we carefully design empirical experiments with parameterized diffusion models based on real-world social networks.
arXiv Detail & Related papers (2023-11-18T14:03:43Z)
- Amortizing intractable inference in large language models [56.92471123778389]
We use amortized Bayesian inference to sample from intractable posterior distributions.
We empirically demonstrate that this distribution-matching paradigm of LLM fine-tuning can serve as an effective alternative to maximum-likelihood training.
As an important application, we interpret chain-of-thought reasoning as a latent variable modeling problem.
arXiv Detail & Related papers (2023-10-06T16:36:08Z)
- Optimization Guarantees of Unfolded ISTA and ADMM Networks With Smooth Soft-Thresholding [57.71603937699949]
We study optimization guarantees, i.e., achieving near-zero training loss as the number of training epochs increases.
We show that the threshold on the number of training samples increases with the network width.
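
As background for the "smooth soft-thresholding" in the title: soft-thresholding is the proximal operator of the l1 penalty, and an unfolded ISTA network stacks ISTA iterations as layers. The sketch below (NumPy) shows the exact operator, one softplus-based smooth surrogate chosen here purely for illustration (the paper's exact smoothing may differ), and a single unfolded iteration.

```python
import numpy as np

def soft_threshold(x, lam):
    """Exact soft-thresholding: the proximal operator of lam * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def smooth_soft_threshold(x, lam, beta=20.0):
    """Softplus-smoothed surrogate; approaches soft_threshold as beta grows."""
    return np.sign(x) * np.logaddexp(0.0, beta * (np.abs(x) - lam)) / beta

def ista_layer(x, A, b, lam, step):
    """One unfolded ISTA iteration: a gradient step on ||Ax - b||^2 / 2,
    followed by the (smoothed) proximal step."""
    return smooth_soft_threshold(x - step * A.T @ (A @ x - b), lam * step)
```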
arXiv Detail & Related papers (2023-09-12T13:03:47Z)
- Provably Efficient Reinforcement Learning for Online Adaptive Influence Maximization [53.11458949694947]
We consider an adaptive version of the content-dependent online influence maximization problem, where seed nodes are sequentially activated based on real-time feedback.
Our algorithm maintains a network model estimate and selects seeds adaptively, exploring the social network while optimistically improving toward the optimal policy.
arXiv Detail & Related papers (2022-06-29T18:17:28Z)
- PrEF: Percolation-based Evolutionary Framework for the diffusion-source-localization problem in large networks [11.998014357342333]
We formulate a candidate set that is guaranteed to contain the diffusion source, and propose the Percolation-based Evolutionary Framework (PrEF) to minimize this set.
PrEF is developed based on network percolation and evolutionary algorithms.
Our results show that the developed approach achieves a much smaller candidate set than the state of the art in almost all cases.
arXiv Detail & Related papers (2022-05-16T02:15:14Z)
- Learning Parameters for Balanced Index Influence Maximization [0.45119235878273]
We focus on the Balance Index (BI) algorithm, which relies on three parameters to tune its performance to the given network structure.
We create small snapshots from the given synthetic and large-scale real-world networks.
We train our machine-learning model on the snapshots and apply this model to the real-world network to find the best BI parameters.
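
The pipeline described above (sample snapshots, find the best parameters per snapshot, fit a model mapping network features to parameters) can be sketched as follows; the featurization, the random snapshot sampler, and the random-forest regressor are assumptions made for illustration, not details taken from the paper.

```python
import random
from collections import defaultdict
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def snapshot(edges, fraction=0.1):
    """Sample a small subgraph ('snapshot') by keeping a random fraction of edges."""
    return random.sample(list(edges), max(1, int(fraction * len(edges))))

def graph_features(edges):
    """Hand-crafted summary features of a (sub)graph; illustrative only."""
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    d = np.array(list(degree.values()), dtype=float)
    return [len(degree), len(edges), d.mean(), d.std(), d.max()]

def fit_parameter_model(snapshots, best_params):
    """Learn a map from snapshot features to the best-performing parameter triple.

    best_params[i] is the parameter setting found (e.g. by grid search plus
    simulation) to work best on snapshots[i].
    """
    X = np.array([graph_features(s) for s in snapshots])
    y = np.array(best_params)          # shape: (num_snapshots, 3)
    return RandomForestRegressor().fit(X, y)

# Usage sketch: predict parameters for the full network from its features.
# model = fit_parameter_model(snaps, best_params)
# params = model.predict(np.array([graph_features(full_edges)]))[0]
```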
arXiv Detail & Related papers (2020-12-15T03:30:54Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization for large-scale data with a deep neural network as the predictive model.
In theory, our method requires far fewer communication rounds than existing approaches.
Experiments on several datasets demonstrate the effectiveness of our method and confirm our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
- MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient-based learning combined with the nonconvexity of the underlying optimization renders training susceptible to a variety of problems.
We propose fusing neighboring layers of deeper networks that are trained with random initializations.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)