Learning to maximize global influence from local observations
- URL: http://arxiv.org/abs/2109.11909v1
- Date: Fri, 24 Sep 2021 11:59:44 GMT
- Title: Learning to maximize global influence from local observations
- Authors: Gábor Lugosi, Gergely Neu, Julia Olkhovskaya
- Abstract summary: We study a family of online influence maximization problems where, in a sequence of rounds $t=1,\ldots,T$, a decision maker selects one from a large number of agents with the goal of maximizing influence.
The goal of the decision maker is to select the sequence of agents in a way that maximizes the total number of influenced nodes in the network.
- Score: 12.611900695498218
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study a family of online influence maximization problems where, in a sequence
of rounds $t=1,\ldots,T$, a decision maker selects one from a large number of
agents with the goal of maximizing influence. Upon choosing an agent, the
decision maker shares a piece of information with the agent, which information
then spreads in an unobserved network over which the agents communicate. The
goal of the decision maker is to select the sequence of agents in a way that
maximizes the total number of influenced nodes in the network. In this work, we consider
a scenario where the networks are generated independently for each $t$
according to some fixed but unknown distribution, so that the set of influenced
nodes corresponds to the connected component of the random graph containing the
vertex corresponding to the selected agent. Furthermore, we assume that the
decision maker only has access to limited feedback: instead of making the
unrealistic assumption that the entire network is observable, we suppose that
the available feedback is generated based on a small neighborhood of the
selected vertex. Our results show that such partial local observations can be
sufficient for maximizing global influence. We model the underlying random
graph as a sparse inhomogeneous Erdős–Rényi graph, and study three
specific families of random graph models in detail: stochastic block models,
Chung–Lu models and Kronecker random graphs. We show that in these cases one
may learn to maximize influence by merely observing the degree of the selected
vertex in the generated random graph. We propose sequential learning algorithms
that aim at maximizing influence, and provide their theoretical analysis in
both the subcritical and supercritical regimes of all considered models.
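The feedback model above can be made concrete with a small, self-contained sketch (an illustrative toy, not the authors' algorithm): each round a fresh Chung–Lu graph is sampled, an ε-greedy learner picks a vertex and observes only that vertex's degree, and the running mean degree serves as a proxy for expected component size. The weights, horizon, and exploration rate below are all hypothetical.

```python
import random
from collections import defaultdict

def sample_chung_lu(weights, rng):
    """Sample an inhomogeneous Erdos-Renyi (Chung-Lu) graph: edge (i, j)
    appears independently with probability min(w_i * w_j / sum(w), 1)."""
    n, total = len(weights), sum(weights)
    adj = defaultdict(set)
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < min(weights[i] * weights[j] / total, 1.0):
                adj[i].add(j)
                adj[j].add(i)
    return adj

def component_size(adj, v):
    """Size of the connected component of v, i.e. the influenced set."""
    seen, stack = {v}, [v]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen)

def run_epsilon_greedy(weights, rounds=1500, eps=0.2, seed=0):
    """Pick a vertex each round; the only feedback is its degree in the
    freshly sampled graph.  The true influence (component size) is
    accumulated for evaluation but never shown to the learner."""
    rng = random.Random(seed)
    n = len(weights)
    counts, mean_deg = [0] * n, [0.0] * n
    total_influence = 0
    for t in range(rounds):
        if t < n or rng.random() < eps:
            v = rng.randrange(n)                              # explore
        else:
            v = max(range(n), key=lambda i: mean_deg[i])      # exploit
        adj = sample_chung_lu(weights, rng)
        counts[v] += 1
        mean_deg[v] += (len(adj[v]) - mean_deg[v]) / counts[v]  # local feedback
        total_influence += component_size(adj, v)
    return total_influence / rounds, max(range(n), key=lambda i: mean_deg[i])

# Two high-weight vertices (8 and 9) hidden among eight low-weight ones:
avg_influence, best_vertex = run_epsilon_greedy([0.5] * 8 + [3.0] * 2, seed=1)
```

With these illustrative weights the expected degree of a vertex of weight w is roughly w, so degree feedback alone separates the two influential vertices from the rest, mirroring the paper's message that local observations can suffice for global influence.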
Related papers
- Influence Maximization via Graph Neural Bandits [54.45552721334886]
We cast the influence maximization (IM) problem as a multi-round diffusion campaign, aiming to maximize the number of distinct users that are influenced.
We propose the framework IM-GNB (Influence Maximization with Graph Neural Bandits), where we provide an estimate of the users' probabilities of being influenced.
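A minimal bandit sketch of the multi-round, distinct-user objective (a toy stand-in for intuition, not the IM-GNB estimator): UCB1 chooses a seed user each round, the reward is the number of newly influenced users, so cumulative reward equals the number of distinct users reached. The diffusion probabilities and parameters are hypothetical.

```python
import math
import random

def ucb_distinct_influence(influence_prob, n_users, rounds=300, seed=0):
    """UCB1 over candidate seeds; a round's reward counts only users
    influenced for the first time, matching the distinct-user objective."""
    rng = random.Random(seed)
    n_seeds = len(influence_prob)
    counts, means = [0] * n_seeds, [0.0] * n_seeds
    reached = set()
    for t in range(1, rounds + 1):
        if t <= n_seeds:
            a = t - 1                                   # play each arm once
        else:
            a = max(range(n_seeds), key=lambda i:
                    means[i] + math.sqrt(2 * math.log(t) / counts[i]))
        newly = {u for u in range(n_users)
                 if u not in reached and rng.random() < influence_prob[a][u]}
        reached |= newly
        counts[a] += 1
        means[a] += (len(newly) - means[a]) / counts[a]
    return len(reached)

# Seed 1 influences each user w.p. 0.3 per round; seed 0 influences no one.
n_reached = ucb_distinct_influence([[0.0] * 20, [0.3] * 20], n_users=20)
```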
arXiv Detail & Related papers (2024-06-18T17:54:33Z)
- Towards Generalizability of Multi-Agent Reinforcement Learning in Graphs with Recurrent Message Passing [0.9353820277714449]
In decentralized approaches, agents operate within a given graph and make decisions based on partial or outdated observations.
This work focuses on generalizability and resolves the trade-off in observed neighborhood size with a continuous information flow in the whole graph.
Our approach can be used in a decentralized manner at runtime and in combination with a reinforcement learning algorithm of choice.
arXiv Detail & Related papers (2024-02-07T16:53:09Z)
- Distributed Learning over Networks with Graph-Attention-Based Personalization [49.90052709285814]
We propose a graph-based personalized algorithm (GATTA) for distributed deep learning.
In particular, the personalized model in each agent is composed of a global part and a node-specific part.
By treating each agent as one node in a graph and the node-specific parameters as its features, the benefits of the graph attention mechanism can be inherited.
arXiv Detail & Related papers (2023-05-22T13:48:30Z)
- Graph Learning Across Data Silos [12.343382413705394]
We consider the problem of inferring graph topology from smooth graph signals in a novel but practical scenario.
Data are located in distributed clients and prohibited from leaving local clients due to factors such as privacy concerns.
We propose an auto-weighted multiple graph learning model to jointly learn a personalized graph for each local client and a single consensus graph for all clients.
arXiv Detail & Related papers (2023-01-17T02:14:57Z)
- Handling Distribution Shifts on Graphs: An Invariance Perspective [77.14319095965058]
We formulate the OOD problem for node-level prediction on graphs.
We develop a new domain-invariant learning approach, named Explore-to-Extrapolate Risk Minimization.
We theoretically prove that our method guarantees a valid OOD solution.
arXiv Detail & Related papers (2022-02-05T02:31:01Z)
- Graph-wise Common Latent Factor Extraction for Unsupervised Graph Representation Learning [40.70562886682939]
We propose a new principle for unsupervised graph representation learning: Graph-wise Common latent Factor EXtraction (GCFX).
GCFX explicitly extracts common latent factors from an input graph and achieves improved results on downstream tasks over the current state of the art.
Through extensive experiments and analysis, we demonstrate that GCFX is beneficial for graph-level tasks to alleviate distractions caused by local variations of individual nodes or local neighbourhoods.
arXiv Detail & Related papers (2021-12-16T12:22:49Z)
- Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z)
- Explicit Pairwise Factorized Graph Neural Network for Semi-Supervised Node Classification [59.06717774425588]
We propose the Explicit Pairwise Factorized Graph Neural Network (EPFGNN), which models the whole graph as a partially observed Markov Random Field.
It contains explicit pairwise factors to model output-output relations and uses a GNN backbone to model input-output relations.
We conduct experiments on various datasets, which show that our model can effectively improve performance for semi-supervised node classification on graphs.
arXiv Detail & Related papers (2021-07-27T19:47:53Z)
- Influence Maximization Under Generic Threshold-based Non-submodular Model [1.5780411262109524]
The concept of social influence maximization is to select a number of the most influential nodes (seed nodes) from a social network so that they can jointly trigger maximal influence diffusion.
In this paper, we propose seed selection strategies that exploit network graphical properties in a generalized threshold-based model, called the influence barricade model, which is non-submodular.
To the best of our knowledge, this is the first graph-based approach that directly tackles non-submodular influence.
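For intuition on threshold-based diffusion, here is a generic linear-threshold cascade from a seed set (a standard textbook model sketched for illustration, not the paper's influence barricade model; the graph and thresholds below are made up).

```python
def threshold_cascade(neighbors, thresholds, seeds):
    """Linear-threshold diffusion: an inactive node activates once the
    fraction of its active neighbors reaches its threshold; iterate to a
    fixed point and return the final active set."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v, nbrs in neighbors.items():
            if v not in active and nbrs:
                if sum(u in active for u in nbrs) / len(nbrs) >= thresholds[v]:
                    active.add(v)
                    changed = True
    return active

# A 4-node path: seeding node 0 cascades down the whole line.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
influenced = threshold_cascade(path, {v: 0.5 for v in path}, seeds={0})
```

Seed selection under non-submodular thresholds is exactly what makes the problem hard: the marginal gain of adding a seed can grow with the seed set, so greedy guarantees for submodular influence no longer apply.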
arXiv Detail & Related papers (2020-12-18T16:14:49Z)
- Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximate framework for fitting such non-trivial ERGs that results in dyadic independence (i.e., edge-independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.