Kernel-Based Models for Influence Maximization on Graphs based on
Gaussian Process Variance Minimization
- URL: http://arxiv.org/abs/2103.01575v1
- Date: Tue, 2 Mar 2021 08:55:34 GMT
- Title: Kernel-Based Models for Influence Maximization on Graphs based on
Gaussian Process Variance Minimization
- Authors: Salvatore Cuomo and Wolfgang Erb and Gabriele Santin
- Abstract summary: We introduce and investigate a novel model for influence maximization (IM) on graphs.
Data-driven approaches can be applied to determine proper kernels for this IM model.
Compared to models in this field that rely on costly Monte-Carlo simulations, our model allows for a simple and cost-efficient update strategy.
- Score: 9.357483974291899
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The inference of novel knowledge, the discovery of hidden patterns, and the
uncovering of insights from large amounts of data from a multitude of sources
make Data Science (DS) an art rather than a mere scientific discipline.
The study and design of mathematical models able to analyze information
represents a central research topic in DS. In this work, we introduce and
investigate a novel model for influence maximization (IM) on graphs using ideas
from kernel-based approximation, Gaussian process regression, and the
minimization of a corresponding variance term. Data-driven approaches can be
applied to determine proper kernels for this IM model and machine learning
methodologies are adopted to tune the model parameters. Compared to stochastic
models in this field that rely on costly Monte-Carlo simulations, our model
allows for a simple and cost-efficient update strategy to compute optimal
influencing nodes on a graph. In several numerical experiments, we show the
properties and benefits of this new model.
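The abstract outlines the core mechanics: a kernel on the graph nodes defines a Gaussian process, and influential nodes are selected as those whose inclusion most reduces the total GP posterior variance. As a rough illustration only (not the authors' implementation), the following Python sketch assumes a diffusion kernel built from the graph Laplacian, a fixed noise level, and a plain greedy selection loop; the kernel choice, the diffusion time t, the parameter sigma2, and all function names are illustrative assumptions.
```python
# Minimal sketch of influence maximization by GP variance minimization on a graph.
# NOT the paper's code: kernel, parameters, and the greedy loop are assumptions.
import numpy as np
from scipy.linalg import expm

def diffusion_kernel(adjacency: np.ndarray, t: float = 1.0) -> np.ndarray:
    """Graph diffusion kernel K = exp(-t L), with L the combinatorial Laplacian."""
    laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
    return expm(-t * laplacian)

def total_posterior_variance(K: np.ndarray, selected: list, sigma2: float) -> float:
    """Sum over all nodes of the GP posterior variance given the selected nodes."""
    if not selected:
        return float(np.trace(K))
    K_ss = K[np.ix_(selected, selected)] + sigma2 * np.eye(len(selected))
    K_as = K[:, selected]
    # Posterior covariance is K - K_as K_ss^{-1} K_as^T; only its trace is needed.
    reduction = np.einsum("ij,ij->", K_as @ np.linalg.inv(K_ss), K_as)
    return float(np.trace(K) - reduction)

def greedy_influencers(K: np.ndarray, n_select: int, sigma2: float = 1e-3) -> list:
    """Greedily add the node whose selection minimizes the total posterior variance."""
    selected = []
    for _ in range(n_select):
        candidates = [v for v in range(K.shape[0]) if v not in selected]
        best = min(candidates,
                   key=lambda v: total_posterior_variance(K, selected + [v], sigma2))
        selected.append(best)
    return selected

if __name__ == "__main__":
    # Toy example: path graph on 5 nodes.
    A = np.zeros((5, 5))
    for i in range(4):
        A[i, i + 1] = A[i + 1, i] = 1.0
    K = diffusion_kernel(A, t=0.5)
    print(greedy_influencers(K, n_select=2))
```
A data-driven kernel, as suggested in the abstract, would simply replace diffusion_kernel; the cost-efficient update strategy mentioned there would replace the repeated inversion of K_ss with incremental (e.g., rank-one Cholesky) updates rather than Monte-Carlo simulation.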
Related papers
- Supervised Score-Based Modeling by Gradient Boosting [49.556736252628745]
We propose a Supervised Score-based Model (SSM) which can be viewed as a gradient boosting algorithm combining score matching.
We provide a theoretical analysis of learning and sampling for SSM to balance inference time and prediction accuracy.
Our model outperforms existing models in both accuracy and inference time.
arXiv Detail & Related papers (2024-11-02T07:06:53Z)
- Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference (SBI) is capable of approximating the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
arXiv Detail & Related papers (2024-04-11T09:23:36Z)
- Towards Learning Stochastic Population Models by Gradient Descent [0.0]
We show that simultaneous estimation of parameters and structure poses major challenges for optimization procedures.
We demonstrate accurate estimation of models but find that enforcing the inference of parsimonious, interpretable models drastically increases the difficulty.
arXiv Detail & Related papers (2024-04-10T14:38:58Z)
- Sparse Graphical Linear Dynamical Systems [1.6635799895254402]
Time-series datasets are central in machine learning with applications in numerous fields of science and engineering.
This work proposes a novel approach to bridge the gap by introducing a joint graphical modeling framework.
We present DGLASSO, a new inference method within this framework that implements an efficient block alternating majorization-minimization algorithm.
arXiv Detail & Related papers (2023-07-06T14:10:02Z)
- Interpretable and Scalable Graphical Models for Complex Spatio-temporal Processes [3.469001874498102]
This thesis focuses on data with complex spatio-temporal structure and on probabilistic graphical models that learn this structure in an interpretable and scalable manner.
Practical applications of the methodology are considered using real datasets.
This includes brain-connectivity analysis using data, space weather forecasting using solar imaging data, longitudinal analysis of public opinions using Twitter data, and mining of mental health related issues using TalkLife data.
arXiv Detail & Related papers (2023-01-15T05:39:30Z)
- When to Update Your Model: Constrained Model-based Reinforcement Learning [50.74369835934703]
We propose a novel and general theoretical scheme for a non-decreasing performance guarantee of model-based RL (MBRL).
Our follow-up derived bounds reveal the relationship between model shifts and performance improvement.
A further example demonstrates that learning models from a dynamically-varying number of explorations benefit the eventual returns.
arXiv Detail & Related papers (2022-10-15T17:57:43Z)
- On the Influence of Enforcing Model Identifiability on Learning dynamics of Gaussian Mixture Models [14.759688428864159]
We propose a technique for extracting submodels from singular models.
Our method enforces model identifiability during training.
We show how the method can be applied to more complex models like deep neural networks.
arXiv Detail & Related papers (2022-06-17T07:50:22Z)
- Mixed Effects Neural ODE: A Variational Approximation for Analyzing the Dynamics of Panel Data [50.23363975709122]
We propose a probabilistic model called ME-NODE to incorporate (fixed + random) mixed effects for analyzing panel data.
We show that our model can be derived using smooth approximations of SDEs provided by the Wong-Zakai theorem.
We then derive Evidence Based Lower Bounds for ME-NODE, and develop (efficient) training algorithms.
arXiv Detail & Related papers (2022-02-18T22:41:51Z)
- Score-based Generative Modeling of Graphs via the System of Stochastic Differential Equations [57.15855198512551]
We propose a novel score-based generative model for graphs with a continuous-time framework.
We show that our method is able to generate molecules that lie close to the training distribution yet do not violate the chemical valency rule.
arXiv Detail & Related papers (2022-02-05T08:21:04Z)
- Adversarial Stein Training for Graph Energy Models [11.182263394122142]
We use an energy-based model (EBM) based on multi-channel graph neural networks (GNN) to learn permutation invariant unnormalized density functions on graphs.
We find that this approach achieves competitive results on graph generation compared to benchmark models.
arXiv Detail & Related papers (2021-08-30T03:55:18Z)
- Model-agnostic multi-objective approach for the evolutionary discovery of mathematical models [55.41644538483948]
In modern data science, it is often more interesting to understand the properties of a model and to identify which of its parts could be replaced to obtain better results.
We use multi-objective evolutionary optimization for composite data-driven model learning to obtain the algorithm's desired properties.
arXiv Detail & Related papers (2021-07-07T11:17:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.