Backward and Forward Inference in Interacting Independent-Cascade
Processes: A Scalable and Convergent Message-Passing Approach
- URL: http://arxiv.org/abs/2310.19138v1
- Date: Sun, 29 Oct 2023 20:03:38 GMT
- Title: Backward and Forward Inference in Interacting Independent-Cascade
Processes: A Scalable and Convergent Message-Passing Approach
- Authors: Nouman Khan, Kangle Mu, Mehrdad Moharrami, Vijay Subramanian
- Abstract summary: We study the problems of estimating the past and future evolutions of two diffusion processes that spread concurrently on a network.
We derive the exact joint probability of the initial state of the network and the observation-snapshot $\mathcal{O}_n$.
- Score: 1.1470070927586018
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the problems of estimating the past and future evolutions of two
diffusion processes that spread concurrently on a network. Specifically, given
a known network $G=(V, \overrightarrow{E})$ and a (possibly noisy) snapshot
$\mathcal{O}_n$ of its state taken at (a possibly unknown) time $W$, we wish to
determine the posterior distributions of the initial state of the network and
the infection times of its nodes. These distributions are useful for finding the
source nodes of epidemics and rumors ($\textit{backward inference}$), and for
estimating the spread of a fixed set of source nodes ($\textit{forward
inference}$).
To model the interaction between the two processes, we study an extension of
the independent-cascade (IC) model where, when a node gets infected with either
process, its susceptibility to the other one changes. First, we derive the
exact joint probability of the initial state of the network and the
observation-snapshot $\mathcal{O}_n$. Then, using the machinery of
factor-graphs, factor-graph transformations, and the generalized
distributive-law, we derive a Belief-Propagation (BP) based algorithm that is
scalable to large networks and can converge on graphs of arbitrary topology (at
a likely expense in approximation accuracy).
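As a concrete illustration of the model (a minimal Monte Carlo sketch of my own, not the authors' BP algorithm; the probability `p` and interaction factor `boost` are assumed names), two cascades spread in discrete rounds and a node already infected by one process has its susceptibility to the other rescaled. Averaging many such runs gives a brute-force baseline for forward inference, which the paper's message-passing algorithm is designed to replace:

```python
import random

def simulate_interacting_ic(nodes, edges, p, boost, seeds_a, seeds_b, steps, rng):
    """One Monte Carlo run of two interacting independent cascades.

    edges: dict u -> list of out-neighbours (directed edge (u, v))
    p:     baseline per-edge transmission probability for both processes
    boost: assumed factor rescaling a node's susceptibility to one process
           once the other process has already infected it
    """
    state = {v: set() for v in nodes}               # processes that infected v
    newly = {"A": set(seeds_a), "B": set(seeds_b)}
    for proc in ("A", "B"):
        for v in newly[proc]:
            state[v].add(proc)
    for _ in range(steps):
        nxt = {"A": set(), "B": set()}
        for proc, other in (("A", "B"), ("B", "A")):
            for u in newly[proc]:                   # each infection attempt
                for v in edges.get(u, []):          # happens exactly once, as in IC
                    if proc in state[v]:
                        continue
                    q = p * (boost if other in state[v] else 1.0)
                    if rng.random() < min(q, 1.0):
                        nxt[proc].add(v)
        for proc in ("A", "B"):
            for v in nxt[proc]:
                state[v].add(proc)
        newly = nxt
        if not (newly["A"] or newly["B"]):          # both cascades died out
            break
    return state

rng = random.Random(0)
edges = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(simulate_interacting_ic(range(4), edges, p=0.4, boost=0.5,
                              seeds_a=[0], seeds_b=[2], steps=10, rng=rng))
```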
Related papers
- Joint Bayesian Inference of Graphical Structure and Parameters with a
Single Generative Flow Network [59.79008107609297]
We propose to approximate the joint posterior over the structure and parameters of a Bayesian Network.
We use a single GFlowNet whose sampling policy follows a two-phase process.
Since the parameters are included in the posterior distribution, this leaves more flexibility for the local probability models.
arXiv Detail & Related papers (2023-05-30T19:16:44Z)
- Online Centralized Non-parametric Change-point Detection via Graph-based
Likelihood-ratio Estimation [77.81487285123147]
Consider each node of a graph to be generating a data stream that is synchronized and observed in near real-time.
At a change-point $\tau$, a change occurs at a subset of nodes $C$, which affects the probability distribution of their associated node streams.
We propose a novel kernel-based method to both detect $\tau$ and localize $C$, based on direct estimation of the likelihood-ratio between the post-change and pre-change distributions of the node streams.
arXiv Detail & Related papers (2023-01-08T10:15:24Z)
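For intuition about the likelihood-ratio idea, here is a classical CUSUM detector (a textbook baseline, not the paper's non-parametric kernel estimator) that uses the exact pre/post-change log-likelihood ratio for a known Gaussian mean shift; running it per node stream detects $\tau$ and localizes the affected subset $C$. All parameter values are illustrative assumptions:

```python
import numpy as np

def cusum(stream, mu0=0.0, mu1=1.0, sigma=1.0, thresh=8.0):
    """First alarm time of CUSUM run on the exact Gaussian log-likelihood ratio."""
    s = 0.0
    for t, x in enumerate(stream):
        llr = (x - (mu0 + mu1) / 2.0) * (mu1 - mu0) / sigma**2
        s = max(0.0, s + llr)              # one-sided reset keeps s non-negative
        if s > thresh:
            return t
    return None

rng = np.random.default_rng(0)
tau, n_nodes = 100, 5
streams = rng.standard_normal((n_nodes, 200))
streams[:2, tau:] += 1.0                   # the change only hits subset C = {0, 1}
print([cusum(s) for s in streams])         # early alarms reveal tau and localize C
```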
- Generative Adversarial Learning of Sinkhorn Algorithm Initializations [0.0]
We show that meticulously training a neural network to learn initializations to the algorithm via the entropic OT dual problem can significantly speed up convergence.
We show that our network can even be used as a standalone OT solver to approximate regularized transport distances to a few percent error.
arXiv Detail & Related papers (2022-11-30T21:56:09Z)
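For context, a minimal log-domain Sinkhorn solver (standard entropic-OT updates; the warm-start potential `f0` is my stand-in for the network's predicted initialization, and the perturbed-optimum usage below only mimics such a prediction):

```python
import numpy as np
from scipy.special import logsumexp

def sinkhorn(a, b, C, eps, f0=None, n_iter=10000, tol=1e-8):
    """Log-domain Sinkhorn; returns dual potentials and iterations used."""
    f = np.zeros_like(a) if f0 is None else f0.copy()
    g = np.zeros_like(b)
    for it in range(1, n_iter + 1):
        g = eps * np.log(b) - eps * logsumexp((f[:, None] - C) / eps, axis=0)
        f_new = eps * np.log(a) - eps * logsumexp((g[None, :] - C) / eps, axis=1)
        if np.max(np.abs(f_new - f)) < tol:
            return f_new, g, it
        f = f_new
    return f, g, n_iter

rng = np.random.default_rng(0)
n = 50
a = b = np.full(n, 1.0 / n)
C = rng.random((n, n))
f_star, _, cold = sinkhorn(a, b, C, eps=0.1)
# a perturbed optimum stands in for a network-predicted initialization
_, _, warm = sinkhorn(a, b, C, eps=0.1,
                      f0=f_star + 1e-3 * rng.standard_normal(n))
print(cold, warm)   # the warm start typically converges in far fewer iterations
```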
- On the Effective Number of Linear Regions in Shallow Univariate ReLU
Networks: Convergence Guarantees and Implicit Bias [50.84569563188485]
We show that gradient flow converges in direction when labels are determined by the sign of a target network with $r$ neurons.
Our result may already hold for mild over-parameterization, where the width is $\tilde{\mathcal{O}}(r)$ and independent of the sample size.
arXiv Detail & Related papers (2022-05-18T16:57:10Z)
- Distributed gradient-based optimization in the presence of dependent
aperiodic communication [4.34720256795424]
Iterative distributed optimization algorithms involve multiple agents that communicate with each other, over time, in order to minimize/maximize a global objective.
In the presence of unreliable communication networks, the Age-of-Information (AoI), which measures the freshness of data received, may be large and hence hinder algorithmic convergence.
We show that convergence is guaranteed provided the random variables associated with the AoI processes are stochastically dominated by a random variable with finite first moment.
arXiv Detail & Related papers (2022-01-27T06:44:04Z)
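A toy sketch (my construction, not the paper's algorithm) of why a finite-mean AoI matters: gradient steps computed from stale iterates, with geometrically distributed staleness, still drive a simple quadratic objective to its minimizer:

```python
import random

def grad(x):                       # gradient of f(x) = x**2 / 2, minimized at 0
    return x

rng = random.Random(1)
xs = [5.0]                         # keep the iterate history for stale reads
step = 0.1
for t in range(300):
    aoi = 0                        # geometric staleness: P(aoi = k) = 2**-(k+1),
    while rng.random() < 0.5:      # so the AoI has a finite first moment
        aoi += 1
    stale = xs[max(0, t - aoi)]    # gradient evaluated at an outdated iterate
    xs.append(xs[-1] - step * grad(stale))
print(f"x after 300 steps: {xs[-1]:.2e}")   # approaches the minimizer 0
```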
- Acceleration in Distributed Optimization Under Similarity [72.54787082152278]
We study distributed (strongly convex) optimization problems over a network of agents, with no centralized nodes.
An $\varepsilon$-solution is achieved in $\tilde{\mathcal{O}}\big(\sqrt{\frac{\beta/\mu}{1-\rho}}\,\log\frac{1}{\varepsilon}\big)$ communication steps.
This rate matches, for the first time and up to poly-log factors, the lower communication-complexity bounds for distributed gossip algorithms applied to the class of problems of interest.
arXiv Detail & Related papers (2021-10-24T04:03:00Z)
- Reconsidering Dependency Networks from an Information Geometry
Perspective [2.6778110563115542]
Dependency networks are potential probabilistic graphical models for systems comprising a large number of variables.
The structure of a dependency network is represented by a directed graph, and each node has a conditional probability table.
We show that the dependency network and the Bayesian network have roughly the same performance in terms of the accuracy of their learned distributions.
arXiv Detail & Related papers (2021-07-02T07:05:11Z)
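A toy dependency network over two binary variables (hand-made conditional probability tables, purely illustrative): each node stores a conditional distribution given the others, and an ordered pseudo-Gibbs sweep draws samples from the model:

```python
import random

# Hand-made conditional probability tables for two binary variables:
# P(x0 = 1 | x1) and P(x1 = 1 | x0).  Values are illustrative only.
p_x0_given_x1 = {0: 0.2, 1: 0.7}
p_x1_given_x0 = {0: 0.4, 1: 0.9}

def pseudo_gibbs(n_sweeps, rng):
    """Ordered pseudo-Gibbs sampling: resample each node from its table."""
    x0, x1 = 0, 0
    for _ in range(n_sweeps):
        x0 = int(rng.random() < p_x0_given_x1[x1])
        x1 = int(rng.random() < p_x1_given_x0[x0])
    return x0, x1

rng = random.Random(0)
samples = [pseudo_gibbs(50, rng) for _ in range(2000)]
freq = {s: samples.count(s) / len(samples)
        for s in [(0, 0), (0, 1), (1, 0), (1, 1)]}
print(freq)    # empirical joint distribution implied by the two tables
```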
- Probabilistic bounds on neuron death in deep rectifier networks [6.167486561517023]
Neuron death is a complex phenomenon with implications for model trainability.
In this work, we derive both upper and lower bounds on the probability that a ReLU network is initialized to a trainable point.
We show that it is possible to increase the depth of a network indefinitely, so long as the width increases as well.
arXiv Detail & Related papers (2020-07-13T05:15:04Z)
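A Monte Carlo sketch of the phenomenon behind these bounds (an empirical illustration of my own, not the paper's derivation): with Gaussian inputs and weights and no biases, the chance that some ReLU unit never activates on a batch of inputs grows with depth at fixed width:

```python
import numpy as np

def frac_nets_with_dead_unit(depth, width, n_nets=200, n_inputs=512, d_in=8, seed=0):
    """Fraction of random ReLU nets with a unit at 0 on every sampled input."""
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(n_nets):
        h = rng.standard_normal((n_inputs, d_in))
        for _ in range(depth):
            W = rng.standard_normal((h.shape[1], width)) / np.sqrt(h.shape[1])
            h = np.maximum(h @ W, 0.0)         # ReLU layer, no biases
            if (h == 0.0).all(axis=0).any():   # some unit never activates
                count += 1
                break
    return count / n_nets

for depth in (2, 8, 32):
    print(depth, frac_nets_with_dead_unit(depth, width=4))  # grows with depth
```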
- Neural Enhanced Belief Propagation on Factor Graphs [85.61562052281688]
A graphical model is a structured representation of locally dependent random variables.
We first extend graph neural networks to factor graphs (FG-GNN).
We then propose a new hybrid model that runs an FG-GNN conjointly with belief propagation.
arXiv Detail & Related papers (2020-03-04T11:03:07Z)
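For reference, the sum-product updates that the hybrid model refines look like this on a tiny chain factor graph (a textbook example, exact on trees; the FG-GNN correction itself is not reproduced here):

```python
import numpy as np

# Chain factor graph x0 -- f01 -- x1 -- f12 -- x2 over binary variables.
unary = [np.array([0.6, 0.4]), np.array([0.5, 0.5]), np.array([0.3, 0.7])]
pair = np.array([[0.9, 0.1],
                 [0.1, 0.9]])       # pairwise factor favouring agreement

m_01 = pair.T @ unary[0]            # message x0 -> x1 through factor f01
m_21 = pair @ unary[2]              # message x2 -> x1 through factor f12
belief = unary[1] * m_01 * m_21     # sum-product combination at x1
belief /= belief.sum()
print("P(x1):", belief)             # exact marginal, since the graph is a tree
```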
- Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximative framework for such non-trivial ERGs that results in dyadic-independence (i.e., edge-independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
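A minimal sketch of the dyadic-independence idea (I assume a beta-model-style form $p_{ij}=\sigma(\theta_i+\theta_j)$ for illustration; the paper's block approximation is not reproduced): once edges are independent, sampling and likelihood evaluation decompose edge by edge. This dense sampler is $O(n^2)$; a sparse implementation would avoid materializing the full probability matrix:

```python
import numpy as np

def sample_dyadic_graph(theta, rng):
    """Undirected graph with independent edges: P(ij) = sigmoid(theta_i + theta_j)."""
    n = len(theta)
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] + theta[None, :])))
    upper = np.triu(rng.random((n, n)) < p, k=1)   # one Bernoulli draw per dyad
    return upper | upper.T

rng = np.random.default_rng(0)
A = sample_dyadic_graph(np.full(200, -3.0), rng)
print(A.sum() // 2, "edges among", A.shape[0], "nodes")
```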
This list is automatically generated from the titles and abstracts of the papers on this site.