Diffusion Source Identification on Networks with Statistical Confidence
- URL: http://arxiv.org/abs/2106.04800v1
- Date: Wed, 9 Jun 2021 04:21:03 GMT
- Title: Diffusion Source Identification on Networks with Statistical Confidence
- Authors: Quinlan Dawkins, Tianxi Li, Haifeng Xu
- Abstract summary: We introduce a statistical framework for the study of diffusion source identification and develop a confidence set inference approach inspired by hypothesis testing.
Our method efficiently produces a small subset of nodes, which provably covers the source node with any pre-specified confidence level.
This is the first diffusion source identification method with a practically useful theoretical guarantee on general networks.
- Score: 12.805268849262246
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion source identification on networks is a problem of fundamental
importance in a broad class of applications, including rumor controlling and
virus identification. Though this problem has received significant recent
attention, most studies have focused only on very restrictive settings and lack
theoretical guarantees for more realistic networks. We introduce a statistical
framework for the study of diffusion source identification and develop a
confidence set inference approach inspired by hypothesis testing. Our method
efficiently produces a small subset of nodes, which provably covers the source
node with any pre-specified confidence level without restrictive assumptions on
network structures. Moreover, we propose multiple Monte Carlo strategies for
the inference procedure, based on network topology and probabilistic
properties, that significantly improve scalability. To our knowledge, this
is the first diffusion source identification method with a practically useful
theoretical guarantee on general networks. We demonstrate our approach via
extensive synthetic experiments on well-known random network models and a
mobility network between cities concerning the COVID-19 spreading.
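The abstract describes producing a confidence set of candidate sources by inverting Monte Carlo hypothesis tests: for each candidate node, simulate diffusions under the hypothesis that it is the source, and retain it if the observed snapshot is not rejected at level alpha. The sketch below is an illustrative assumption, not the paper's exact procedure: the function names (`si_simulate`, `confidence_set`), the SI diffusion parameters, and the distance-to-mean-infection-profile test statistic are all hypothetical stand-ins for the statistic developed in the paper.

```python
# Minimal sketch of confidence-set source identification via Monte Carlo
# hypothesis testing. All names and the test statistic are illustrative
# assumptions, not the paper's method.
import random

def si_simulate(adj, source, p=0.4, steps=4, rng=None):
    """Simulate a discrete-time SI diffusion; return the infected node set."""
    rng = rng or random.Random()
    infected = {source}
    for _ in range(steps):
        new = set()
        for u in infected:
            for v in adj[u]:
                if v not in infected and rng.random() < p:
                    new.add(v)
        infected |= new
    return infected

def confidence_set(adj, observed, alpha=0.1, B=200, seed=0):
    """Keep every candidate source whose Monte Carlo p-value is >= alpha."""
    rng = random.Random(seed)
    conf = set()
    for s in observed:  # under SI the source must itself be infected
        sims = [si_simulate(adj, s, rng=rng) for _ in range(B)]
        # mean infection indicator per node: a reference profile for s
        freq = {v: sum(v in y for y in sims) / B for v in adj}
        def stat(y):  # L1 distance between an outcome and the profile
            return sum(abs((v in y) - freq[v]) for v in adj)
        t_obs = stat(observed)
        null = [si_simulate(adj, s, rng=rng) for _ in range(B)]
        pval = (1 + sum(stat(y) >= t_obs for y in null)) / (B + 1)
        if pval >= alpha:
            conf.add(s)
    return conf

# Toy network: a ring of 8 nodes with one chord, observed snapshot from node 2.
adj = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
adj[0].append(4); adj[4].append(0)
observed = si_simulate(adj, source=2, rng=random.Random(42))
cset = confidence_set(adj, observed)
```

By construction the returned set covers the true source whenever the source's own test is not rejected, which is what gives the pre-specified coverage level; the paper's contribution lies in choosing a statistic and Monte Carlo scheme that make this both valid and scalable on general networks.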
Related papers
- An Analytic Solution to Covariance Propagation in Neural Networks [10.013553984400488]
This paper presents a sample-free moment propagation technique to accurately characterize the input-output distributions of neural networks.
A key enabler of our technique is an analytic solution for the covariance of random variables passed through nonlinear activation functions.
The wide applicability and merits of the proposed technique are shown in experiments analyzing the input-output distributions of trained neural networks and training Bayesian neural networks.
arXiv Detail & Related papers (2024-03-24T14:08:24Z) - Imitation-regularized Optimal Transport on Networks: Provable Robustness and Application to Logistics Planning [4.943443725022745]
The imitation-regularized optimal transport (I-OT) solution demonstrated robustness with respect to the cost defined on the network.
We also examined the imitation and a priori risk-information scenarios to demonstrate the usefulness and implications of the proposed method.
arXiv Detail & Related papers (2024-02-28T01:19:42Z) - Tractable Function-Space Variational Inference in Bayesian Neural
Networks [72.97620734290139]
A popular approach for estimating the predictive uncertainty of neural networks is to define a prior distribution over the network parameters.
We propose a scalable function-space variational inference method that allows incorporating prior information.
We show that the proposed method leads to state-of-the-art uncertainty estimation and predictive performance on a range of prediction tasks.
arXiv Detail & Related papers (2023-12-28T18:33:26Z) - Distributionally Robust Statistical Verification with Imprecise Neural
Networks [4.094049541486327]
A particularly challenging problem in AI safety is providing guarantees on the behavior of high-dimensional autonomous systems.
This paper proposes a novel approach based on a combination of active learning, uncertainty quantification, and neural network verification.
arXiv Detail & Related papers (2023-08-28T18:06:24Z) - Networked Communication for Decentralised Agents in Mean-Field Games [59.01527054553122]
We introduce networked communication to the mean-field game framework.
We prove that our architecture has sample guarantees bounded between those of the centralised- and independent-learning cases.
arXiv Detail & Related papers (2023-06-05T10:45:39Z) - Uncertainty Estimation by Fisher Information-based Evidential Deep Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce Fisher Information Matrix (FIM) to measure the informativeness of evidence carried by each sample, according to which we can dynamically reweight the objective loss terms to make the network more focused on the representation learning of uncertain classes.
arXiv Detail & Related papers (2023-03-03T16:12:59Z) - Multivariate Deep Evidential Regression [77.34726150561087]
A new approach with uncertainty-aware neural networks shows promise over traditional deterministic methods.
We discuss three issues with a proposed solution to extract aleatoric and epistemic uncertainties from regression-based neural networks.
arXiv Detail & Related papers (2021-04-13T12:20:18Z) - Network Diffusions via Neural Mean-Field Dynamics [52.091487866968286]
We propose a novel learning framework for inference and estimation problems of diffusion on networks.
Our framework is derived from the Mori-Zwanzig formalism to obtain an exact evolution of the node infection probabilities.
Our approach is versatile and robust to variations of the underlying diffusion network models.
arXiv Detail & Related papers (2020-06-16T18:45:20Z) - Edgeworth expansions for network moments [20.058158445038824]
We present the first higher-order accurate approximation to the sampling CDF of a studentized network moment by Edgeworth expansion.
For sparse networks, our theory shows that a simple normal approximation achieves a gradually depreciating Berry-Esseen bound as the network becomes sparser.
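The entry above concerns Edgeworth expansions, which sharpen a plain normal approximation to a sampling CDF by adding a skewness-dependent correction term. The following is a generic one-term illustration for a standardized sample mean, not the paper's studentized network-moment version; the function names and parameter choices are assumptions for this sketch.

```python
# Hypothetical illustration of a one-term Edgeworth correction for a
# standardized sample mean. The paper's result is the analogous
# higher-order expansion for studentized network moments.
import math

def norm_pdf(x):
    """Standard normal density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def edgeworth_cdf(x, n, skew):
    """One-term Edgeworth approximation to P(sqrt(n)*(mean - mu)/sigma <= x)."""
    return norm_cdf(x) - norm_pdf(x) * skew * (x * x - 1) / (6 * math.sqrt(n))

# Exponential(1) data have skewness 2; at n=10 and x=0.5 the correction
# shifts the plain normal approximation toward the true skewed CDF.
approx = edgeworth_cdf(0.5, n=10, skew=2.0)
plain = norm_cdf(0.5)
```

The correction term vanishes at rate n^{-1/2}, which is why the expansion improves on the central limit theorem's first-order normal approximation for moderate sample sizes.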
arXiv Detail & Related papers (2020-04-14T16:02:26Z) - Distribution Approximation and Statistical Estimation Guarantees of Generative Adversarial Networks [82.61546580149427]
Generative Adversarial Networks (GANs) have achieved a great success in unsupervised learning.
This paper provides approximation and statistical guarantees of GANs for the estimation of data distributions with densities in a Hölder space.
arXiv Detail & Related papers (2020-02-10T16:47:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.