An Unbiased Symmetric Matrix Estimator for Topology Inference under
Partial Observability
- URL: http://arxiv.org/abs/2203.15500v1
- Date: Tue, 29 Mar 2022 12:49:25 GMT
- Title: An Unbiased Symmetric Matrix Estimator for Topology Inference under
Partial Observability
- Authors: Yupeng Chen and Zhiguo Wang and Xiaojing Shen
- Abstract summary: This letter considers the problem of network topology inference under the framework of partial observability.
We propose a novel unbiased estimator for the symmetric network topology with Gaussian noise and the Laplacian combination rule.
An effective algorithm, called the network inference Gauss algorithm, is developed to infer the network structure.
- Score: 16.60607849384252
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Network topology inference is a fundamental problem in many applications of
network science, such as locating the source of fake news, detecting brain
connectivity networks, etc. Many real-world situations suffer from the critical
problem that only a limited subset of the observations is available. This letter
considers the problem of network topology inference under the framework of
partial observability. Based on the vector autoregressive model, we propose a
novel unbiased estimator for the symmetric network topology with Gaussian
noise and the Laplacian combination rule. Theoretically, we prove that it
converges to the network combination matrix in probability. Furthermore, by
utilizing the Gaussian mixture model algorithm, an effective algorithm called
the network inference Gauss algorithm is developed to infer the network
structure. Finally, numerical experiments demonstrate that, compared with
state-of-the-art methods, the proposed algorithm enjoys better performance in
the case of small sample sizes.
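As a rough, hedged illustration of the setting (all symbols, sizes, and steps below are assumptions for exposition, not the paper's actual estimator): a Laplacian-rule combination matrix drives a first-order VAR with Gaussian noise, only a subset of the nodes is observed, a naive moment-based estimate of the observed block is symmetrized, and its off-diagonal entries are clustered into edge/no-edge groups (a simple two-means loop standing in for the Gaussian mixture step).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not values from the paper):
N, m, T = 20, 12, 4000            # total nodes, observed nodes, samples

# Symmetric Erdos-Renyi adjacency and a Laplacian-rule combination matrix
# A = I - rho * L, scaled by 0.95 so the VAR recursion is stable.
G = np.triu((rng.random((N, N)) < 0.3).astype(float), 1)
G = G + G.T
deg = G.sum(axis=1)
rho = 1.0 / (deg.max() + 1.0)
A = 0.95 * (rho * G + np.diag(1.0 - rho * deg))

# First-order VAR with i.i.d. standard Gaussian noise: y_t = A y_{t-1} + x_t.
# Accumulate zero-lag (R0) and one-lag (R1) covariances of the observed block.
y = np.zeros(N)
R0 = np.zeros((m, m))
R1 = np.zeros((m, m))
for _ in range(T):
    y_new = A @ y + rng.standard_normal(N)
    R1 += np.outer(y_new[:m], y[:m])
    R0 += np.outer(y[:m], y[:m])
    y = y_new

# Naive Granger-style estimate of the observed block, symmetrized.
# Under partial observability this plain estimator is biased by the hidden
# nodes; constructing an *unbiased* replacement is the paper's contribution.
B = R1 @ np.linalg.inv(R0)
A_hat = 0.5 * (B + B.T)

# Cluster the off-diagonal entries into "edge" / "no edge" groups
# (a two-means loop standing in for the Gaussian mixture step).
vals = A_hat[np.triu_indices(m, 1)]
c0, c1 = vals.min(), vals.max()
for _ in range(50):
    lab = np.abs(vals - c1) < np.abs(vals - c0)
    c0, c1 = vals[~lab].mean(), vals[lab].mean()
edges = lab                       # True -> entry classified as an edge
```

The naive estimate above retains a hidden-node bias term; the letter's estimator is designed precisely to remove that bias before the clustering step.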
Related papers
- Variational Inference: Posterior Threshold Improves Network Clustering Accuracy in Sparse Regimes [2.5782420501870296]
This paper proposes a simple way to improve the variational inference method by hard thresholding the posterior of the community assignment after each iteration.
We show that the proposed method converges and can accurately recover the true community labels, even when the average node degree of the network is bounded.
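The thresholding step described above can be sketched as follows (a minimal illustration; `tau` and the function name are invented for exposition):

```python
import numpy as np

# A hypothetical sketch of the thresholding step: tau[i, k] is the variational
# posterior probability that node i belongs to community k; after each update,
# each row is projected onto the nearest one-hot vector.
def hard_threshold(tau):
    out = np.zeros_like(tau)
    out[np.arange(tau.shape[0]), tau.argmax(axis=1)] = 1.0
    return out

tau = np.array([[0.60, 0.40],
                [0.20, 0.80],
                [0.55, 0.45]])
labels = hard_threshold(tau)      # each row becomes a hard community label
```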
arXiv Detail & Related papers (2023-01-12T00:24:54Z)
- Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
arXiv Detail & Related papers (2022-08-08T03:13:24Z)
- Random Feature Approximation for Online Nonlinear Graph Topology Identification [7.992550355579789]
We propose a kernel-based algorithm for graph topology estimation.
We exploit the fact that real-world networks often exhibit sparse topologies.
The experiments conducted on real and synthetic data show that the proposed method outperforms its competitors.
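The random-feature idea behind such online kernel methods can be sketched as follows (a generic random Fourier feature map for the Gaussian kernel, not the paper's specific algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)

# Random Fourier features: z(x)·z(y) approximates the Gaussian kernel
# exp(-||x - y||^2 / (2 sigma^2)), so a kernel-based topology estimator can
# work with a fixed-size feature map instead of a growing Gram matrix.
def rff_map(X, D, sigma=1.0):
    d = X.shape[1]
    W = rng.standard_normal((d, D)) / sigma       # frequencies ~ N(0, I/sigma^2)
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)     # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

X = rng.standard_normal((5, 3))
Z = rff_map(X, D=5000)
K_approx = Z @ Z.T                                # feature-space inner products
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K_true = np.exp(-sq / 2.0)                        # exact Gaussian kernel
```

With `D = 5000` features the approximation error per kernel entry is on the order of `1/sqrt(D)`, which is what makes a fixed-cost online update feasible.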
arXiv Detail & Related papers (2021-10-19T12:48:12Z)
- Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks [79.74580058178594]
We analyze the performance of training a pruned neural network by analyzing the geometric structure of the objective function.
We show that the convex region near a desirable model with guaranteed generalization enlarges as the neural network model is pruned.
arXiv Detail & Related papers (2021-10-12T01:11:07Z)
- The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU network with standard Gaussian weights and uniformly distributed biases can solve the separation problem with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
arXiv Detail & Related papers (2021-07-31T10:25:26Z)
- A scalable multi-step least squares method for network identification with unknown disturbance topology [0.0]
We present an identification method for dynamic networks with known network topology.
We use a multi-step Sequential and Null Space Fitting method to deal with reduced rank noise.
We provide a consistency proof that includes explicit data informativity conditions for the Box model structure.
arXiv Detail & Related papers (2021-06-14T16:12:49Z)
- Joint Network Topology Inference via Structured Fusion Regularization [70.30364652829164]
Joint network topology inference represents a canonical problem of learning multiple graph Laplacian matrices from heterogeneous graph signals.
We propose a general graph estimator based on a novel structured fusion regularization.
We show that the proposed graph estimator enjoys both high computational efficiency and rigorous theoretical guarantee.
arXiv Detail & Related papers (2021-03-05T04:42:32Z)
- LocalDrop: A Hybrid Regularization for Deep Neural Networks [98.30782118441158]
We propose a new approach, called LocalDrop, for regularizing neural networks via the local Rademacher complexity.
A new regularization function for both fully-connected networks (FCNs) and convolutional neural networks (CNNs) has been developed based on the proposed upper bound of the local Rademacher complexity.
arXiv Detail & Related papers (2021-03-01T03:10:11Z)
- Compressive Sensing and Neural Networks from a Statistical Learning Perspective [4.561032960211816]
We present a generalization error analysis for a class of neural networks suitable for sparse reconstruction from few linear measurements.
Under realistic conditions, the generalization error scales only logarithmically in the number of layers, and at most linearly in the number of measurements.
arXiv Detail & Related papers (2020-10-29T15:05:43Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed stochastic algorithms for large-scale AUC maximization with a deep neural network as the predictive model.
Our method requires far fewer communication rounds in theory.
Our experiments on several datasets demonstrate the effectiveness of our method and confirm the theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
- Tractable Approximate Gaussian Inference for Bayesian Neural Networks [1.933681537640272]
We propose an analytical method for performing tractable approximate Gaussian inference (TAGI) in Bayesian neural networks.
The method has a computational complexity of $\mathcal{O}(n)$ with respect to the number of parameters $n$. Tests performed on regression and classification benchmarks confirm that, for the same network architecture, it matches the performance of existing methods relying on gradient backpropagation.
arXiv Detail & Related papers (2020-04-20T13:37:08Z)
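The flavor of such analytical (backpropagation-free) Gaussian inference can be sketched for a single linear layer; the function below is an illustrative moment-propagation rule, not the actual TAGI update equations:

```python
import numpy as np

# Closed-form moment propagation through one linear layer z = W a + b with
# independent Gaussian weights, biases, and inputs. Each output keeps only a
# mean and a variance, so the cost stays linear in the number of parameters.
def linear_gaussian_forward(mu_w, var_w, mu_a, var_a, mu_b, var_b):
    mu_z = mu_w @ mu_a + mu_b
    # Var(w a) = var_w * (mu_a^2 + var_a) + mu_w^2 * var_a for independent w, a
    var_z = var_w @ (mu_a**2 + var_a) + (mu_w**2) @ var_a + var_b
    return mu_z, var_z

# Scalar example: w ~ N(2, 1), a ~ N(3, 4), b ~ N(0, 1).
mu_z, var_z = linear_gaussian_forward(
    mu_w=np.array([[2.0]]), var_w=np.array([[1.0]]),
    mu_a=np.array([3.0]), var_a=np.array([4.0]),
    mu_b=np.array([0.0]), var_b=np.array([1.0]),
)
```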
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.