An Unbiased Symmetric Matrix Estimator for Topology Inference under
Partial Observability
- URL: http://arxiv.org/abs/2203.15500v1
- Date: Tue, 29 Mar 2022 12:49:25 GMT
- Title: An Unbiased Symmetric Matrix Estimator for Topology Inference under
Partial Observability
- Authors: Yupeng Chen and Zhiguo Wang and Xiaojing Shen
- Abstract summary: This letter considers the problem of network topology inference under the framework of partial observability.
We propose a novel unbiased estimator of the symmetric network topology under Gaussian noise and the Laplacian combination rule.
An effective algorithm, called the network inference Gauss algorithm, is developed to infer the network structure.
- Score: 16.60607849384252
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Network topology inference is a fundamental problem in many
applications of network science, such as locating the source of fake news and
detecting brain connectivity networks. Many real-world situations suffer from
a critical limitation: only a limited part of the observations is available.
This letter considers the problem of network topology inference under the
framework of partial observability. Based on the vector autoregressive model,
we propose a novel unbiased estimator of the symmetric network topology under
Gaussian noise and the Laplacian combination rule. Theoretically, we prove
that it converges to the network combination matrix in probability.
Furthermore, by utilizing the Gaussian mixture model algorithm, we develop an
effective algorithm, called the network inference Gauss algorithm, to infer
the network structure. Finally, numerical experiments demonstrate that the
proposed algorithm outperforms state-of-the-art methods in the case of small
sample sizes.
Related papers
- A neural network approach for solving the Monge-Ampère equation with transport boundary condition [0.0]
This paper introduces a novel neural network-based approach to solving the Monge-Ampère equation with the transport boundary condition.
We leverage multilayer perceptron networks to learn approximate solutions by minimizing a loss function that encompasses the equation's residual, boundary conditions, and convexity constraints.
arXiv Detail & Related papers (2024-10-25T11:54:00Z) - Network Topology Inference from Smooth Signals Under Partial Observability [17.491343739469627]
Inferring network topologies from smooth signals with partially observed nodes is a significant problem in data science and engineering.
We propose a first-order algorithmic framework that includes two variants: one based on column sparsity regularization and the other on a low-rank constraint.
We establish theoretical convergence guarantees and demonstrate the linear convergence rate of our algorithms.
arXiv Detail & Related papers (2024-10-08T06:00:04Z) - Parallel-in-Time Solutions with Random Projection Neural Networks [0.07282584715927627]
This paper considers one of the fundamental parallel-in-time methods for the solution of ordinary differential equations, Parareal, and extends it by adopting a neural network as a coarse propagator.
We provide a theoretical analysis of the convergence properties of the proposed algorithm and show its effectiveness for several examples, including Lorenz and Burgers' equations.
arXiv Detail & Related papers (2024-08-19T07:32:41Z) - Robust Training and Verification of Implicit Neural Networks: A
Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
arXiv Detail & Related papers (2022-08-08T03:13:24Z) - Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity
on Pruned Neural Networks [79.74580058178594]
We characterize the performance of training a pruned neural network by analyzing the geometric structure of the objective function.
We show that the convex region near a desirable model with guaranteed generalization enlarges as the neural network model is pruned.
arXiv Detail & Related papers (2021-10-12T01:11:07Z) - The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU network with standard Gaussian weights and uniformly distributed biases can solve this separation problem with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
arXiv Detail & Related papers (2021-07-31T10:25:26Z) - A scalable multi-step least squares method for network identification
with unknown disturbance topology [0.0]
We present an identification method for dynamic networks with known network topology.
We use a multi-step Sequential and Null Space Fitting method to deal with reduced rank noise.
We provide a consistency proof that includes explicit conditions for data informativity under the Box model structure.
arXiv Detail & Related papers (2021-06-14T16:12:49Z) - Joint Network Topology Inference via Structured Fusion Regularization [70.30364652829164]
Joint network topology inference represents a canonical problem of learning multiple graph Laplacian matrices from heterogeneous graph signals.
We propose a general graph estimator based on a novel structured fusion regularization.
We show that the proposed graph estimator enjoys both high computational efficiency and rigorous theoretical guarantee.
arXiv Detail & Related papers (2021-03-05T04:42:32Z) - LocalDrop: A Hybrid Regularization for Deep Neural Networks [98.30782118441158]
We propose a new approach, called LocalDrop, for regularizing neural networks based on the local Rademacher complexity.
A new regularization function for both fully-connected networks (FCNs) and convolutional neural networks (CNNs) has been developed based on the proposed upper bound of the local Rademacher complexity.
arXiv Detail & Related papers (2021-03-01T03:10:11Z) - Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization for large-scale problems with a deep neural network as the predictive model.
Our method requires far fewer communication rounds in theory.
Experiments on several datasets demonstrate the effectiveness of our method and corroborate the theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z) - Tractable Approximate Gaussian Inference for Bayesian Neural Networks [1.933681537640272]
We propose an analytical method for performing tractable approximate Gaussian inference (TAGI) in Bayesian neural networks.
The method has a computational complexity of $\mathcal{O}(n)$ with respect to the number of parameters $n$, and tests performed on regression and classification benchmarks confirm that, for the same network architecture, it matches the performance of existing methods relying on gradient backpropagation.
arXiv Detail & Related papers (2020-04-20T13:37:08Z)