Learning the Structure of Large Networked Systems Obeying Conservation Laws
- URL: http://arxiv.org/abs/2206.07083v1
- Date: Tue, 14 Jun 2022 18:16:52 GMT
- Title: Learning the Structure of Large Networked Systems Obeying Conservation Laws
- Authors: Anirudh Rayas, Rajasekhar Anguluri, Gautam Dasarathy
- Abstract summary: Conservation laws in networked systems may be modeled as balance equations of the form $X = B^{*} Y$.
In several practical systems, the network structure is often unknown and needs to be estimated from data.
We propose a new $\ell_1$-regularized maximum likelihood estimator for this problem in the high-dimensional regime.
- Score: 5.86054250638667
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many networked systems such as electric networks, the brain, and social
networks of opinion dynamics are known to obey conservation laws. Examples of
this phenomenon include the Kirchhoff laws in electric networks and opinion
consensus in social networks. Conservation laws in networked systems may be
modeled as balance equations of the form $X = B^{*} Y$, where the sparsity
pattern of $B^{*}$ captures the connectivity of the network, and $Y, X \in
\mathbb{R}^p$ are vectors of "potentials" and "injected flows" at the nodes
respectively. The node potentials $Y$ cause flows across edges and the flows
$X$ injected at the nodes are extraneous to the network dynamics. In several
practical systems, the network structure is often unknown and needs to be
estimated from data. Towards this, one has access to samples of the node
potentials $Y$, but only the statistics of the node injections $X$. Motivated
by this important problem, we study the estimation of the sparsity structure of
the matrix $B^{*}$ from $n$ samples of $Y$ under the assumption that the node
injections $X$ follow a Gaussian distribution with a known covariance
$\Sigma_X$. We propose a new $\ell_{1}$-regularized maximum likelihood
estimator for this problem in the high-dimensional regime where the size of the
network $p$ is larger than sample size $n$. We show that this optimization
problem is convex in the objective and admits a unique solution. Under a new
mutual incoherence condition, we establish sufficient conditions on the triple
$(n,p,d)$ for which exact sparsity recovery of $B^{*}$ is possible with high
probability; $d$ is the degree of the graph. We also establish guarantees for
the recovery of $B^{*}$ in the element-wise maximum, Frobenius, and operator
norms. Finally, we complement these theoretical results with experimental
validation of the performance of the proposed estimator on synthetic and
real-world data.
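As a concrete illustration of the estimator described above, the following is a minimal sketch (not the authors' released code) of an $\ell_1$-regularized maximum likelihood estimator, under the simplifying assumptions that $B$ is symmetric positive definite and $\Sigma_X = I$. Under the model $X = B Y$ with $X \sim \mathcal{N}(0, I)$, the potentials satisfy $Y = B^{-1} X$, so the precision matrix of $Y$ is $\Theta(B) = B^{2}$ and $\log\det\Theta(B) = 2\log\det B$. The function name `estimate_B` and the parameter `lam` are illustrative, and CVXPY is used only as a convenient convex-optimization front end.

```python
import numpy as np
import cvxpy as cp  # generic convex-optimization front end; not from the paper


def estimate_B(Y, lam):
    """Hedged sketch of an l1-regularized MLE for B in X = B Y, X ~ N(0, I).

    Assumes B is symmetric positive semidefinite and Sigma_X = I, so the
    precision matrix of Y is Theta(B) = B @ B and log det Theta(B) = 2 log det B.
    Y is an (n, p) array holding n samples of the node potentials.
    """
    n, p = Y.shape
    S = (Y.T @ Y) / n  # sample covariance of the potentials Y

    # Symmetric PSD square root of S, so that tr(B S B) = ||S^{1/2} B||_F^2.
    evals, evecs = np.linalg.eigh(S)
    S_half = evecs @ np.diag(np.sqrt(np.clip(evals, 0.0, None))) @ evecs.T

    off_diag = 1.0 - np.eye(p)  # penalize only the off-diagonal entries of B

    B = cp.Variable((p, p), PSD=True)
    # Negative log-likelihood (up to constants) plus the l1 penalty.
    neg_log_lik = -2 * cp.log_det(B) + cp.sum_squares(S_half @ B)
    penalty = lam * cp.sum(cp.abs(cp.multiply(off_diag, B)))
    cp.Problem(cp.Minimize(neg_log_lik + penalty)).solve()
    return B.value
```

The sparsity pattern of the returned estimate (after thresholding small entries) is read off as the recovered network structure; the paper's theory concerns when this support matches that of $B^{*}$ with high probability for suitable $\lambda$, sample size $n$, network size $p$, and degree $d$.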
Related papers
- Deep Neural Networks: Multi-Classification and Universal Approximation [0.0]
We demonstrate that a ReLU deep neural network with a width of $2$ and a depth of $2N+4M-1$ layers can achieve finite sample memorization for any dataset comprising $N$ elements.
We also provide depth estimates for approximating $W^{1,p}$ functions and width estimates for approximating $L^p(\Omega;\mathbb{R}^m)$ functions for $m \geq 1$.
arXiv Detail & Related papers (2024-09-10T14:31:21Z) - Bayesian Inference with Deep Weakly Nonlinear Networks [57.95116787699412]
We show at a physics level of rigor that Bayesian inference with a fully connected neural network is solvable.
We provide techniques to compute the model evidence and posterior to arbitrary order in $1/N$ and at arbitrary temperature.
arXiv Detail & Related papers (2024-05-26T17:08:04Z) - Matching the Statistical Query Lower Bound for k-sparse Parity Problems with Stochastic Gradient Descent [83.85536329832722]
We show that stochastic gradient descent (SGD) can efficiently solve the $k$-sparse parity problem on a $d$-dimensional hypercube.
We then demonstrate how a neural network trained with SGD solves the $k$-sparse parity problem with small statistical error.
arXiv Detail & Related papers (2024-04-18T17:57:53Z) - Inferences on Mixing Probabilities and Ranking in Mixed-Membership
Models [5.992878098797828]
Network data is prevalent in numerous big data applications including economics and health networks.
In this paper, we model the network using the Degree-Corrected Mixed Membership (DCMM) model.
We derive a novel finite-sample expansion for the $\boldsymbol{\pi}_i(k)$'s, which allows us to obtain distributions and confidence intervals for the membership mixing probabilities and other related population quantities.
arXiv Detail & Related papers (2023-08-29T02:35:45Z) - Effective Minkowski Dimension of Deep Nonparametric Regression: Function
Approximation and Statistical Theories [70.90012822736988]
Existing theories on deep nonparametric regression have shown that when the input data lie on a low-dimensional manifold, deep neural networks can adapt to intrinsic data structures.
This paper introduces a relaxed assumption that the input data are concentrated around a subset of $\mathbb{R}^d$ denoted by $\mathcal{S}$, and that the intrinsic dimension of $\mathcal{S}$ can be characterized by a new complexity notion, the effective Minkowski dimension.
arXiv Detail & Related papers (2023-06-26T17:13:31Z) - Robust Training and Verification of Implicit Neural Networks: A
Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
arXiv Detail & Related papers (2022-08-08T03:13:24Z) - High-Dimensional Inference over Networks: Linear Convergence and
Statistical Guarantees [20.701475313495884]
We study sparse linear regression over a network of agents, modeled as an undirected graph with no server node.
We analyze the convergence rate and statistical guarantees of a distributed projected gradient tracking-based algorithm.
arXiv Detail & Related papers (2022-01-21T01:26:08Z) - Distributed Sparse Feature Selection in Communication-Restricted
Networks [6.9257380648471765]
We propose and theoretically analyze a new distributed scheme for sparse linear regression and feature selection.
In order to infer the causal dimensions from the whole dataset, we propose a simple, yet effective method for information sharing in the network.
arXiv Detail & Related papers (2021-11-02T05:02:24Z) - Deep neural network approximation of analytic functions [91.3755431537592]
We establish an entropy bound for the spaces of neural networks with piecewise linear activation functions.
We derive an oracle inequality for the expected error of the considered penalized deep neural network estimators.
arXiv Detail & Related papers (2021-04-05T18:02:04Z) - Impact of Community Structure on Consensus Machine Learning [0.17188280334580192]
We study consensus machine learning over networks drawn from block models.
We find that there exists a critical level of community structure at which $\tau_\epsilon$ reaches a lower bound.
arXiv Detail & Related papers (2020-11-02T21:41:35Z) - Learning Over-Parametrized Two-Layer ReLU Neural Networks beyond NTK [58.5766737343951]
We consider the dynamics of gradient descent for learning a two-layer neural network.
We show that an over-parametrized two-layer neural network can provably learn with gradient descent, achieving loss close to the ground truth, beyond the Neural Tangent Kernel (NTK) regime.
arXiv Detail & Related papers (2020-07-09T07:09:28Z)