Inferring networks from time series: a neural approach
- URL: http://arxiv.org/abs/2303.18059v3
- Date: Wed, 1 Nov 2023 11:15:32 GMT
- Title: Inferring networks from time series: a neural approach
- Authors: Thomas Gaskin, Grigorios A. Pavliotis, Mark Girolami
- Abstract summary: We present a powerful computational method to infer large network adjacency matrices from time series data using a neural network.
We demonstrate our capabilities by inferring line failure locations in the British power grid from its response to a power cut.
- Score: 3.115375810642661
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Network structures underlie the dynamics of many complex phenomena, from gene
regulation and food webs to power grids and social media. Yet, as they often
cannot be observed directly, their connectivities must be inferred from
observations of the dynamics to which they give rise. In this work we present a
powerful computational method to infer large network adjacency matrices from
time series data using a neural network, in order to provide uncertainty
quantification on the prediction in a manner that reflects both the degree to
which the inference problem is underdetermined and the noise on the data, a
feature that other approaches have hitherto lacked. We
demonstrate our method's capabilities by inferring line failure locations in
the British power grid from its response to a power cut, providing probability
densities on each edge and allowing the use of hypothesis testing to make
meaningful probabilistic statements about the location of the cut. Our method
is significantly more accurate than both Markov-chain Monte Carlo sampling and
least squares regression on noisy data and when the problem is underdetermined,
while naturally extending to the case of non-linear dynamics, which we
demonstrate by learning an entire cost matrix for a non-linear model of
economic activity in Greater London. Not having been specifically engineered
for network inference, this method in fact represents a general parameter
estimation scheme that is applicable to any high-dimensional parameter space.
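The inference problem the abstract describes can be illustrated with a toy version of the least-squares baseline the authors compare against: generate time series from known linear dynamics on a small network, then recover the adjacency matrix from the observations. The dynamics, network size, and noise levels below are illustrative assumptions, not the paper's actual setup, and this is the baseline method, not the neural approach itself.

```python
import numpy as np

# Toy illustration of network inference from time series: recover an
# adjacency matrix A from observations of the assumed linear dynamics
#   x_{t+1} = A x_t + noise.
# This sketches the least-squares baseline the paper compares against,
# not the neural method; all sizes and noise levels are illustrative.

rng = np.random.default_rng(0)
n, T = 5, 1000

# Sparse weighted "network": entries are 0 or 0.2 (kept small for stability)
A_true = (rng.random((n, n)) < 0.3) * 0.2

X = np.zeros((T, n))
X[0] = rng.normal(size=n)
for t in range(T - 1):
    X[t + 1] = A_true @ X[t] + 0.05 * rng.normal(size=n)

# Least squares: solve X[1:] ≈ X[:-1] @ A.T for A
A_hat = np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T

print("max entrywise error:", np.abs(A_hat - A_true).max())
```

A point estimate like this carries no uncertainty information; the paper's contribution is precisely a probability density over each entry of the adjacency matrix, which is what makes hypothesis tests on individual edges possible.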
Related papers
- A theory of data variability in Neural Network Bayesian inference [0.70224924046445]
We provide a field-theoretic formalism which covers the generalization properties of infinitely wide networks.
We derive the generalization properties from the statistical properties of the input.
We show that data variability leads to a non-Gaussian action reminiscent of a ($\varphi^3 + \varphi^4$)-theory.
arXiv Detail & Related papers (2023-07-31T14:11:32Z) - The Decimation Scheme for Symmetric Matrix Factorization [0.0]
Matrix factorization is an inference problem that has acquired importance due to its vast range of applications.
We study this extensive-rank problem, extending the alternative 'decimation' procedure that we recently introduced.
We introduce a simple algorithm based on a ground state search that implements decimation and performs matrix factorization.
arXiv Detail & Related papers (2023-07-31T10:53:45Z) - Learning Linear Causal Representations from Interventions under General
Nonlinear Mixing [52.66151568785088]
We prove strong identifiability results given unknown single-node interventions without access to the intervention targets.
This is the first instance of causal identifiability from non-paired interventions for deep neural network embeddings.
arXiv Detail & Related papers (2023-06-04T02:32:12Z) - On the ISS Property of the Gradient Flow for Single Hidden-Layer Neural
Networks with Linear Activations [0.0]
We investigate the effects of overfitting on the robustness of gradient-descent training when subject to uncertainty on the gradient estimation.
We show that the general overparametrized formulation introduces a set of spurious equilibria which lie outside the set where the loss function is minimized.
arXiv Detail & Related papers (2023-05-17T02:26:34Z) - NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural
Networks [151.03112356092575]
We show the principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
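The Nadaraya-Watson estimate mentioned above admits a compact sketch: the conditional label distribution is estimated as a kernel-weighted vote over training labels, p(y = c | x) ≈ Σᵢ K(x, xᵢ)·1[yᵢ = c] / Σᵢ K(x, xᵢ). The Gaussian kernel, bandwidth, and toy data below are our own illustrative choices, not the NUQ implementation.

```python
import numpy as np

# Minimal sketch (our reading, not the NUQ implementation) of a
# Nadaraya-Watson estimate of the conditional label distribution,
# using a Gaussian kernel of bandwidth h.

def nw_label_distribution(x, X_train, y_train, n_classes, h=1.0):
    # Gaussian kernel weights between the query x and each training point
    d2 = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-d2 / (2 * h ** 2))
    # Kernel-weighted vote per class, normalized to a distribution
    probs = np.array([w[y_train == c].sum() for c in range(n_classes)])
    return probs / probs.sum()

# Two well-separated Gaussian clusters as toy training data
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

p = nw_label_distribution(np.array([2.0, 2.0]), X, y, n_classes=2, h=0.5)
print(p)
```

For a query deep inside one cluster the estimate concentrates on that class; for a query far from all training data the kernel weights shrink uniformly, which is the kind of signal such nonparametric estimates exploit for uncertainty.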
arXiv Detail & Related papers (2022-02-07T12:30:45Z) - Convolutional generative adversarial imputation networks for
spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z) - The Interplay Between Implicit Bias and Benign Overfitting in Two-Layer
Linear Networks [51.1848572349154]
Neural network models that perfectly fit noisy data can generalize well to unseen test data.
We consider interpolating two-layer linear neural networks trained with gradient flow on the squared loss and derive bounds on the excess risk.
arXiv Detail & Related papers (2021-08-25T22:01:01Z) - Mitigating Performance Saturation in Neural Marked Point Processes:
Architectures and Loss Functions [50.674773358075015]
We propose a simple graph-based network structure called GCHP, which utilizes only graph convolutional layers.
We show that GCHP can significantly reduce training time, and that the likelihood ratio loss with interarrival-time probability assumptions can greatly improve model performance.
arXiv Detail & Related papers (2021-07-07T16:59:14Z) - Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
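The calibration idea above, raising the entropy of overconfident predictions toward the prior over labels, can be sketched as a simple convex blend. How the overconfident region is detected is the paper's contribution; here it is stood in for by a hypothetical boolean mask, and the blend weight is an illustrative assumption.

```python
import numpy as np

# Hedged sketch of the calibration idea: where a model is flagged as
# overconfident (here via a hypothetical mask, not the paper's detector),
# blend its predicted distribution toward the prior label distribution,
# which raises its entropy.

def temper_toward_prior(probs, prior, overconfident_mask, alpha=0.7):
    """Blend flagged rows toward the prior: p' = (1 - alpha) * p + alpha * prior."""
    out = probs.copy()
    out[overconfident_mask] = (1 - alpha) * probs[overconfident_mask] + alpha * prior
    return out

def entropy(p):
    return -(p * np.log(p)).sum(axis=-1)

probs = np.array([[0.99, 0.01],   # overconfident prediction
                  [0.60, 0.40]])  # reasonably calibrated prediction
prior = np.array([0.5, 0.5])      # prior distribution of the labels
mask = np.array([True, False])

adjusted = temper_toward_prior(probs, prior, mask)
print(adjusted)
```

The blend keeps each row a valid distribution (a convex combination of two distributions), increases entropy on flagged rows, and leaves the rest untouched.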
arXiv Detail & Related papers (2021-02-22T07:02:37Z) - Uncertainty Quantification of Locally Nonlinear Dynamical Systems using
Neural Networks [0.0]
In structural engineering, a linear structure often contains spatially local nonlinearities that carry uncertainty.
Applying a standard nonlinear solver together with sampling-based methods for uncertainty quantification incurs significant computational cost.
In this paper, a neural network, a tool recently popularized for universal function approximation in the scientific machine learning community, is used to estimate the pseudoforce.
arXiv Detail & Related papers (2020-08-11T09:30:47Z) - An algorithm for reconstruction of triangle-free linear dynamic networks
with verification of correctness [2.28438857884398]
We present a method that either exactly recovers the topology of a triangle-free network, certifying its correctness, or outputs a graph that is sparser than the topology of the actual network.
We prove that, even in the limit of infinite data, any reconstruction method is susceptible to inferring edges that do not exist in the true network.
arXiv Detail & Related papers (2020-03-05T19:10:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.