A scalable multi-step least squares method for network identification
with unknown disturbance topology
- URL: http://arxiv.org/abs/2106.07548v1
- Date: Mon, 14 Jun 2021 16:12:49 GMT
- Title: A scalable multi-step least squares method for network identification
with unknown disturbance topology
- Authors: Stefanie J.M. Fonken, Karthik R. Ramaswamy, Paul M.J. Van den Hof
- Abstract summary: We present an identification method for dynamic networks in which estimation of the disturbance topology precedes identification of the full dynamic network with known network topology.
We extend the multi-step Sequential Linear Regression and Weighted Null Space Fitting methods to deal with reduced-rank noise.
We provide a consistency proof that includes path-based data informativity conditions for the Box-Jenkins model structure.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Identification methods for dynamic networks typically require prior knowledge
of the network and disturbance topology, and often rely on solving poorly
scalable non-convex optimization problems. While methods for estimating network
topology are available in the literature, less attention has been paid to
estimating the disturbance topology, i.e., the (spatial) noise correlation
structure and the noise rank. In this work we present an identification method
for dynamic networks, in which an estimation of the disturbance topology
precedes the identification of the full dynamic network with known network
topology. To this end we extend the multi-step Sequential Linear Regression and
Weighted Null Space Fitting methods to deal with reduced rank noise, and use
these methods to estimate the disturbance topology and the network dynamics. As
a result, we provide a multi-step least squares algorithm with parallel
computation capabilities that relies only on explicit analytical solutions,
thereby avoiding the usual non-convex optimizations involved. Consequently, we
consistently estimate dynamic networks of Box-Jenkins model structure, while
keeping the computational burden low. We provide a consistency proof that
includes path-based data informativity conditions for allocation of excitation
signals in the experimental design. Numerical simulations performed on a
dynamic network with reduced rank noise clearly illustrate the potential of
this method.
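To make the multi-step least squares idea concrete, the following minimal Python sketch shows the two-step procedure in the single-channel case, under illustrative assumptions (a toy first-order system, hypothetical variable names, no statistical weighting): a high-order model is estimated by ordinary least squares and then reduced to a low-order rational model by a second least squares step, so no non-convex optimization is solved. This is a sketch of the underlying idea, not the authors' full network algorithm, which additionally estimates the reduced-rank disturbance topology and applies a WNSF-style weighting.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_hi = 10_000, 30                     # samples, high model order

# Toy system: y(t) = G(q) u(t) + e(t) with G(q) = b1 q^-1 / (1 + f1 q^-1).
b1_true, f1_true = 0.8, -0.5
u = rng.standard_normal(N)
x = np.zeros(N)                          # noise-free output
for t in range(1, N):
    x[t] = -f1_true * x[t - 1] + b1_true * u[t - 1]
y = x + 0.1 * rng.standard_normal(N)

# Step 1: high-order FIR model y(t) ~ sum_k g_k u(t-k), ordinary least squares.
Phi = np.column_stack([u[n_hi - k : N - k] for k in range(1, n_hi + 1)])
g_hat, *_ = np.linalg.lstsq(Phi, y[n_hi:], rcond=None)

# Step 2: model reduction, again by linear least squares. From
# (1 + f1 q^-1) G(q) = b1 q^-1 the impulse response obeys
# g_k + f1 g_{k-1} = 0 for k >= 2, which is linear in f1.
f1_hat = -np.dot(g_hat[1:], g_hat[:-1]) / np.dot(g_hat[:-1], g_hat[:-1])
b1_hat = g_hat[0]                        # g_1 = b1
print(f"b1: true {b1_true}, estimate {b1_hat:.3f}")
print(f"f1: true {f1_true}, estimate {f1_hat:.3f}")
# WNSF additionally weights step 2 with the step-1 covariance for
# statistical efficiency; the paper runs such steps per node.
```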
Related papers
- Concurrent Training and Layer Pruning of Deep Neural Networks [0.0]
We propose an algorithm capable of identifying and eliminating irrelevant layers of a neural network during the early stages of training.
We employ a structure using residual connections around nonlinear network sections that allows the flow of information through the network once a nonlinear section is pruned.
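As a rough illustration of why the residual structure makes pruning safe, here is a hedged numpy sketch (toy blocks and hypothetical names, not the paper's training-time pruning criterion): removing a block's nonlinear branch leaves the identity path, so the forward pass still carries information.

```python
import numpy as np

rng = np.random.default_rng(1)

class ResidualBlock:
    """y = x + tanh(x W); setting `pruned` removes the nonlinear branch."""
    def __init__(self, dim):
        self.W = rng.standard_normal((dim, dim)) / np.sqrt(dim)
        self.pruned = False

    def __call__(self, x):
        if self.pruned:
            return x                      # identity path keeps information flowing
        return x + np.tanh(x @ self.W)    # skip connection + nonlinear section

blocks = [ResidualBlock(8) for _ in range(4)]
x = rng.standard_normal((2, 8))

def forward(x):
    for b in blocks:
        x = b(x)
    return x

y_full = forward(x)
blocks[1].pruned = True                   # eliminate an "irrelevant" section
y_pruned = forward(x)
print("output norms, full vs pruned:",
      np.linalg.norm(y_full).round(2), np.linalg.norm(y_pruned).round(2))
```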
arXiv Detail & Related papers (2024-06-06T23:19:57Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Simple initialization and parametrization of sinusoidal networks via their kernel bandwidth [92.25666446274188]
Neural networks with sinusoidal activations have been proposed as an alternative to networks with traditional activation functions.
We first propose a simplified version of such sinusoidal neural networks, which allows both for easier practical implementation and simpler theoretical analysis.
We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth.
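The low-pass behavior can already be seen in a random-features view of the kernel; the hedged numpy sketch below (illustrative, not the paper's exact NTK computation) shows that the weight scale omega of a sine layer sets the bandwidth of the induced Gaussian kernel, since for w ~ N(0, omega^2) and b ~ U[0, 2*pi), E[sin(wx+b) sin(wx'+b)] = 0.5 exp(-omega^2 (x-x')^2 / 2).

```python
import numpy as np

rng = np.random.default_rng(2)
M = 200_000                               # number of random sine features

def empirical_kernel(delta, omega):
    # Monte Carlo estimate of E[sin(b) * sin(w*delta + b)], i.e. the
    # kernel between inputs 0 and delta under random sine features.
    w = omega * rng.standard_normal(M)
    b = rng.uniform(0.0, 2.0 * np.pi, M)
    return np.mean(np.sin(b) * np.sin(w * delta + b))

for omega in (1.0, 4.0):
    for delta in (0.0, 0.5, 1.0):
        k_emp = empirical_kernel(delta, omega)
        k_thy = 0.5 * np.exp(-((omega * delta) ** 2) / 2)
        print(f"omega={omega}, delta={delta}: {k_emp:.3f} vs theory {k_thy:.3f}")
# Larger omega -> faster kernel decay in input space -> a wider pass-band:
# the initialization scale acts as the adjustable bandwidth knob.
```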
arXiv Detail & Related papers (2022-11-26T07:41:48Z)
- Backpropagation on Dynamical Networks [0.0]
We propose a network inference method based on the backpropagation through time (BPTT) algorithm commonly used to train recurrent neural networks.
An approximation of local node dynamics is first constructed using a neural network.
Free-run prediction performance with the resulting local models and weights was found to be comparable to that of the true system.
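A stripped-down numpy sketch of this idea follows, under illustrative assumptions (a known tanh nonlinearity and one-step prediction rather than the paper's multi-step BPTT rollouts with a learned local model): the coupling weights are recovered by gradient descent on the prediction error, using the same analytic gradient that autodiff/BPTT would compute for a one-step unroll.

```python
import numpy as np

rng = np.random.default_rng(3)
n, T = 4, 5000
W_true = 0.8 * rng.standard_normal((n, n)) / np.sqrt(n)

# Simulate noise-driven node states x_{t+1} = tanh(W x_t) + noise.
X = np.zeros((T + 1, n))
for t in range(T):
    X[t + 1] = np.tanh(W_true @ X[t]) + 0.3 * rng.standard_normal(n)

# Recover W by minimizing the one-step prediction error.
W = np.zeros((n, n))
lr = 0.5
for _ in range(2000):
    Z = np.tanh(X[:-1] @ W.T)             # predictions, shape (T, n)
    E = Z - X[1:]                         # prediction errors
    G = (E * (1 - Z ** 2)).T @ X[:-1] / T # gradient of mean squared error
    W -= lr * G

print("max abs error in recovered weights:",
      np.abs(W - W_true).max().round(3))
```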
arXiv Detail & Related papers (2022-07-07T05:22:44Z)
- Bayesian Inference of Stochastic Dynamical Networks [0.0]
This paper presents a novel method for learning network topology and internal dynamics.
Our method achieves state-of-the-art performance compared with group sparse Bayesian learning (GSBL), BINGO, kernel-based methods, dynGENIE3, GENIE3 and ARNI.
arXiv Detail & Related papers (2022-06-02T03:22:34Z)
- An Unbiased Symmetric Matrix Estimator for Topology Inference under Partial Observability [16.60607849384252]
This letter considers the problem of network topology inference under the framework of partial observability.
We propose a novel unbiased estimator for the symmetric network topology under Gaussian noise and the Laplacian combination rule.
An effective algorithm, called the network inference Gauss algorithm, is developed to infer the network structure.
arXiv Detail & Related papers (2022-03-29T12:49:25Z)
- Progressive Spatio-Temporal Graph Convolutional Network for Skeleton-Based Human Action Recognition [97.14064057840089]
We propose a method to automatically find a compact and problem-specific graph convolutional network architecture in a progressive manner.
Experimental results on two datasets for skeleton-based human action recognition indicate that the proposed method has competitive or even better classification performance.
arXiv Detail & Related papers (2020-11-11T09:57:49Z)
- Solving Sparse Linear Inverse Problems in Communication Systems: A Deep Learning Approach With Adaptive Depth [51.40441097625201]
We propose an end-to-end trainable deep learning architecture for sparse signal recovery problems.
The proposed method learns how many layers to execute to emit an output, and the network depth is dynamically adjusted for each task in the inference phase.
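The adaptive-depth idea can be mimicked even with un-learned layers; the hedged numpy sketch below uses fixed ISTA iterations as the "layers" and a simple halting rule, so easy instances exit early, whereas the paper learns both the layers and the stopping policy end to end.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, k = 50, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
idx = rng.choice(n, k, replace=False)
x_true[idx] = rng.choice([-1.0, 1.0], k) * rng.uniform(0.5, 1.5, k)
y = A @ x_true + 0.01 * rng.standard_normal(m)

lam = 0.05
L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient

def soft(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

x = np.zeros(n)
for depth in range(1, 501):               # at most 500 "layers"
    x_new = soft(x + A.T @ (y - A @ x) / L, lam / L)
    if np.linalg.norm(x_new - x) < 1e-6:  # halting rule = adaptive depth
        x = x_new
        break
    x = x_new

print("stopped after", depth, "layers")
recovered = set(np.nonzero(np.abs(x) > 0.01)[0])
print("support recovered:", recovered == set(np.nonzero(x_true)[0]))
```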
arXiv Detail & Related papers (2020-10-29T06:32:53Z)
- Estimating Linear Dynamical Networks of Cyclostationary Processes [0.0]
We present a novel algorithm for guaranteed topology learning in networks excited by cyclostationary processes.
Unlike prior work, the framework applies to linear dynamic systems with complex-valued dependencies.
In the second part of the article, we analyze conditions for consistent topology learning for bidirected radial networks when a subset of the network is unobserved.
arXiv Detail & Related papers (2020-09-26T18:54:50Z)
- Activation Relaxation: A Local Dynamical Approximation to Backpropagation in the Brain [62.997667081978825]
Activation Relaxation (AR) is motivated by constructing the backpropagation gradient as the equilibrium point of a dynamical system.
Our algorithm converges rapidly and robustly to the correct backpropagation gradients, requires only a single type of computational unit, and can operate on arbitrary computation graphs.
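A small numpy sketch of this idea follows (an illustrative implementation, not the authors' code): per-layer error units follow leaky dynamics whose equilibrium coincides with the backprop deltas, so the gradient emerges from running a local dynamical system to its fixed point.

```python
import numpy as np

rng = np.random.default_rng(5)
sizes = [6, 5, 4, 3]                      # input width and three layer widths
Ws = [rng.standard_normal((o, i)) / np.sqrt(i)
      for i, o in zip(sizes[:-1], sizes[1:])]
x, y = rng.standard_normal(sizes[0]), rng.standard_normal(sizes[-1])

# Forward pass: tanh hidden layers, linear output.
hs, zs = [x], []
for W in Ws[:-1]:
    zs.append(W @ hs[-1])
    hs.append(np.tanh(zs[-1]))
out = Ws[-1] @ hs[-1]

# Reference: ordinary backprop deltas for the loss 0.5*||out - y||^2.
deltas = [None, None, out - y]
deltas[1] = (Ws[2].T @ deltas[2]) * (1 - np.tanh(zs[1]) ** 2)
deltas[0] = (Ws[1].T @ deltas[1]) * (1 - np.tanh(zs[0]) ** 2)

# Activation relaxation: leaky dynamics e_l' = -e_l + f'(z_l) * (W^T e_{l+1}),
# integrated with Euler steps; the output-layer error is clamped.
e = [np.zeros(sizes[1]), np.zeros(sizes[2]), out - y]
eta = 0.2
for _ in range(300):
    e[1] += eta * (-e[1] + (Ws[2].T @ e[2]) * (1 - np.tanh(zs[1]) ** 2))
    e[0] += eta * (-e[0] + (Ws[1].T @ e[1]) * (1 - np.tanh(zs[0]) ** 2))

# The relaxed error units converge to the backprop gradients.
print("matches backprop:",
      all(np.allclose(a, b, atol=1e-6) for a, b in zip(e, deltas)))
```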
arXiv Detail & Related papers (2020-09-11T11:56:34Z)
- MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient-based learning combined with nonconvexity renders learning susceptible to initialization issues.
We propose fusing neighboring layers of deeper networks that are trained with random initializations.
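For intuition only, the numpy sketch below shows the purely linear special case, where two stacked dense layers fuse exactly into one; the paper's contribution is the harder, MSE-optimal treatment of fusion around nonlinearities, which this sketch does not implement.

```python
import numpy as np

rng = np.random.default_rng(6)
W1, b1 = rng.standard_normal((16, 8)), rng.standard_normal(16)
W2, b2 = rng.standard_normal((4, 16)), rng.standard_normal(4)

# W2 (W1 x + b1) + b2 = (W2 W1) x + (W2 b1 + b2): one layer replaces two.
W_fused, b_fused = W2 @ W1, W2 @ b1 + b2

x = rng.standard_normal(8)
assert np.allclose(W2 @ (W1 @ x + b1) + b2, W_fused @ x + b_fused)
print("fused layer reproduces the two-layer map exactly")
```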
arXiv Detail & Related papers (2020-01-28T18:25:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.