Online Time-Varying Topology Identification via Prediction-Correction Algorithms
- URL: http://arxiv.org/abs/2010.11634v2
- Date: Wed, 10 Feb 2021 15:45:53 GMT
- Title: Online Time-Varying Topology Identification via Prediction-Correction Algorithms
- Authors: Alberto Natali, Mario Coutino, Elvin Isufi and Geert Leus
- Abstract summary: We propose a general-purpose online algorithm operating in non-stationary environments.
Because of its iteration-constrained nature, the proposed approach exhibits an intrinsic temporal regularization of the graph topology without explicitly enforcing it.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Signal processing and machine learning algorithms for data supported over graphs require knowledge of the graph topology. Unless this information is given by the physics of the problem (e.g., water supply networks, power grids), the topology has to be learned from data. Topology identification is a challenging task, as the problem is often ill-posed, and it becomes even harder when the graph structure is time-varying. In this paper, we address the problem of dynamic topology identification by building on recent results from time-varying optimization, devising a general-purpose online algorithm that operates in non-stationary environments. Because of its iteration-constrained nature, the proposed approach exhibits an intrinsic temporal regularization of the graph topology without explicitly enforcing it. As a case study, we specialize our method to the Gaussian graphical model (GGM) problem and corroborate its performance.
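To make the prediction-correction idea concrete, here is a minimal numpy sketch of an online, iteration-constrained scheme for the GGM case study. This is an illustrative reconstruction, not the authors' exact algorithm: the function names, step sizes, the exponentially weighted covariance, and the linear-extrapolation prediction step are all our assumptions. The correction step runs only a few proximal-gradient iterations per time instant on the graphical-lasso objective, which is what induces the implicit temporal regularization mentioned in the abstract.

```python
import numpy as np

def soft_threshold(X, tau):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def correction(Theta, S, lam, step, n_iters=3):
    """A small, fixed number of proximal-gradient iterations on the
    graphical-lasso objective -log det(Theta) + tr(S @ Theta) + lam*||Theta||_1,
    warm-started at the prediction (the 'correction' step)."""
    for _ in range(n_iters):
        grad = S - np.linalg.inv(Theta)          # gradient of the smooth part
        Theta = soft_threshold(Theta - step * grad, step * lam)
        Theta = 0.5 * (Theta + Theta.T)          # enforce symmetry
    return Theta

def online_ggm(signals, lam=0.1, step=0.05, ema=0.9):
    """Track a time-varying precision matrix from a stream of graph signals.
    Each sample updates an exponentially weighted covariance; a prediction
    step extrapolates the last two estimates, and a correction step runs a
    few iterations on the objective defined by the new data."""
    signals = iter(signals)
    x = next(signals)
    d = x.shape[0]
    S = np.outer(x, x) + 1e-3 * np.eye(d)        # regularized running covariance
    Theta_prev = Theta = np.eye(d)               # initial precision estimate
    for x in signals:
        S = ema * S + (1.0 - ema) * np.outer(x, x)
        pred = Theta + (Theta - Theta_prev)      # prediction: linear extrapolation
        if np.linalg.eigvalsh(pred).min() < 1e-6:
            pred = Theta                         # fall back if we left the PD cone
        Theta_prev = Theta
        Theta = correction(pred, S, lam, step)
        yield Theta
```

Because each time step is allowed only a few iterations warm-started from the previous estimate, consecutive estimates cannot drift far apart, so the trajectory of learned graphs is smooth in time without an explicit smoothness penalty.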
Related papers
- Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z)
- Learning Graph Search Heuristics
We present PHIL (Path Heuristic with Imitation Learning), a novel neural architecture and training algorithm for discovering graph search and navigation heuristics from data.
Our function learns graph embeddings useful for inferring node distances, runs in constant time independent of graph size, and can easily be incorporated into an algorithm such as A* at test time.
Experiments show that PHIL reduces the number of explored nodes compared to state-of-the-art methods on benchmark datasets by 58.5% on average.
arXiv Detail & Related papers (2022-12-07T22:28:00Z)
- De Bruijn goes Neural: Causality-Aware Graph Neural Networks for Time Series Data on Dynamic Graphs
We introduce De Bruijn Graph Neural Networks (DBGNNs) for time-resolved data on dynamic graphs.
Our approach accounts for temporal-topological patterns that unfold in the causal topology of dynamic graphs.
DBGNNs can leverage temporal patterns in dynamic graphs, which substantially improves the performance in a supervised node classification task.
arXiv Detail & Related papers (2022-09-17T10:54:00Z)
- Learning Graph Structure from Convolutional Mixtures
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
- Optimal Propagation for Graph Neural Networks
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Towards Unsupervised Deep Graph Structure Learning
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
arXiv Detail & Related papers (2022-01-17T11:57:29Z)
- Online Graph Learning under Smoothness Priors
We develop novel algorithms for online network topology inference given streaming observations assumed to be smooth on the sought graph.
Our goal is to track the (possibly) time-varying network topology while keeping the memory and computational costs in check by processing graph signals sequentially in time.
Computer simulations using both synthetic and real financial market data illustrate the effectiveness of the proposed algorithm.
arXiv Detail & Related papers (2021-03-05T15:42:53Z)
- Efficient Variational Bayesian Structure Learning of Dynamic Graphical Models
Estimating time-varying graphical models is of paramount importance in various social, financial, biological, and engineering systems.
Existing methods require extensive tuning of parameters that control the graph sparsity and temporal smoothness.
We propose a low-complexity tuning-free Bayesian approach, named BADGE.
arXiv Detail & Related papers (2020-09-16T14:19:23Z)
- Time-varying Graph Representation Learning via Higher-Order Skip-Gram with Negative Sampling
We build upon the fact that the skip-gram embedding approach implicitly performs a matrix factorization.
We show that higher-order skip-gram with negative sampling is able to disentangle the role of nodes and time.
We empirically evaluate our approach using time-resolved face-to-face proximity data, showing that the learned time-varying graph representations outperform state-of-the-art methods.
arXiv Detail & Related papers (2020-06-25T12:04:48Z)
- Graph Signal Processing -- Part III: Machine Learning on Graphs, from Graph Topology to Applications
Part III of this monograph starts by addressing ways to learn graph topology.
A particular emphasis is on graph topology definition based on the correlation and precision matrices of the observed data.
For learning sparse graphs, the least absolute shrinkage and selection operator (LASSO) is employed.
arXiv Detail & Related papers (2020-01-02T13:14:27Z)
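For intuition on the correlation/precision-matrix route to graph topology that the monograph entry above describes, here is a minimal numpy sketch. It is illustrative only: the function name and threshold are our inventions, and hard thresholding stands in for the monograph's LASSO-based sparsification, which would instead add an l1 penalty to the estimator.

```python
import numpy as np

def precision_graph(X, thresh=0.1):
    """Infer an unweighted graph from data X (n_samples x n_nodes):
    estimate the sample precision matrix and keep entries whose magnitude
    exceeds a threshold. Hard thresholding is a crude stand-in for an
    l1 (LASSO) penalty, which shrinks small partial correlations to zero."""
    S = np.cov(X, rowvar=False)                        # sample covariance
    P = np.linalg.inv(S + 1e-6 * np.eye(S.shape[0]))   # regularized precision
    P = 0.5 * (P + P.T)                                # symmetrize numerically
    A = (np.abs(P) > thresh).astype(int)               # threshold partial correlations
    np.fill_diagonal(A, 0)                             # no self-loops
    return A
```

The precision (inverse covariance) matrix is the natural object here because, for Gaussian data, a zero entry means the two nodes are conditionally independent given the rest, which is exactly the edge semantics of a Gaussian graphical model.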
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.