Methods of Adaptive Signal Processing on Graphs Using Vertex-Time Autoregressive Models
- URL: http://arxiv.org/abs/2003.05729v1
- Date: Tue, 10 Mar 2020 12:42:27 GMT
- Title: Methods of Adaptive Signal Processing on Graphs Using Vertex-Time Autoregressive Models
- Authors: Thiernithi Variddhisai, Danilo Mandic
- Abstract summary: The concept of a random process has been recently extended to graph signals.
The online version of this problem is proposed via the adaptive filtering framework.
Experiments are conducted to shed light on the potential, the limits, and possible research directions of this work.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The concept of a random process has been recently extended to graph signals,
whereby random graph processes are a class of multivariate stochastic processes
whose coefficients are matrices with a \textit{graph-topological} structure.
The system identification problem of a random graph process therefore revolves
around determining its underlying topology, or mathematically, the graph shift
operators (GSOs), i.e., an adjacency matrix or a Laplacian matrix. In the same
work that introduced random graph processes, a \textit{batch} optimization
method to solve for the GSO was also proposed for the random graph process
based on a \textit{causal} vertex-time autoregressive model. Following this, the
online version of this optimization problem was proposed via the framework of
adaptive filtering. The modified stochastic gradient projection method was
employed on the regularized least-squares objective to create the filter. The
recursion is divided into three regularized sub-problems to address issues of
multi-convexity, sparsity, commutativity, and bias. A discussion on convergence
analysis is also included. Finally, experiments are conducted to illustrate the
performance of the proposed algorithm, from the traditional MSE measure to the
rate of successful support recovery irrespective of the exact recovered values,
all of which shed light on the potential, the limits, and possible research
directions of this work.
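To make the adaptive recursion concrete, the following is a minimal sketch of an online, LMS-style stochastic gradient update with a soft-thresholding (sparsity-promoting) step for the matrix coefficients of a vertex-time autoregressive model. This is a single-step simplification, not the paper's exact three-sub-problem recursion, and all sizes, step sizes, and penalty weights are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Assumed toy dimensions: N vertices, AR order P, T time steps ---
N, P, T = 8, 2, 5000
mu, lam = 0.01, 1e-4          # step size and l1 weight (illustrative values)

# Ground-truth sparse coefficient matrices A_p, scaled for stability
A_true = [rng.normal(size=(N, N)) * (rng.random((N, N)) < 0.2) for _ in range(P)]
A_true = [0.3 * A / max(1.0, np.abs(np.linalg.eigvals(A)).max()) for A in A_true]

# Simulate the vertex-time AR process x_t = sum_p A_p x_{t-1-p} + w_t
X = np.zeros((T, N))
for t in range(P, T):
    X[t] = sum(A_true[p] @ X[t - 1 - p] for p in range(P)) + 0.1 * rng.normal(size=N)

def soft_threshold(M, tau):
    """Proximal step for the l1 penalty: promotes a sparse, graph-like support."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

# --- Online recursion: stochastic gradient step + regularizing shrinkage ---
A_hat = [np.zeros((N, N)) for _ in range(P)]
for t in range(P, T):
    past = [X[t - 1 - p] for p in range(P)]
    err = X[t] - sum(A_hat[p] @ past[p] for p in range(P))   # prediction error
    for p in range(P):
        A_hat[p] += mu * np.outer(err, past[p])              # gradient step on the LS objective
        A_hat[p] = soft_threshold(A_hat[p], mu * lam)        # sparsity-promoting shrinkage

mse = np.mean([(A_hat[p] - A_true[p]) ** 2 for p in range(P)])
print(f"coefficient MSE after {T} steps: {mse:.4e}")
```

The shrinkage step here stands in for the regularized projection; the actual algorithm additionally handles commutativity with the GSO and bias, which this sketch omits.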
Related papers
- Convergence Guarantees for the DeepWalk Embedding on Block Models [9.898607871253775]
We show how to use the DeepWalk algorithm on graphs obtained from the Stochastic Block Model (SBM).
Despite its simplicity, the SBM has proved to be a classic model for analyzing algorithms on large graphs.
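As a toy illustration of this setting (assuming networkx's stochastic_block_model generator and plain uniform random walks; the word2vec embedding stage of DeepWalk is omitted), one can sample an SBM graph and build the random-walk corpus DeepWalk trains on:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)

# Two-community SBM; block sizes and edge probabilities are assumed values
sizes = [50, 50]
probs = [[0.15, 0.01], [0.01, 0.15]]
G = nx.stochastic_block_model(sizes, probs, seed=1)

def random_walks(G, num_walks=10, walk_len=40):
    """Uniform random walks per node -- the 'sentences' DeepWalk feeds to word2vec."""
    walks, nodes = [], list(G.nodes())
    for _ in range(num_walks):
        rng.shuffle(nodes)
        for v in nodes:
            walk, cur = [v], v
            for _ in range(walk_len - 1):
                nbrs = list(G.neighbors(cur))
                if not nbrs:
                    break
                cur = nbrs[rng.integers(len(nbrs))]
                walk.append(cur)
            walks.append(walk)
    return walks

walks = random_walks(G)
print(len(walks), "walks; first walk starts at node", walks[0][0])
```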
arXiv Detail & Related papers (2024-10-26T18:35:11Z)
- Stochastic Gradient Descent for Gaussian Processes Done Right [86.83678041846971]
We show that when done right -- by which we mean using specific insights from the optimisation and kernel communities -- gradient descent is highly effective.
We introduce a stochastic dual descent algorithm, explain its design in an intuitive manner, and illustrate the design choices.
Our method places Gaussian process regression on par with state-of-the-art graph neural networks for molecular binding affinity prediction.
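As a rough sketch of the dual view (a generic method, not the authors' stochastic dual descent algorithm): the GP posterior mean requires solving (K + sigma^2 I) alpha = y, which randomized coordinate descent can attack one coordinate at a time. The kernel, data, and iteration budget below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D regression data (assumed; not the paper's benchmarks)
n = 200
X = np.sort(rng.uniform(-3, 3, n))
y = np.sin(X) + 0.1 * rng.normal(size=n)

def rbf(a, b, ell=0.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

A = rbf(X, X) + 0.01 * np.eye(n)  # K + sigma^2 I; dual objective 0.5 a^T A a - a^T y

# Randomized exact coordinate descent on the dual variables alpha
alpha = np.zeros(n)
for _ in range(20000):
    i = rng.integers(n)
    alpha[i] -= (A[i] @ alpha - y[i]) / A[i, i]  # exact minimization in coordinate i

# Posterior-mean prediction at a test input
x_star = np.array([0.7])
print("prediction at 0.7:", (rbf(x_star, X) @ alpha)[0], "vs sin(0.7) =", np.sin(0.7))
```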
arXiv Detail & Related papers (2023-10-31T16:15:13Z)
- Taking the human out of decomposition-based optimization via artificial intelligence: Part I. Learning when to decompose [0.0]
We propose a graph classification approach for automatically determining whether to use a monolithic or a decomposition-based solution method.
It is shown how the learned classifier can be incorporated into existing structural mixed integer optimization solvers.
arXiv Detail & Related papers (2023-10-10T23:31:06Z)
- Graph Signal Sampling for Inductive One-Bit Matrix Completion: a Closed-form Solution [112.3443939502313]
We propose a unified graph signal sampling framework which enjoys the benefits of graph signal analysis and processing.
The key idea is to transform each user's ratings on the items to a function (signal) on the vertices of an item-item graph.
For the online setting, we develop a Bayesian extension, i.e., BGS-IMC which considers continuous random Gaussian noise in the graph Fourier domain.
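To unpack the "graph Fourier domain" language with a generic graph-signal-processing sketch (not the BGS-IMC method itself; the graph and the band-limited signal are assumed toys): the graph Fourier basis is the Laplacian eigenbasis, and Gaussian denoising becomes a spectral low-pass filter.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(3)

G = nx.karate_club_graph()                       # stand-in for an item-item graph
L = nx.laplacian_matrix(G).toarray().astype(float)
evals, U = np.linalg.eigh(L)                     # graph Fourier basis

# Smooth (band-limited) signal on the vertices, plus Gaussian noise
signal = U[:, :5] @ rng.normal(size=5)
noisy = signal + 0.3 * rng.normal(size=len(signal))

s_hat = U.T @ noisy                              # forward graph Fourier transform
h = 1.0 / (1.0 + 2.0 * evals)                    # Tikhonov-style low-pass response
denoised = U @ (h * s_hat)                       # filter, then inverse transform

print("error before:", np.linalg.norm(noisy - signal))
print("error after: ", np.linalg.norm(denoised - signal))
```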
arXiv Detail & Related papers (2023-02-08T08:17:43Z)
- Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
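For intuition only, here is a plain Euclidean projected-gradient stand-in for the Riemannian machinery, on the Gaussian special case of the elliptical family: subgradient ascent on the l1-penalized log-likelihood of the precision matrix, with an eigenvalue clip to keep iterates positive definite. Dimensions, penalty, and step size are assumed.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy Gaussian data with a sparse tridiagonal precision matrix (assumed)
p, n, lam, eta = 10, 500, 0.05, 0.05
Theta_true = np.eye(p) + 0.3 * (np.diag(np.ones(p - 1), 1) + np.diag(np.ones(p - 1), -1))
Xs = rng.multivariate_normal(np.zeros(p), np.linalg.inv(Theta_true), size=n)
S = Xs.T @ Xs / n                                # sample covariance

def project_pd(M, eps=1e-3):
    """Clip eigenvalues so the iterate stays positive definite."""
    w, V = np.linalg.eigh((M + M.T) / 2)
    return V @ np.diag(np.maximum(w, eps)) @ V.T

# Projected subgradient ascent on logdet(Theta) - tr(S Theta) - lam*||offdiag||_1
Theta = np.eye(p)
for _ in range(200):
    off = Theta - np.diag(np.diag(Theta))
    grad = np.linalg.inv(Theta) - S - lam * np.sign(off)
    Theta = project_pd(Theta + eta * grad)

print("off-diagonal entries above 0.1:",
      int((np.abs(Theta - np.diag(np.diag(Theta))) > 0.1).sum()))
```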
arXiv Detail & Related papers (2022-10-21T13:19:45Z)
- Statistical Inference of Constrained Stochastic Optimization via Sketched Sequential Quadratic Programming [53.63469275932989]
We consider online statistical inference of constrained nonlinear optimization problems.
We apply a Stochastic Sequential Quadratic Programming (StoSQP) method to solve these problems.
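For intuition, the deterministic core that StoSQP-style methods randomize is the Newton-KKT step of sequential quadratic programming. The toy objective, equality constraint, and gradient-noise level below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy problem (assumed): min 0.5*||x||^2  subject to  x0 + x1 = 1
def grad_f(x): return x + 0.01 * rng.normal(size=2)   # noisy gradient estimate
def hess_f(x): return np.eye(2)
def c(x): return np.array([x[0] + x[1] - 1.0])
def jac_c(x): return np.array([[1.0, 1.0]])

x = np.array([2.0, -1.0])
for _ in range(20):
    g, H, cv, A = grad_f(x), hess_f(x), c(x), jac_c(x)
    # KKT system of the local quadratic subproblem: [H A^T; A 0][dx; nu] = [-g; -c]
    KKT = np.block([[H, A.T], [A, np.zeros((1, 1))]])
    dx = np.linalg.solve(KKT, np.concatenate([-g, -cv]))[:2]
    x = x + dx

print("iterate:", np.round(x, 3), "(exact solution is [0.5, 0.5])")
```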
arXiv Detail & Related papers (2022-05-27T00:34:03Z)
- An application of the splitting-up method for the computation of a neural network representation for the solution for the filtering equations [68.8204255655161]
Filtering equations play a central role in many real-life applications, including numerical weather prediction, finance and engineering.
One of the classical approaches to approximate the solution of the filtering equations is to use a PDE inspired method, called the splitting-up method.
We combine this method with a neural network representation to produce an approximation of the unnormalised conditional distribution of the signal process.
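A minimal grid-based version of the splitting-up idea (the paper replaces the gridded density with a neural network representation; the linear-Gaussian dynamics below are assumed) alternates a prediction step through the signal dynamics with a Bayes correction step against each observation:

```python
import numpy as np

rng = np.random.default_rng(6)

# Assumed model: X_{k+1} = 0.9 X_k + N(0, 0.1),  Y_k = X_k + N(0, 0.2)
grid = np.linspace(-4, 4, 401)
dx = grid[1] - grid[0]
density = np.exp(-0.5 * grid ** 2)
density /= density.sum() * dx                     # prior density on the grid

# Transition kernel p(x' | x), column-normalized on the grid
K = np.exp(-0.5 * (grid[:, None] - 0.9 * grid[None, :]) ** 2 / 0.1)
K /= K.sum(axis=0, keepdims=True) * dx

def predict(density):
    """Splitting step 1: push the density through the signal dynamics."""
    return (K @ density) * dx

def correct(density, y):
    """Splitting step 2: multiply by the observation likelihood, renormalize."""
    post = density * np.exp(-0.5 * (y - grid) ** 2 / 0.2)
    return post / (post.sum() * dx)

x = 1.5
for _ in range(50):
    x = 0.9 * x + rng.normal(scale=np.sqrt(0.1))
    y = x + rng.normal(scale=np.sqrt(0.2))
    density = correct(predict(density), y)

print("true state %.3f, filter mean %.3f" % (x, (grid * density).sum() * dx))
```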
arXiv Detail & Related papers (2022-01-10T11:01:36Z)
- Sparse Quadratic Optimisation over the Stiefel Manifold with Application to Permutation Synchronisation [71.27989298860481]
We address the non-convex optimisation problem of finding a matrix on the Stiefel manifold that maximises a quadratic objective function.
We propose a simple yet effective sparsity-promoting algorithm for finding the dominant eigenspace matrix.
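One generic way to make this concrete (a sparsity-promoting orthogonal iteration under assumed sizes, not necessarily the paper's exact algorithm): alternate a power step toward the dominant eigenspace, a soft-thresholding shrinkage, and a QR retraction back onto the Stiefel manifold.

```python
import numpy as np

rng = np.random.default_rng(7)

n, k, tau = 30, 3, 0.05                       # assumed dimensions and shrinkage level
B = rng.normal(size=(n, n))
M = (B + B.T) / 2                             # symmetric matrix defining the quadratic objective

V = np.linalg.qr(rng.normal(size=(n, k)))[0]  # start on the Stiefel manifold
for _ in range(200):
    W = M @ V                                           # power step toward dominant eigenspace
    W = np.sign(W) * np.maximum(np.abs(W) - tau, 0.0)   # sparsity-promoting shrinkage
    V = np.linalg.qr(W)[0]                              # retract back onto the Stiefel manifold

print("objective trace(V^T M V):", np.trace(V.T @ M @ V))
```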
arXiv Detail & Related papers (2021-09-30T19:17:35Z)
- KaFiStO: A Kalman Filtering Framework for Stochastic Optimization [27.64040983559736]
We show that when training neural networks the loss function changes over (iteration) time due to the randomized selection of a subset of the samples.
This randomization turns the optimization problem into a stochastic one.
We propose to consider the loss as a noisy observation with respect to some reference.
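One simple way to make the "noisy observation" idea concrete (an illustrative scalar stand-in, not the KaFiStO algorithm itself): run a Kalman filter over the stochastic gradient and descend along the filtered estimate. The problem, noise levels, and filter variances are all assumed.

```python
import numpy as np

rng = np.random.default_rng(8)

# Assumed toy objective f(w) = (w - 3)^2 observed through noisy gradients
w, lr = 0.0, 0.1
g_est, P = 0.0, 1.0          # Kalman state: filtered gradient and its variance
Q, R = 0.5, 4.0              # assumed process / observation noise variances

for _ in range(200):
    g_noisy = 2 * (w - 3) + rng.normal(scale=2.0)  # noisy gradient observation
    P += Q                                         # predict: gradient drifts as w moves
    K = P / (P + R)                                # Kalman gain
    g_est += K * (g_noisy - g_est)                 # update the gradient estimate
    P *= (1 - K)
    w -= lr * g_est                                # descend along the filtered gradient

print("w after filtered descent: %.3f (target 3.0)" % w)
```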
arXiv Detail & Related papers (2021-07-07T16:13:57Z)
- GTAdam: Gradient Tracking with Adaptive Momentum for Distributed Online Optimization [4.103281325880475]
This paper deals with a network of computing agents aiming to solve an online optimization problem in a distributed fashion, by means of local computation and communication, without any central coordinator.
We propose the gradient tracking with adaptive momentum estimation (GTAdam) distributed algorithm, which combines a gradient tracking mechanism with first and second order momentum estimates of the gradient.
In numerical experiments on multi-agent learning, GTAdam outperforms state-of-the-art distributed optimization methods.
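For reference, the sketch below shows the plain gradient tracking mechanism that GTAdam builds on, without its Adam-style momentum estimates; the network, mixing matrix, local problems, and step size are assumed. Each agent mixes its iterate with its neighbors', steps along a tracker y_i, and updates the tracker with its local gradient change.

```python
import numpy as np

rng = np.random.default_rng(9)

# Distributed least squares over 4 agents on a ring (assumed setup)
m, d, alpha = 4, 3, 0.01
A = [rng.normal(size=(10, d)) for _ in range(m)]
b = [Ai @ np.ones(d) + 0.01 * rng.normal(size=10) for Ai in A]
grad = lambda i, x: A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix for the 4-agent ring
W = np.array([[.5, .25, 0., .25], [.25, .5, .25, 0.],
              [0., .25, .5, .25], [.25, 0., .25, .5]])

x = [np.zeros(d) for _ in range(m)]
y = [grad(i, x[i]) for i in range(m)]     # trackers start at the local gradients
for _ in range(2000):
    x_new = [sum(W[i, j] * x[j] for j in range(m)) - alpha * y[i] for i in range(m)]
    y = [sum(W[i, j] * y[j] for j in range(m)) + grad(i, x_new[i]) - grad(i, x[i])
         for i in range(m)]
    x = x_new

print("consensus spread:", max(np.linalg.norm(xi - x[0]) for xi in x))
print("agent 0 solution:", np.round(x[0], 3))  # expect approx. the all-ones vector
```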
arXiv Detail & Related papers (2020-09-03T15:20:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.