Accelerated Graph Learning from Smooth Signals
- URL: http://arxiv.org/abs/2110.09677v1
- Date: Tue, 19 Oct 2021 01:02:57 GMT
- Title: Accelerated Graph Learning from Smooth Signals
- Authors: Seyed Saman Saboksayr and Gonzalo Mateos
- Abstract summary: A fast dual-based proximal gradient algorithm is developed to tackle a strongly convex, smoothness-regularized network inverse problem.
Unlike existing solvers, the novel iterations come with global convergence rate guarantees and do not require additional step-size tuning.
- Score: 10.426964757656743
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We consider network topology identification subject to a signal smoothness
prior on the nodal observations. A fast dual-based proximal gradient algorithm
is developed to efficiently tackle a strongly convex, smoothness-regularized
network inverse problem known to yield high-quality graph solutions. Unlike
existing solvers, the novel iterations come with global convergence rate
guarantees and do not require additional step-size tuning. Reproducible
simulated tests demonstrate the effectiveness of the proposed method in
accurately recovering random and real-world graphs, markedly faster than
state-of-the-art alternatives and without incurring an extra computational
burden.
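
For intuition, the smoothness-regularized model referenced in the abstract is commonly written (in Kalofolias-style notation; the paper's exact formulation may differ in details) as a search over nonnegative, symmetric, hollow adjacency matrices W, trading signal smoothness against a log-degree barrier and a Frobenius regularizer. The sketch below solves that model with plain projected gradient descent; it is a minimal illustration under stated assumptions, not the paper's fast dual-based proximal gradient solver, and all hyperparameters are illustrative.

```python
import numpy as np

def pairwise_sq_dists(X):
    """Z[i, j] = ||x_i - x_j||^2 for nodal observations stacked as rows of X."""
    sq = np.sum(X ** 2, axis=1)
    return sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)

def learn_graph(X, alpha=1.0, beta=1.0, step=1e-3, iters=5000):
    """Projected gradient sketch (NOT the paper's accelerated dual solver) for
       min_{W >= 0, W = W^T, diag(W) = 0}
           ||W o Z||_1 - alpha * 1^T log(W 1) + (beta / 2) * ||W||_F^2."""
    n = X.shape[0]
    Z = pairwise_sq_dists(X)
    W = np.ones((n, n)) - np.eye(n)               # feasible fully connected start
    for _ in range(iters):
        d = np.maximum(W.sum(axis=1), 1e-12)      # nodal degrees W 1, kept positive
        G = Z - alpha / d[:, None] + beta * W     # gradient of the smooth objective
        W = np.maximum(W - step * (G + G.T) / 2.0, 0.0)  # symmetrize and project
        np.fill_diagonal(W, 0.0)
    return W

# Toy usage: signals smooth on a 2-block graph yield a near block-diagonal W.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (5, 20)) + 1.0,
               rng.normal(0.0, 0.1, (5, 20)) - 1.0])
W_hat = learn_graph(X)
```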
Related papers
- Online Proximal ADMM for Graph Learning from Streaming Smooth Signals [9.34612743192798]
We develop a novel algorithm for online graph learning using observation streams, assumed to be smooth on the latent graph.
Our modus operandi is to process graph signals sequentially and thus keep memory and computational costs in check.
The proposed approach also exhibits better tracking performance (in terms of suboptimality) when compared to state-of-the-art online graph learning baselines.
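
A minimal sketch of the sequential idea, not the paper's proximal ADMM updates: the pairwise-distance statistics that drive smoothness-based graph learning can be maintained as a running average over the stream, keeping memory at O(n^2) regardless of stream length. The function name and update rule below are illustrative assumptions.

```python
import numpy as np

def update_distance_average(Z, x_t, t):
    """Running average of pairwise squared distances after the t-th streaming
    graph signal x_t (one value per node, t counted from 1); O(n^2) memory."""
    D = (x_t[:, None] - x_t[None, :]) ** 2
    return Z + (D - Z) / t
```

Each incoming signal refines Z, after which a proximal graph-learning step can be warm-started at the previous topology estimate.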
arXiv Detail & Related papers (2024-09-19T17:12:03Z)
- Gradient-Variation Online Learning under Generalized Smoothness [56.38427425920781]
Gradient-variation online learning aims to achieve regret guarantees that scale with the variation of the gradients of the online functions.
Recent efforts in neural network optimization suggest a generalized smoothness condition, allowing smoothness to correlate with gradient norms.
We also provide applications to fast-rate convergence in games and to extended adversarial optimization.
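
For reference, the generalized smoothness condition in question is typically the (L_0, L_1)-smoothness assumption, which lets the local gradient Lipschitz constant grow with the gradient norm (standard smoothness is the special case L_1 = 0):

\[ \|\nabla f(x) - \nabla f(y)\| \;\le\; \bigl(L_0 + L_1 \|\nabla f(x)\|\bigr)\,\|x - y\| . \]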
arXiv Detail & Related papers (2024-08-17T02:22:08Z)
- Automated Design of Linear Bounding Functions for Sigmoidal Nonlinearities in Neural Networks [23.01933325606068]
Existing complete verification techniques offer provable guarantees for all robustness queries but struggle to scale beyond small neural networks.
We propose a novel parameter search method to improve the quality of these linear approximations.
Specifically, we show that using a simple search method, carefully adapted to the given verification problem through state-of-the-art algorithm configuration techniques, improves the global lower bound by 25% on average over the current state of the art.
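
To make the searched-over objects concrete (a sketch; the paper's exact parameterization is not reproduced here): on an input interval [l, u], a sigmoid can be sandwiched between two parallel lines, and bound quality hinges on the chosen slope. A minimal grid-based construction:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def linear_bounds(l, u, num=1001):
    """Parallel linear lower/upper bounds for sigmoid on [l, u], using the
    chord slope and offsets fit on a grid. Sound only up to grid resolution;
    real verifiers use closed-form tangent constructions instead."""
    s = (sigmoid(u) - sigmoid(l)) / (u - l)   # chord slope over the interval
    xs = np.linspace(l, u, num)
    gap = sigmoid(xs) - s * xs                # residual against a slope-s line
    return (s, gap.min()), (s, gap.max())     # y = s*x + b lower/upper bounds
```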
arXiv Detail & Related papers (2024-06-14T16:16:26Z)
- Gegenbauer Graph Neural Networks for Time-varying Signal Reconstruction [4.6210788730570584]
Reconstructing time-varying graph signals is a critical problem in machine learning and signal processing, with broad applications.
We propose a novel approach that incorporates a learning module to enhance the accuracy of the downstream task.
We conduct extensive experiments on real datasets to evaluate the effectiveness of our proposed approach.
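
For reference, Gegenbauer polynomials C_k^{(a)} obey a three-term recurrence, so a Gegenbauer graph filter needs only sparse matrix-vector products. The sketch below is an illustrative polynomial filter under that recurrence, not the paper's full architecture:

```python
import numpy as np

def gegenbauer_filter(S, x, coeffs, a=1.0):
    """Apply sum_k coeffs[k] * C_k^{(a)}(S) x via the three-term recurrence
       k * C_k = 2x(k + a - 1) C_{k-1} - (k + 2a - 2) C_{k-2},
    where S is a rescaled graph shift operator with spectrum in [-1, 1]."""
    C_prev = x                          # C_0^{(a)}(S) x = x
    out = coeffs[0] * C_prev
    if len(coeffs) == 1:
        return out
    C_curr = 2.0 * a * (S @ x)          # C_1^{(a)}(S) x = 2 a S x
    out = out + coeffs[1] * C_curr
    for k in range(2, len(coeffs)):
        C_next = (2.0 * (k + a - 1.0) * (S @ C_curr)
                  - (k + 2.0 * a - 2.0) * C_prev) / k
        out = out + coeffs[k] * C_next
        C_prev, C_curr = C_curr, C_next
    return out
```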
arXiv Detail & Related papers (2024-03-28T19:29:17Z)
- Accelerating Scalable Graph Neural Network Inference with Node-Adaptive Propagation [80.227864832092]
Graph neural networks (GNNs) have exhibited exceptional efficacy in a diverse array of applications.
The sheer size of large-scale graphs presents a significant challenge to real-time inference with GNNs.
We propose an online propagation framework and two novel node-adaptive propagation methods.
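
One plausible reading of node-adaptive propagation, sketched under that assumption rather than taken from the paper: propagate features iteratively but freeze each node once its representation stops changing, so converged nodes drop out of later, more expensive steps.

```python
import numpy as np

def node_adaptive_propagate(A_hat, X, max_steps=10, tol=1e-3):
    """Iteratively propagate features, freezing nodes whose representation
    has stopped changing so they skip further propagation steps."""
    X = X.copy()
    active = np.arange(X.shape[0])                # indices of unfrozen nodes
    for _ in range(max_steps):
        if active.size == 0:
            break
        X_new = A_hat[active] @ X                 # propagate active rows only
        delta = np.linalg.norm(X_new - X[active], axis=1)
        X[active] = X_new
        active = active[delta > tol]              # freeze converged nodes
    return X
```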
arXiv Detail & Related papers (2023-10-17T05:03:00Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been demonstrated to be effective in solving forward and inverse differential equation problems.
However, PINN training can fail when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
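
Concretely, implicit SGD replaces the explicit update theta_{t+1} = theta_t - eta * grad L(theta_t) with theta_{t+1} = theta_t - eta * grad L(theta_{t+1}), which is a proximal step. A minimal sketch that approximates the inner problem with a few explicit gradient steps (all hyperparameters are assumptions):

```python
import numpy as np

def isgd_step(theta, grad_fn, eta=0.1, inner_steps=20, inner_lr=0.05):
    """One implicit (proximal) step: approximately solve
       theta_next = argmin_th  L(th) + ||th - theta||^2 / (2 * eta)
    by running a few explicit gradient steps on the proximal objective."""
    th = theta.copy()
    for _ in range(inner_steps):
        g = grad_fn(th) + (th - theta) / eta      # gradient of proximal objective
        th = th - inner_lr * g
    return th
```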
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
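
A common way a low-rank approximation reduces propagation cost (a sketch; the paper's factorization may differ): replace repeated products with the dense n x n propagation matrix by products with rank-r factors, dropping a k-step propagation from O(k n^2 d) to roughly O(k n r d).

```python
import numpy as np

def low_rank_propagate(P, X, rank=16, steps=4):
    """Propagate X through P applied `steps` times, using a rank-`rank`
    truncated SVD of P so each step costs O(n * rank * d), not O(n^2 * d)."""
    U, s, Vt = np.linalg.svd(P, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]   # keep top-r singular triplets
    for _ in range(steps):
        X = U @ (s[:, None] * (Vt @ X))           # (n x r)(r)(r x n)(n x d)
    return X
```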
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Learning Sparse Graphs via Majorization-Minimization for Smooth Node Signals [8.140698535149042]
We propose an algorithm for learning a sparse weighted graph by estimating its adjacency matrix.
We show that the proposed algorithm converges faster, in terms of the average number of iterations, than several existing methods in the literature.
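
The majorization-minimization template behind such methods, as a generic sketch (the paper's specific surrogate is not reproduced): at each iterate build a surrogate that upper-bounds the objective and touches it at the current point, then minimize the surrogate; the objective is then monotonically non-increasing.

```python
def majorize_minimize(w0, surrogate_argmin, objective, iters=100, tol=1e-8):
    """Generic MM loop: w_{k+1} = argmin_w g(w; w_k), where the surrogate g
    satisfies g(w; w_k) >= f(w) with equality at w = w_k, so the objective f
    is monotonically non-increasing along the iterates."""
    w, f_prev = w0, objective(w0)
    for _ in range(iters):
        w = surrogate_argmin(w)          # minimize the surrogate built at w
        f_curr = objective(w)
        if f_prev - f_curr < tol:        # descent has stalled; stop
            break
        f_prev = f_curr
    return w
```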
arXiv Detail & Related papers (2022-02-06T17:06:13Z)
- Non-Gradient Manifold Neural Network [79.44066256794187]
Deep neural networks (DNNs) generally take thousands of iterations to optimize via gradient descent.
We propose a novel manifold neural network based on non-gradient optimization.
arXiv Detail & Related papers (2021-06-15T06:39:13Z)
- Learning Frequency Domain Approximation for Binary Neural Networks [68.79904499480025]
We propose to estimate the gradient of the sign function in the Fourier frequency domain, using a combination of sine functions, for training BNNs.
The experiments on several benchmark datasets and neural architectures illustrate that the binary network learned using our method achieves state-of-the-art accuracy.
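
The underlying identity, in a hedged sketch: on (-pi, pi), sign(x) = (4/pi) * sum_{k>=0} sin((2k+1)x)/(2k+1), so a truncated series is a smooth surrogate whose derivative can replace the almost-everywhere-zero derivative of sign during backpropagation. The truncation order below is an assumption:

```python
import numpy as np

def fourier_sign_grad(x, terms=8):
    """Surrogate gradient of sign(x) from its truncated Fourier series:
       d/dx (4/pi) * sum_k sin((2k+1) x) / (2k+1) = (4/pi) * sum_k cos((2k+1) x),
    summed over k = 0, ..., terms - 1; x is a 1-D array of pre-activations."""
    ks = 2 * np.arange(terms) + 1                 # odd harmonics 1, 3, 5, ...
    return (4.0 / np.pi) * np.cos(np.outer(x, ks)).sum(axis=1)
```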
arXiv Detail & Related papers (2021-03-01T08:25:26Z)
- A Novel Learnable Gradient Descent Type Algorithm for Non-convex Non-smooth Inverse Problems [3.888272676868008]
We propose a novel learnable gradient descent type algorithm to solve inverse problems, consisting of a general network architecture with learnable components.
Results show that the proposed network outperforms state-of-the-art reconstruction methods on different image reconstruction problems in terms of efficiency and accuracy.
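
A common instantiation of a learnable descent-type scheme, sketched under that reading (the paper's architecture and parameterization are not reproduced): unroll a fixed number of gradient steps on a data-fidelity term and make each step size and a per-step correction map trainable.

```python
import numpy as np

def unrolled_descent(y, A, step_sizes, corrections):
    """Unrolled gradient descent on 0.5 * ||A x - y||^2, with one trainable
    step size and one learned correction map per iteration (learnable-GD
    sketch; `corrections` holds callables, e.g. small learned denoisers)."""
    x = A.T @ y                                   # crude initialization
    for eta, phi in zip(step_sizes, corrections):
        grad = A.T @ (A @ x - y)                  # data-fidelity gradient
        x = phi(x - eta * grad)                   # gradient step + correction
    return x
```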
arXiv Detail & Related papers (2020-03-15T03:44:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.