On the Optimal Recovery of Graph Signals
- URL: http://arxiv.org/abs/2304.00474v2
- Date: Mon, 29 May 2023 22:54:51 GMT
- Title: On the Optimal Recovery of Graph Signals
- Authors: Simon Foucart, Chunyang Liao, Nate Veldt
- Abstract summary: We compute regularization parameters that are optimal or near-optimal for graph signal processing problems.
Our results offer a new interpretation for classical optimization techniques in graph-based learning.
We illustrate the potential of our methods in numerical experiments on several semi-synthetic graph signal processing datasets.
- Score: 10.098114696565865
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning a smooth graph signal from partially observed data is a well-studied
task in graph-based machine learning. We consider this task from the
perspective of optimal recovery, a mathematical framework for learning a
function from observational data that adopts a worst-case perspective tied to
model assumptions on the function to be learned. Earlier work in the optimal
recovery literature has shown that minimizing a regularized objective produces
optimal solutions for a general class of problems, but did not fully identify
the regularization parameter. Our main contribution provides a way to compute
regularization parameters that are optimal or near-optimal (depending on the
setting), specifically for graph signal processing problems. Our results offer
a new interpretation for classical optimization techniques in graph-based
learning and also come with new insights for hyperparameter selection. We
illustrate the potential of our methods in numerical experiments on several
semi-synthetic graph signal processing datasets.
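The task described in the abstract — learning a smooth graph signal from partial observations by minimizing a regularized objective — can be illustrated with a minimal sketch. This is not the paper's method for choosing the regularization parameter; it only shows the classical Laplacian-regularized interpolation that the paper reinterprets, on an illustrative path graph with an arbitrarily chosen parameter `lam`.

```python
import numpy as np

def laplacian(A):
    """Combinatorial graph Laplacian L = D - A of an adjacency matrix A."""
    return np.diag(A.sum(axis=1)) - A

def recover(A, observed_idx, y, lam=1.0):
    """Recover a smooth signal x from observations y at observed_idx by
    minimizing ||x_S - y||^2 + lam * x^T L x, whose first-order condition
    is the linear system (M + lam*L) x = M y_full, with M the diagonal
    mask selecting observed vertices."""
    n = A.shape[0]
    L = laplacian(A)
    M = np.zeros((n, n))
    M[observed_idx, observed_idx] = 1.0
    y_full = np.zeros(n)
    y_full[observed_idx] = y
    return np.linalg.solve(M + lam * L, M @ y_full)

# Path graph on 4 vertices; observe the endpoints and fill in the middle.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x_hat = recover(A, [0, 3], np.array([0.0, 3.0]), lam=0.1)
# x_hat interpolates nearly linearly between the two observed values,
# with a slight shrinkage at the endpoints due to the regularizer.
```

The quality of the recovered signal hinges on `lam`, which is exactly the hyperparameter whose optimal or near-optimal value the paper characterizes.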
Related papers
- Stochastic Gradient Descent for Gaussian Processes Done Right [86.83678041846971]
We show that when done right -- by which we mean using specific insights from the optimisation and kernel communities -- gradient descent is highly effective.
We introduce a stochastic dual descent algorithm, explain its design in an intuitive manner and illustrate the design choices.
Our method places Gaussian process regression on par with state-of-the-art graph neural networks for molecular binding affinity prediction.
arXiv Detail & Related papers (2023-10-31T16:15:13Z) - Joint Graph Learning and Model Fitting in Laplacian Regularized Stratified Models [5.933030735757292]
Laplacian regularized stratified models (LRSM) are models that utilize the explicit or implicit network structure of the sub-problems.
This paper shows the importance and sensitivity of graph weights in LRSM, and provably shows that the sensitivity can be arbitrarily large.
We propose a generic approach to jointly learn the graph while fitting the model parameters by solving a single optimization problem.
arXiv Detail & Related papers (2023-05-04T06:06:29Z) - Graph Signal Sampling for Inductive One-Bit Matrix Completion: a Closed-form Solution [112.3443939502313]
We propose a unified graph signal sampling framework which enjoys the benefits of graph signal analysis and processing.
The key idea is to transform each user's ratings on the items to a function (signal) on the vertices of an item-item graph.
For the online setting, we develop a Bayesian extension, i.e., BGS-IMC which considers continuous random Gaussian noise in the graph Fourier domain.
arXiv Detail & Related papers (2023-02-08T08:17:43Z) - Learning Large-scale Neural Fields via Context Pruned Meta-Learning [60.93679437452872]
We introduce an efficient optimization-based meta-learning technique for large-scale neural field training.
We show how gradient re-scaling at meta-test time allows the learning of extremely high-quality neural fields.
Our framework is model-agnostic, intuitive, straightforward to implement, and shows significant reconstruction improvements for a wide range of signals.
arXiv Detail & Related papers (2023-02-01T17:32:16Z) - Structural Optimization of Factor Graphs for Symbol Detection via
Continuous Clustering and Machine Learning [1.5293427903448018]
We optimize the structure of the underlying factor graphs in an end-to-end manner using machine learning.
We study the combination of this approach with neural belief propagation, yielding near-maximum a posteriori symbol detection performance for specific channels.
arXiv Detail & Related papers (2022-11-21T12:31:04Z) - Sparse Graph Learning from Spatiotemporal Time Series [16.427698929775023]
We propose a graph learning framework that learns the relational dependencies as distributions over graphs.
We show that the proposed solution can be used as a stand-alone graph identification procedure as well as a graph learning component of an end-to-end forecasting architecture.
arXiv Detail & Related papers (2022-05-26T17:02:43Z) - Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z) - Learning to Learn Graph Topologies [27.782971146122218]
We learn a mapping from node data to the graph structure based on the idea of learning to optimise (L2O).
The model is trained in an end-to-end fashion with pairs of node data and graph samples.
Experiments on both synthetic and real-world data demonstrate that our model is more efficient than classic iterative algorithms in learning a graph with specific topological properties.
arXiv Detail & Related papers (2021-10-19T08:42:38Z) - Learning Graphs from Smooth Signals under Moment Uncertainty [23.868075779606425]
We consider the problem of inferring the graph structure from a given set of graph signals.
Traditional graph learning models do not take this distributional uncertainty into account.
arXiv Detail & Related papers (2021-05-12T06:47:34Z) - Multilayer Clustered Graph Learning [66.94201299553336]
We use contrastive loss as a data fidelity term, in order to properly aggregate the observed layers into a representative graph.
Experiments show that our method leads to well-clustered representative graphs, which can in turn be used for solving clustering problems.
arXiv Detail & Related papers (2020-10-29T09:58:02Z) - Automatically Learning Compact Quality-aware Surrogates for Optimization Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in the model training pipeline results in predictions of the unobserved parameters that lead to higher decision quality.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.