Learning Graphs from Smooth Signals under Moment Uncertainty
- URL: http://arxiv.org/abs/2105.05458v1
- Date: Wed, 12 May 2021 06:47:34 GMT
- Title: Learning Graphs from Smooth Signals under Moment Uncertainty
- Authors: Xiaolu Wang, Yuen-Man Pun, Anthony Man-Cho So
- Abstract summary: We consider the problem of inferring the graph structure from a given set of graph signals.
Because only finitely many, possibly noisy signals are observed, the statistical properties of the data distribution are ambiguous; traditional graph learning models do not take this distributional uncertainty into account.
- Score: 23.868075779606425
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of inferring the graph structure from a given set of
smooth graph signals. The number of observed graph signals is always finite
and the signals are possibly noisy, so the statistical properties of the data
distribution are ambiguous. Traditional graph learning models do not take this
distributional uncertainty into account, and their performance may therefore be
sensitive to different sets of data. In this paper, we propose a
distributionally robust approach to graph learning, which incorporates first-
and second-moment uncertainty into the smooth graph learning model.
Specifically, we cast our graph learning model as a minimax optimization
problem, and further reformulate it as a nonconvex minimization problem with
linear constraints. In our proposed formulation, we find a theoretical
interpretation of the Laplacian regularizer, which many existing works adopt
in an intuitive manner. Although the first moment uncertainty leads to a
troublesome square root term in the objective function, we prove that it
enjoys the smoothness property with probability 1 over the entire constraint
set. We develop an efficient projected gradient descent (PGD) method and
establish convergence of its entire iterate sequence to a critical point. We
conduct extensive experiments on both synthetic and real data to verify the
effectiveness of our model and the efficiency of the PGD algorithm. Compared
with state-of-the-art smooth graph learning methods, our approach exhibits
superior and more robust performance across different populations of signals
in terms of various evaluation metrics.
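To make the model concrete, here is a minimal sketch of the kind of formulation the abstract describes. The Laplacian quadratic form is the standard smoothness measure in this literature; the uncertainty set $\mathcal{U}$, the radii implicit in it, and the regularizer $h$ below are illustrative assumptions, not the paper's exact formulation.
```latex
% Smoothness of signals X over a graph with Laplacian L = D - W:
% the quadratic form sums weighted pairwise differences between node signals.
\operatorname{tr}(X^\top L X)
  = \tfrac{1}{2} \sum_{i,j} W_{ij}\, \| X_{i,:} - X_{j,:} \|_2^2

% An illustrative minimax (distributionally robust) graph learning model,
% where \mathcal{U}(\hat{\mu}, \hat{\Sigma}) collects distributions whose
% first and second moments lie within given radii of the empirical estimates:
\min_{L \in \mathcal{L}} \;
\max_{\mathbb{P} \in \mathcal{U}(\hat{\mu}, \hat{\Sigma})}
  \; \mathbb{E}_{x \sim \mathbb{P}} \big[ x^\top L x \big] \;+\; h(L)
```
The abstract's algorithm is projected gradient descent on a reformulation of such a model. Below is a minimal PGD sketch on a standard (non-robust) smooth graph learning surrogate over edge weights; the objective, constants, and all names are hypothetical stand-ins, not the paper's robust objective.
```python
import numpy as np

def pgd(grad_f, project, w0, step=5e-3, iters=2000, tol=1e-9):
    """Projected gradient descent: w <- Proj(w - step * grad_f(w))."""
    w = project(np.asarray(w0, dtype=float))
    for _ in range(iters):
        w_next = project(w - step * grad_f(w))
        if np.linalg.norm(w_next - w) < tol:  # iterates have stalled
            break
        w = w_next
    return w

# Illustrative objective over edge weights w >= 0.1:
#   f(w) = z @ w - alpha * sum(log(w)) + (beta / 2) * ||w||^2,
# where z[k] is the squared distance between the signals at the two
# endpoints of candidate edge k (so z @ w equals the Laplacian quadratic
# form), and the log barrier keeps edge weights away from zero.
def make_grad(z, alpha, beta):
    return lambda w: z - alpha / w + beta * w

rng = np.random.default_rng(0)
z = rng.random(10)                          # hypothetical pairwise distances
w_hat = pgd(make_grad(z, alpha=1.0, beta=1.0),
            project=lambda w: np.maximum(w, 0.1),  # simple box projection
            w0=np.ones(10))
```
This toy objective is convex, so PGD reaches its global minimizer; the paper's contribution is establishing smoothness (with probability 1) and global iterate convergence for the nonconvex robust reformulation, where only convergence to a critical point can be expected.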
Related papers
- GIF: A General Graph Unlearning Strategy via Influence Function [63.52038638220563]
Graph Influence Function (GIF) is a model-agnostic unlearning method that can efficiently and accurately estimate parameter changes in response to an $\epsilon$-mass perturbation in the deleted data.
We conduct extensive experiments on four representative GNN models and three benchmark datasets to justify GIF's superiority in terms of unlearning efficacy, model utility, and unlearning efficiency.
arXiv Detail & Related papers (2023-04-06T03:02:54Z)
- Gradient scarcity with Bilevel Optimization for Graph Learning [0.0]
Gradient scarcity occurs when learning a graph by minimizing a loss on a subset of nodes: edges between unlabelled nodes that are far from the labelled ones receive zero gradients.
We give a precise mathematical characterization of this phenomenon, and prove it also emerges in bilevel optimization.
To alleviate this issue, we study several solutions: latent graph learning using a Graph-to-Graph model (G2G), graph regularization that imposes a prior structure on the graph, and optimizing over a larger graph than the original one with a reduced diameter.
arXiv Detail & Related papers (2023-03-24T12:37:43Z)
- Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z)
- Graph Denoising with Framelet Regularizer [25.542429117462547]
This paper tailors regularizers for graph data to handle both feature and structure noise.
Our model achieves significantly better performance compared with popular graph convolutions even when the graph is heavily contaminated.
arXiv Detail & Related papers (2021-11-05T05:17:23Z)
- Distributionally Robust Semi-Supervised Learning Over Graphs [68.29280230284712]
Semi-supervised learning (SSL) over graph-structured data emerges in many network science applications.
To efficiently manage learning over graphs, variants of graph neural networks (GNNs) have been developed recently.
Despite their success in practice, most existing methods are unable to handle graphs with uncertain nodal attributes.
Challenges also arise due to distributional uncertainties associated with data acquired by noisy measurements.
A distributionally robust learning framework is developed, where the objective is to train models that exhibit quantifiable robustness against perturbations.
arXiv Detail & Related papers (2021-10-20T14:23:54Z)
- Robust Graph Learning Under Wasserstein Uncertainty [35.85333465732067]
In many scenarios, an accurate graph structure representing signals is not available at all.
We propose a graph learning framework using Wasserstein distributionally robust optimization (WDRO).
We show that our scheme can learn a reliable graph in the context of uncertainty.
arXiv Detail & Related papers (2021-05-10T09:09:44Z)
- Multilayer Clustered Graph Learning [66.94201299553336]
We use contrastive loss as a data fidelity term, in order to properly aggregate the observed layers into a representative graph.
Experiments show that our method leads to clustered graphs that are effective for solving clustering problems.
arXiv Detail & Related papers (2020-10-29T09:58:02Z)