Learning Graph Regularisation for Guided Super-Resolution
- URL: http://arxiv.org/abs/2203.14297v1
- Date: Sun, 27 Mar 2022 13:12:18 GMT
- Title: Learning Graph Regularisation for Guided Super-Resolution
- Authors: Riccardo de Lutio and Alexander Becker and Stefano D'Aronco and
Stefania Russo and Jan D. Wegner and Konrad Schindler
- Abstract summary: We introduce a novel formulation for guided super-resolution.
Its core is a differentiable optimisation layer that operates on a learned affinity graph.
We extensively evaluate our method on several datasets, and consistently outperform recent baselines in terms of quantitative reconstruction errors.
- Score: 77.7568596501908
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a novel formulation for guided super-resolution. Its core is a
differentiable optimisation layer that operates on a learned affinity graph.
The learned graph potentials make it possible to leverage rich contextual
information from the guide image, while the explicit graph optimisation within
the architecture guarantees rigorous fidelity of the high-resolution target to
the low-resolution source. With the decision to employ the source as a
constraint rather than only as an input to the prediction, our method differs
from state-of-the-art deep architectures for guided super-resolution, which
produce targets that, when downsampled, will only approximately reproduce the
source. This is not only theoretically appealing, but also produces crisper,
more natural-looking images. A key property of our method is that, although the
graph connectivity is restricted to the pixel lattice, the associated edge
potentials are learned with a deep feature extractor and can encode rich
context information over large receptive fields. By taking advantage of the
sparse graph connectivity, it becomes possible to propagate gradients through
the optimisation layer and learn the edge potentials from data. We extensively
evaluate our method on several datasets, and consistently outperform recent
baselines in terms of quantitative reconstruction errors, while also delivering
visually sharper outputs. Moreover, we demonstrate that our method generalises
particularly well to new datasets not seen during training.
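The hard-constraint formulation described above can be illustrated as a tiny equality-constrained quadratic program: minimise the graph smoothness energy xᵀLx subject to the exact downsampling constraint Dx = y. The sketch below is a toy stand-in, not the paper's differentiable layer: the 4×4 target, 2×2 average-pooling source, and the hand-crafted guide-derived edge weights (in place of the learned potentials) are all assumptions for illustration.

```python
import numpy as np

# Toy illustration of guided super-resolution as an equality-constrained
# quadratic program: minimise x^T L x subject to D x = y, where L is a
# graph Laplacian on the pixel lattice and D is the downsampling operator.
H = W = 4                      # hypothetical 4x4 high-res target
n = H * W

# Synthetic "guide": left half dark, right half bright (a vertical edge).
guide = np.kron(np.array([[0.0, 1.0], [0.0, 1.0]]), np.ones((2, 2)))

def idx(i, j):
    return i * W + j

# 4-neighbour lattice Laplacian; the edge weight decays with guide
# contrast (an assumed hand-crafted form, standing in for the learned
# edge potentials of the paper).
L = np.zeros((n, n))
for i in range(H):
    for j in range(W):
        for di, dj in ((0, 1), (1, 0)):
            ni, nj = i + di, j + dj
            if ni < H and nj < W:
                w = np.exp(-abs(guide[i, j] - guide[ni, nj]))
                a, b = idx(i, j), idx(ni, nj)
                L[a, a] += w
                L[b, b] += w
                L[a, b] -= w
                L[b, a] -= w

# Downsampling operator D: 2x2 average pooling, shape (4, 16).
m = 4
D = np.zeros((m, n))
for bi in range(2):
    for bj in range(2):
        for di in range(2):
            for dj in range(2):
                D[bi * 2 + bj, idx(2 * bi + di, 2 * bj + dj)] = 0.25

y = np.array([1.0, 2.0, 3.0, 4.0])   # low-res source

# KKT system of the equality-constrained QP:
#   [2L  D^T] [x ]   [0]
#   [D    0 ] [nu] = [y]
K = np.block([[2 * L, D.T], [D, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.concatenate([np.zeros(n), y]))
x = sol[:n].reshape(H, W)

# The hard constraint holds exactly: downsampling x reproduces the source.
assert np.allclose(D @ x.reshape(-1), y)
```

This dense KKT solve is only feasible at toy scale; the paper instead exploits the sparse lattice connectivity of L to make the optimisation layer tractable and to backpropagate gradients through it into the feature extractor that produces the edge potentials.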
Related papers
- Preserving Node Distinctness in Graph Autoencoders via Similarity Distillation [9.395697548237333]
Graph autoencoders (GAEs) rely on distance-based criteria, such as mean-square error (MSE), to reconstruct the input graph.
Relying solely on a single reconstruction criterion may lead to a loss of distinctness in the reconstructed graph.
We develop a simple yet effective strategy to preserve the necessary distinctness in the reconstructed graph.
arXiv Detail & Related papers (2024-06-25T12:54:35Z)
- Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z)
- A Complex Network based Graph Embedding Method for Link Prediction [0.0]
We present a novel graph embedding approach based on the popularity-similarity and local attraction paradigms.
We show, using extensive experimental analysis, that the proposed method outperforms state-of-the-art graph embedding algorithms.
arXiv Detail & Related papers (2022-09-11T14:46:38Z)
- Deep Manifold Learning with Graph Mining [80.84145791017968]
We propose a novel deep graph model with a non-gradient decision layer for graph mining.
The proposed model has achieved state-of-the-art performance compared to the current models.
arXiv Detail & Related papers (2022-07-18T04:34:08Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Scaling Up Graph Neural Networks Via Graph Coarsening [18.176326897605225]
Scalability of graph neural networks (GNNs) is one of the major challenges in machine learning.
In this paper, we propose to use graph coarsening for scalable training of GNNs.
We show that, by simply applying off-the-shelf coarsening methods, we can reduce the number of nodes by up to a factor of ten without a noticeable drop in classification accuracy.
arXiv Detail & Related papers (2021-06-09T15:46:17Z)
- GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract the private training graph by inverting a GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
arXiv Detail & Related papers (2021-06-05T07:07:52Z)
- A Light Heterogeneous Graph Collaborative Filtering Model using Textual Information [16.73333758538986]
We exploit the relevant and easily accessible textual information by advanced natural language processing (NLP) models.
We propose a light collaborative filtering method for heterogeneous graphs based on a relational graph convolutional network (RGCN).
arXiv Detail & Related papers (2020-10-04T11:10:37Z)
- Iterative Deep Graph Learning for Graph Neural Networks: Better and Robust Node Embeddings [53.58077686470096]
We propose an end-to-end graph learning framework, namely Iterative Deep Graph Learning (IDGL) for jointly and iteratively learning graph structure and graph embedding.
Our experiments show that our proposed IDGL models can consistently outperform or match the state-of-the-art baselines.
arXiv Detail & Related papers (2020-06-21T19:49:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.