Key-Graph Transformer for Image Restoration
- URL: http://arxiv.org/abs/2402.02634v1
- Date: Sun, 4 Feb 2024 23:00:24 GMT
- Title: Key-Graph Transformer for Image Restoration
- Authors: Bin Ren, Yawei Li, Jingyun Liang, Rakesh Ranjan, Mengyuan Liu, Rita
Cucchiara, Luc Van Gool, Nicu Sebe
- Abstract summary: We introduce the Key-Graph Transformer (KGT) in this paper. Specifically, KGT views patch features as graph nodes.
The proposed Key-Graph Constructor efficiently forms a sparse yet representative Key-Graph by selectively connecting essential nodes instead of all the nodes.
- Score: 122.7334034968327
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While it is crucial to capture global information for effective image
restoration (IR), integrating such cues into transformer-based methods becomes
computationally expensive, especially with high input resolution. Furthermore,
the self-attention mechanism in transformers is prone to considering
unnecessary global cues from unrelated objects or regions, introducing
computational inefficiencies. In response to these challenges, we introduce the
Key-Graph Transformer (KGT) in this paper. Specifically, KGT views patch
features as graph nodes. The proposed Key-Graph Constructor efficiently forms a
sparse yet representative Key-Graph by selectively connecting essential nodes
instead of all the nodes. The proposed Key-Graph Attention is then conducted,
under the guidance of the Key-Graph, only among the selected nodes, achieving
linear computational complexity within each window. Extensive experiments across 6 IR
tasks confirm the proposed KGT's state-of-the-art performance, showcasing
advancements both quantitatively and qualitatively.
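To make the two components concrete, here is a minimal sketch of how a Key-Graph could be built and used inside one window: each node keeps only its top-k most similar nodes, and attention is then restricted to those edges. The similarity measure, the value of k, and the dense-attention-plus-mask formulation are illustrative assumptions, not the authors' implementation; masking a dense attention map, as done here for simplicity, does not by itself attain the linear per-window complexity stated in the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KeyGraphAttentionSketch(nn.Module):
    """Illustrative sketch: top-k Key-Graph construction + attention restricted
    to the selected nodes within one window. Assumptions: cosine similarity,
    fixed k, dense attention with masking (not linear-complexity)."""

    def __init__(self, dim: int, top_k: int = 8):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        self.top_k = top_k

    def build_key_graph(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, dim) patch features of one window, viewed as graph nodes.
        x_norm = F.normalize(x, dim=-1)
        sim = x_norm @ x_norm.t()                      # (N, N) cosine similarity
        k_sel = min(self.top_k, x.size(0))
        idx = sim.topk(k_sel, dim=-1).indices          # keep the k most similar nodes per node
        mask = torch.zeros_like(sim)
        mask.scatter_(1, idx, 1.0)
        return mask.bool()                             # sparse Key-Graph adjacency

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        adj = self.build_key_graph(x)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = q @ k.t() / q.shape[-1] ** 0.5
        attn = attn.masked_fill(~adj, float("-inf"))   # attend only along Key-Graph edges
        return self.proj(F.softmax(attn, dim=-1) @ v)
```

In the paper, this Key-Graph-guided attention replaces full self-attention within each window, so each node aggregates cues only from the selected, relevant nodes.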
Related papers
- Task-Oriented Communication for Graph Data: A Graph Information Bottleneck Approach [12.451324619122405]
This paper introduces a method to extract a smaller, task-focused subgraph that maintains key information while reducing communication overhead.
Our approach utilizes graph neural networks (GNNs) and the graph information bottleneck (GIB) principle to create a compact, informative, and robust graph representation suitable for transmission.
arXiv Detail & Related papers (2024-09-04T14:01:56Z)
- Graph Transformers for Large Graphs [57.19338459218758]
This work advances representation learning on single large-scale graphs with a focus on identifying model characteristics and critical design constraints.
A key innovation of this work lies in the creation of a fast neighborhood sampling technique coupled with a local attention mechanism.
We report a 3x speedup and a 16.8% performance gain on ogbn-products and snap-patents, and further scale LargeGT to ogbn-100M with a 5.9% performance improvement.
arXiv Detail & Related papers (2023-12-18T11:19:23Z)
- Transforming Graphs for Enhanced Attribute Clustering: An Innovative Graph Transformer-Based Method [8.989218350080844]
This study introduces an innovative method known as the Graph Transformer Auto-Encoder for Graph Clustering (GTAGC).
By melding the Graph Auto-Encoder with the Graph Transformer, GTAGC is adept at capturing global dependencies between nodes.
The architecture of GTAGC encompasses graph embedding, integration of the Graph Transformer within the autoencoder structure, and a clustering component.
arXiv Detail & Related papers (2023-06-20T06:04:03Z)
- SGFormer: Simplifying and Empowering Transformers for Large-Graph Representations [75.71298846760303]
We show that a one-layer attention can deliver surprisingly competitive performance across node property prediction benchmarks.
We frame the proposed scheme as Simplified Graph Transformers (SGFormer), which is empowered by a simple attention model.
We believe the proposed methodology opens up a new technical path of independent interest for building Transformers on large graphs.
arXiv Detail & Related papers (2023-06-19T08:03:25Z)
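As a rough illustration of the "one-layer attention" point above, the following is a generic single attention layer over node features for node property prediction. It is an assumption-based sketch, not SGFormer's exact simplified, linear-complexity attention.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OneLayerNodeAttention(nn.Module):
    """A single global attention layer over node features (illustrative only;
    SGFormer's actual attention is a simplified, linear-complexity variant)."""

    def __init__(self, in_dim: int, hidden_dim: int, num_classes: int):
        super().__init__()
        self.q = nn.Linear(in_dim, hidden_dim)
        self.k = nn.Linear(in_dim, hidden_dim)
        self.v = nn.Linear(in_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, in_dim); all-pair attention over the node set.
        q, k, v = self.q(x), self.k(x), self.v(x)
        attn = F.softmax(q @ k.t() / q.shape[-1] ** 0.5, dim=-1)
        return self.out(attn @ v)   # (num_nodes, num_classes) logits
```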
- Graph Transformer GANs for Graph-Constrained House Generation [223.739067413952]
We present a novel graph Transformer generative adversarial network (GTGAN) that learns effective graph node relations in an end-to-end fashion for the challenging graph-constrained house generation task.
arXiv Detail & Related papers (2023-03-14T20:35:45Z)
- Graph Reasoning Transformer for Image Parsing [67.76633142645284]
We propose a novel Graph Reasoning Transformer (GReaT) for image parsing to enable image patches to interact following a relation reasoning pattern.
Compared to the conventional transformer, GReaT has higher interaction efficiency and a more purposeful interaction pattern.
Results show that GReaT achieves consistent performance gains with only slight computational overhead compared to state-of-the-art transformer baselines.
arXiv Detail & Related papers (2022-09-20T08:21:37Z)
- Graph Decipher: A transparent dual-attention graph neural network to understand the message-passing mechanism for the node classification [2.0047096160313456]
We propose a new transparent network called Graph Decipher to investigate the message-passing mechanism.
Our algorithm achieves state-of-the-art performance while imposing a substantially lower computational burden on the node classification task.
arXiv Detail & Related papers (2022-01-04T23:24:00Z)
- Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data, which universally works in node classification, link prediction, and graph classification tasks.
arXiv Detail & Related papers (2020-10-19T21:51:47Z)
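A minimal sketch of the idea above, iteratively perturbing node features with gradient-based adversarial noise during training, is shown below. The initialization, sign-based ascent rule, step size, and number of steps are illustrative assumptions and may differ from FLAG's exact procedure; the model is assumed to take node features and run on a fixed graph internally.

```python
import torch
import torch.nn.functional as F

def flag_style_step(model, x, y, optimizer, step_size=1e-3, ascent_steps=3):
    """One training step with gradient-based adversarial feature augmentation.

    Sketch only: the exact initialization and ascent rule used by FLAG may differ.
    x: (num_nodes, feat_dim) node features; y: labels of the supervised nodes.
    """
    model.train()
    optimizer.zero_grad()

    # Start from a small random perturbation of the node features.
    delta = torch.zeros_like(x).uniform_(-step_size, step_size).requires_grad_(True)

    for _ in range(ascent_steps):
        loss = F.cross_entropy(model(x + delta), y) / ascent_steps
        loss.backward()  # accumulates gradients in both delta and the model parameters

        # Ascent step on the perturbation (sign-based update assumed here).
        delta = (delta.detach() + step_size * delta.grad.detach().sign()).requires_grad_(True)

    optimizer.step()  # apply the accumulated (averaged) model gradients
    return loss.item()
```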
- Simple and Effective Graph Autoencoders with One-Hop Linear Models [25.37082257457257]
We show that graph convolutional network (GCN) encoders are unnecessarily complex for many applications.
We propose to replace them with significantly simpler and more interpretable linear models w.r.t. the direct neighborhood (one-hop) adjacency matrix of the graph.
arXiv Detail & Related papers (2020-01-21T15:33:12Z)
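To make the one-hop linear encoder concrete, below is a sketch of a graph autoencoder whose encoder is a single linear map of the normalized one-hop adjacency, Z = A_norm X W, with an inner-product decoder. The symmetric normalization and decoder choice are common conventions assumed here, not necessarily the paper's exact configuration.

```python
import torch
import torch.nn as nn

class LinearGraphAutoencoder(nn.Module):
    """Graph autoencoder whose encoder is one linear map of the one-hop
    neighborhood: Z = A_norm @ X @ W (no GCN stack, no nonlinearity).

    Sketch under common conventions (symmetric normalization with self-loops,
    inner-product decoder); the original paper's exact setup may differ."""

    def __init__(self, in_dim: int, emb_dim: int):
        super().__init__()
        self.weight = nn.Linear(in_dim, emb_dim, bias=False)

    @staticmethod
    def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
        # Symmetrically normalized adjacency with self-loops: D^-1/2 (A + I) D^-1/2.
        a = adj + torch.eye(adj.size(0), device=adj.device)
        d_inv_sqrt = a.sum(dim=1).pow(-0.5)
        return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)

    def encode(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        return self.normalize_adj(adj) @ self.weight(x)   # (N, emb_dim) embeddings

    def decode(self, z: torch.Tensor) -> torch.Tensor:
        # Inner-product decoder: reconstructed edge probabilities.
        return torch.sigmoid(z @ z.t())

    def forward(self, x, adj):
        z = self.encode(x, adj)
        return self.decode(z), z
```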
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.