Optimization and Interpretability of Graph Attention Networks for Small
Sparse Graph Structures in Automotive Applications
- URL: http://arxiv.org/abs/2305.16196v1
- Date: Thu, 25 May 2023 15:55:59 GMT
- Title: Optimization and Interpretability of Graph Attention Networks for Small Sparse Graph Structures in Automotive Applications
- Authors: Marion Neumeier, Andreas Tollkühn, Sebastian Dorn, Michael Botsch, Wolfgang Utschick
- Abstract summary: This work aims for a better understanding of the attention mechanism and analyzes its interpretability in identifying causal importance.
For automotive applications, the Graph Attention Network (GAT) is a prominently used architecture to include relational information of a traffic scenario during feature embedding.
- Score: 11.581071131903775
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For automotive applications, the Graph Attention Network (GAT) is a
prominently used architecture to include relational information of a traffic
scenario during feature embedding. As shown in this work, however, one of the
most popular GAT realizations, namely GATv2, has potential pitfalls that hinder
optimal parameter learning. Proper optimization is especially problematic for
small and sparse graph structures. To overcome these limitations, this work
proposes architectural modifications of GATv2. Controlled experiments show
that the proposed model adaptations improve prediction performance in a
node-level regression task and make the model more robust to parameter
initialization. This work aims for a better understanding of the attention
mechanism and analyzes its interpretability in identifying causal importance.
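The GATv2 attention update the abstract refers to can be sketched in plain Python. This is a minimal illustration of the scoring rule from Brody et al., not the paper's modified architecture; the toy features `h`, weight matrix `W`, and attention vector `a` below are hypothetical values chosen only for demonstration.

```python
import math

def leaky_relu(x, slope=0.2):
    return x if x > 0.0 else slope * x

def gatv2_attention(i, neighbors, h, W, a):
    """Attention weights of node i over its neighbors, GATv2-style.

    Score: e_ij = a . LeakyReLU(W [h_i || h_j]) -- in GATv2 the learned
    vector `a` is applied *after* the nonlinearity, unlike the original
    GAT, so the neighbor ranking can depend on the query node i.
    """
    def score(j):
        z = h[i] + h[j]  # list concatenation = [h_i || h_j]
        Wz = [sum(W[r][c] * z[c] for c in range(len(z)))
              for r in range(len(W))]
        return sum(a[r] * leaky_relu(Wz[r]) for r in range(len(a)))

    raw = [score(j) for j in neighbors]
    m = max(raw)  # numerically stabilized softmax over the neighborhood
    exps = [math.exp(e - m) for e in raw]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical toy graph: node 0 attends over neighbors 1 and 2.
h = {0: [1.0, 0.0], 1: [0.5, 0.5], 2: [0.0, 1.0]}
W = [[1.0, 0.0, 1.0, 0.0],   # maps the 4-dim concatenation to 2 dims
     [0.0, 1.0, 0.0, 1.0]]
a = [1.0, -1.0]
alpha = gatv2_attention(0, [1, 2], h, W, a)  # weights sum to 1
```

Note that with only one or two neighbors the softmax is taken over very few terms; this small, sparse neighborhood regime is exactly where the abstract locates the optimization difficulties.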
Related papers
- VecFormer: Towards Efficient and Generalizable Graph Transformer with Graph Token Attention [61.96837866507746]
VecFormer is an efficient and highly generalizable model for node classification.
VecFormer outperforms the existing Graph Transformer in both performance and speed.
arXiv Detail & Related papers (2026-02-23T09:10:39Z)
- GADPN: Graph Adaptive Denoising and Perturbation Networks via Singular Value Decomposition [6.24191713518868]
GADPN is a graph structure learning framework that adaptively refines graph topology via low-rank denoising and generalized structural perturbation.
It achieves state-of-the-art performance while significantly improving efficiency.
It shows particularly strong gains on challenging disassortative graphs, validating its ability to robustly learn enhanced graph structures.
arXiv Detail & Related papers (2026-01-13T05:25:32Z)
- AutoGraph-R1: End-to-End Reinforcement Learning for Knowledge Graph Construction [60.51319139563509]
We introduce AutoGraph-R1, the first framework to directly optimize KG construction for task performance using Reinforcement Learning (RL).
We design two novel, task-aware reward functions, one for graphs as knowledge carriers and another for graphs as knowledge indices.
Our work shows it is possible to close the loop between construction and application, shifting the paradigm from building "intrinsically good" graphs to building "demonstrably useful" ones.
arXiv Detail & Related papers (2025-10-17T06:03:36Z)
- Transfer Learning on Edge Connecting Probability Estimation under Graphon Model [7.805468525082696]
We propose a transfer learning framework that integrates neighborhood smoothing and Gromov-Wasserstein optimal transport to align and transfer structural patterns between graphs.
GTRANS includes an adaptive debiasing mechanism that identifies and corrects for target-specific deviations via residual smoothing.
These improvements translate directly to enhanced performance in downstream applications, such as graph classification and link prediction.
arXiv Detail & Related papers (2025-10-07T02:37:12Z)
- GILT: An LLM-Free, Tuning-Free Graph Foundational Model for In-Context Learning [50.40400074353263]
Graph Neural Networks (GNNs) are powerful tools for processing relational data but often struggle to generalize to unseen graphs.
We introduce the Graph In-context Learning Transformer (GILT), a framework built on an LLM-free and tuning-free architecture.
arXiv Detail & Related papers (2025-10-06T08:09:15Z)
- Visualization and Analysis of the Loss Landscape in Graph Neural Networks [8.389368477330612]
Graph Neural Networks (GNNs) are powerful models for graph-structured data, with broad applications.
We introduce an efficient learnable dimensionality reduction method for visualizing GNN loss landscapes.
We analyze the effects of over-smoothing, jumping knowledge, quantization, sparsification, and preconditioners on GNN optimization.
arXiv Detail & Related papers (2025-09-15T11:22:55Z)
- Revisiting Graph Neural Networks on Graph-level Tasks: Comprehensive Experiments, Analysis, and Improvements [54.006506479865344]
We propose a unified evaluation framework for graph-level Graph Neural Networks (GNNs).
This framework provides a standardized setting to evaluate GNNs across diverse datasets.
We also propose a novel GNN model with enhanced expressivity and generalization capabilities.
arXiv Detail & Related papers (2025-01-01T08:48:53Z)
- Amplify Graph Learning for Recommendation via Sparsity Completion [16.32861024767423]
Graph learning models have been widely deployed in collaborative filtering (CF) based recommendation systems.
Due to data sparsity, the graph structure of the original input lacks potential positive preference edges.
We propose an Amplify Graph Learning framework based on Sparsity Completion (AGL-SC).
arXiv Detail & Related papers (2024-06-27T08:26:20Z)
- G-Adapter: Towards Structure-Aware Parameter-Efficient Transfer Learning for Graph Transformer Networks [0.7118812771905295]
We show that it is sub-optimal to directly transfer existing PEFTs to graph-based tasks due to the issue of feature distribution shift.
We propose a novel structure-aware PEFT approach, named G-Adapter, to guide the updating process.
Extensive experiments demonstrate that G-Adapter achieves state-of-the-art performance compared to its counterparts on nine graph benchmark datasets.
arXiv Detail & Related papers (2023-05-17T16:10:36Z)
- GIF: A General Graph Unlearning Strategy via Influence Function [63.52038638220563]
Graph Influence Function (GIF) is a model-agnostic unlearning method that can efficiently and accurately estimate parameter changes in response to an $\epsilon$-mass perturbation in deleted data.
We conduct extensive experiments on four representative GNN models and three benchmark datasets to justify GIF's superiority in terms of unlearning efficacy, model utility, and unlearning efficiency.
arXiv Detail & Related papers (2023-04-06T03:02:54Z)
- Adaptive Depth Graph Attention Networks [19.673509341792606]
The graph attention network (GAT) is considered the most advanced learning architecture for graph representation.
We find that the main factor limiting the accuracy of the GAT model as the number of layers increases is the oversquashing phenomenon.
We propose ADGAT, a GAT variant that adaptively selects the number of layers based on the sparsity of the graph.
arXiv Detail & Related papers (2023-01-16T05:22:29Z)
- Self-Supervised Graph Structure Refinement for Graph Neural Networks [31.924317784535155]
Graph structure learning (GSL) aims to learn the adjacency matrix for graph neural networks (GNNs).
Most existing GSL works apply a joint learning framework where the estimated adjacency matrix and GNN parameters are optimized for downstream tasks.
We propose a graph structure refinement (GSR) framework with a pretrain-finetune pipeline.
arXiv Detail & Related papers (2022-11-12T02:01:46Z)
- Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike LTH-based methods, the proposed CGP approach requires no re-training, which significantly reduces computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
arXiv Detail & Related papers (2022-07-18T14:23:31Z)
- Affinity-Aware Graph Networks [9.888383815189176]
Graph Neural Networks (GNNs) have emerged as a powerful technique for learning on relational data.
We explore the use of affinity measures as features in graph neural networks.
We propose message passing networks based on these features and evaluate their performance on a variety of node and graph property prediction tasks.
arXiv Detail & Related papers (2022-06-23T18:51:35Z)
- Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by the data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
arXiv Detail & Related papers (2022-01-17T11:57:29Z)
- Learning to Drop: Robust Graph Neural Network via Topological Denoising [50.81722989898142]
We propose PTDNet, a parameterized topological denoising network, to improve the robustness and generalization performance of Graph Neural Networks (GNNs).
PTDNet prunes task-irrelevant edges by penalizing the number of edges in the sparsified graph with parameterized networks.
We show that PTDNet can improve the performance of GNNs significantly, and the performance gain becomes larger for noisier datasets.
arXiv Detail & Related papers (2020-11-13T18:53:21Z)
- Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data, which universally works in node classification, link prediction, and graph classification tasks.
arXiv Detail & Related papers (2020-10-19T21:51:47Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.