Optimization and Interpretability of Graph Attention Networks for Small
Sparse Graph Structures in Automotive Applications
- URL: http://arxiv.org/abs/2305.16196v1
- Date: Thu, 25 May 2023 15:55:59 GMT
- Title: Optimization and Interpretability of Graph Attention Networks for Small
Sparse Graph Structures in Automotive Applications
- Authors: Marion Neumeier, Andreas Tollkühn, Sebastian Dorn, Michael Botsch,
Wolfgang Utschick
- Abstract summary: This work aims for a better understanding of the attention mechanism and analyzes whether its attention weights can be interpreted as indicators of causal importance.
For automotive applications, the Graph Attention Network (GAT) is a prominently used architecture to include relational information of a traffic scenario during feature embedding.
- Score: 11.581071131903775
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For automotive applications, the Graph Attention Network (GAT) is a
prominently used architecture to include relational information of a traffic
scenario during feature embedding. As shown in this work, however, one of the
most popular GAT realizations, namely GATv2, has potential pitfalls that hinder
optimal parameter learning. Especially for small and sparse graph structures,
proper optimization is problematic. To overcome these limitations, this work
proposes architectural modifications of GATv2. In controlled experiments, it is
shown that the proposed model adaptations improve prediction performance in a
node-level regression task and make the model more robust to parameter
initialization. This work aims for a better understanding of the attention
mechanism and analyzes whether its attention weights can be interpreted as
indicators of causal importance.
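As a concrete reference for the attention mechanism discussed above, the following is a minimal PyTorch sketch of a single-head GATv2-style layer operating on a dense adjacency mask, a natural representation for small, sparse graphs. It implements the standard GATv2 scoring of Brody et al., e_ij = a^T LeakyReLU(W_dst h_i + W_src h_j); the architectural modifications proposed in the paper are not detailed in the abstract and are therefore not reproduced here. All class and variable names are illustrative.
```python
# Minimal single-head GATv2-style layer (sketch). This follows the standard
# GATv2 scoring, not the modified architecture proposed in the paper above,
# whose details are not given in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATv2Layer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.w_src = nn.Linear(in_dim, out_dim, bias=False)  # transform for source (neighbour) nodes
        self.w_dst = nn.Linear(in_dim, out_dim, bias=False)  # transform for target nodes
        self.attn = nn.Linear(out_dim, 1, bias=False)         # attention vector a

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim) node features; adj: (N, N) mask with adj[i, j] = 1 if j is a neighbour of i
        h_src, h_dst = self.w_src(x), self.w_dst(x)
        # GATv2 score e_ij = a^T LeakyReLU(W_dst h_i + W_src h_j), computed for all node pairs
        scores = self.attn(F.leaky_relu(h_dst.unsqueeze(1) + h_src.unsqueeze(0))).squeeze(-1)
        scores = scores.masked_fill(adj == 0, float("-inf"))   # keep only existing edges
        alpha = torch.softmax(scores, dim=1)                   # attention over each node's neighbourhood
        alpha = torch.nan_to_num(alpha)                        # nodes without neighbours get zero rows
        return alpha @ h_src                                   # aggregated neighbour embedding per node


# Toy usage on a small, sparse graph (e.g. a handful of traffic participants):
x = torch.randn(5, 8)                    # 5 nodes with 8 input features each
adj = (torch.rand(5, 5) < 0.3).float()   # random sparse adjacency mask
out = GATv2Layer(8, 16)(x, adj)          # (5, 16) relational node embeddings
```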
Related papers
- Reconsidering the Performance of GAE in Link Prediction [27.038895601935195]
We investigate the potential of Graph Autoencoders (GAE).
Our findings reveal that a well-optimized GAE can match the performance of more complex models while offering greater computational efficiency.
arXiv Detail & Related papers (2024-11-06T11:29:47Z) - G-Adapter: Towards Structure-Aware Parameter-Efficient Transfer Learning
for Graph Transformer Networks [0.7118812771905295]
We show that it is sub-optimal to directly transfer existing PEFTs to graph-based tasks due to the issue of feature distribution shift.
We propose a novel structure-aware PEFT approach, named G-Adapter, to guide the updating process.
Extensive experiments demonstrate that G-Adapter obtains the state-of-the-art performance compared to the counterparts on nine graph benchmark datasets.
arXiv Detail & Related papers (2023-05-17T16:10:36Z) - GIF: A General Graph Unlearning Strategy via Influence Function [63.52038638220563]
Graph Influence Function (GIF) is a model-agnostic unlearning method that can efficiently and accurately estimate parameter changes in response to an $\epsilon$-mass perturbation in deleted data.
We conduct extensive experiments on four representative GNN models and three benchmark datasets to justify GIF's superiority in terms of unlearning efficacy, model utility, and unlearning efficiency.
arXiv Detail & Related papers (2023-04-06T03:02:54Z) - Adaptive Depth Graph Attention Networks [19.673509341792606]
The graph attention network (GAT) is considered the most advanced learning architecture for graph representation.
We find that the main factor limiting the accuracy of the GAT model as the number of layers increases is the oversquashing phenomenon.
We propose a GAT variant, ADGAT, that adaptively selects the number of layers based on the sparsity of the graph.
arXiv Detail & Related papers (2023-01-16T05:22:29Z) - Self-Supervised Graph Structure Refinement for Graph Neural Networks [31.924317784535155]
Graph structure learning (GSL) aims to learn the adjacency matrix for graph neural networks (GNNs).
Most existing GSL works apply a joint learning framework where the estimated adjacency matrix and GNN parameters are optimized for downstream tasks.
We propose a graph structure refinement (GSR) framework with a pretrain-finetune pipeline.
arXiv Detail & Related papers (2022-11-12T02:01:46Z) - Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural
Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike LTH-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
arXiv Detail & Related papers (2022-07-18T14:23:31Z) - Affinity-Aware Graph Networks [9.888383815189176]
Graph Neural Networks (GNNs) have emerged as a powerful technique for learning on relational data.
We explore the use of affinity measures as features in graph neural networks.
We propose message passing networks based on these features and evaluate their performance on a variety of node and graph property prediction tasks.
arXiv Detail & Related papers (2022-06-23T18:51:35Z) - Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
arXiv Detail & Related papers (2022-01-17T11:57:29Z) - Learning to Drop: Robust Graph Neural Network via Topological Denoising [50.81722989898142]
We propose PTDNet, a parameterized topological denoising network, to improve the robustness and generalization performance of Graph Neural Networks (GNNs).
PTDNet prunes task-irrelevant edges by penalizing the number of edges in the sparsified graph with parameterized networks.
We show that PTDNet can improve the performance of GNNs significantly and the performance gain becomes larger for more noisy datasets.
arXiv Detail & Related papers (2020-11-13T18:53:21Z) - Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data, which universally works in node classification, link prediction, and graph classification tasks; a minimal sketch of this perturbation loop is given after this list.
arXiv Detail & Related papers (2020-10-19T21:51:47Z) - Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior backdoor attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
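As referenced in the FLAG entry above, the following is a rough sketch of the kind of gradient-based feature perturbation loop described there: within each training step, a node-feature perturbation is updated by a few ascent steps while the parameter gradients from all steps are accumulated for a single optimizer update. The step size, number of ascent steps, the node-classification loss, and the `model(x, adj)` interface are illustrative assumptions, not the paper's exact setup.
```python
# Hedged sketch of a FLAG-style training step: adversarial node-feature
# augmentation with "free" gradient accumulation. Hyperparameters and the
# model/loss interface are assumptions for illustration only.
import torch
import torch.nn.functional as F

def flag_training_step(model, x, adj, y, optimizer, ascent_steps=3, step_size=1e-3):
    optimizer.zero_grad()
    # Start from a small random perturbation of the node features.
    delta = torch.empty_like(x).uniform_(-step_size, step_size).requires_grad_(True)
    for _ in range(ascent_steps):
        loss = F.cross_entropy(model(x + delta, adj), y) / ascent_steps
        loss.backward()  # parameter gradients accumulate across ascent steps
        # Gradient ascent on the perturbation (signed step), then restart its graph.
        delta = (delta + step_size * delta.grad.sign()).detach().requires_grad_(True)
    optimizer.step()     # one parameter update using the accumulated gradients
    return loss.item()   # last ascent step's (scaled) loss, e.g. for logging
```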