Compositionality-Aware Graph2Seq Learning
- URL: http://arxiv.org/abs/2201.12178v1
- Date: Fri, 28 Jan 2022 15:22:39 GMT
- Title: Compositionality-Aware Graph2Seq Learning
- Authors: Takeshi D. Itoh and Takatomi Kubo and Kazushi Ikeda
- Abstract summary: The compositionality in a graph can be associated with the compositionality in the output sequence in many graph2seq tasks.
We adopt the multi-level attention pooling (MLAP) architecture, which can aggregate graph representations from multiple levels of information locality.
We demonstrate that a model with the MLAP architecture outperforms the previous state-of-the-art model while using more than seven times fewer parameters.
- Score: 2.127049691404299
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Graphs are a highly expressive data structure, but it is often difficult for
humans to find patterns in a complex graph. Hence, generating
human-interpretable sequences from graphs, called graph2seq learning, has
gained interest. In many graph2seq tasks, the compositionality in a graph is
expected to be associated with the compositionality in the output sequence.
Therefore, applying a compositionality-aware GNN architecture should improve
model performance. In this study, we adopt the multi-level attention pooling
(MLAP) architecture, which can aggregate graph representations from multiple
levels of information locality. As a real-world example, we take up the
extreme source code summarization task, in which a model estimates the name of
a program function from its source code. We demonstrate that a model with the
MLAP architecture outperforms the previous state-of-the-art model while using
more than seven times fewer parameters.
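The following is a minimal, hypothetical sketch of how multi-level attention pooling can be realized in PyTorch: an attention-weighted readout is applied after every GNN layer, yielding one graph vector per level of information locality, and the per-layer vectors are then summed into a single graph representation. The class names, dimensions, and the simple dense GCN update are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class SimpleGCNLayer(nn.Module):
    """One dense graph-convolution step: h' = relu(a_hat @ linear(h))."""
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, h, a_hat):
        return torch.relu(a_hat @ self.lin(h))


class MLAPEncoder(nn.Module):
    """Attention-pools the node states after every layer and sums the per-layer graph vectors."""
    def __init__(self, dim, num_layers):
        super().__init__()
        self.layers = nn.ModuleList([SimpleGCNLayer(dim) for _ in range(num_layers)])
        self.att = nn.Linear(dim, 1)  # scores each node for the attention readout

    def readout(self, h):
        # Attention pooling: softmax over nodes, then a weighted sum -> one graph vector.
        w = torch.softmax(self.att(h), dim=0)
        return (w * h).sum(dim=0)

    def forward(self, x, a_hat):
        per_layer = []
        h = x
        for layer in self.layers:
            h = layer(h, a_hat)
            per_layer.append(self.readout(h))  # graph vector at this locality level
        # Aggregate the representations from all levels of information locality.
        return torch.stack(per_layer).sum(dim=0)


# Toy usage: a 5-node graph with 16-dimensional node features; the identity matrix
# stands in for a normalized adjacency matrix purely as a placeholder.
x = torch.randn(5, 16)
a_hat = torch.eye(5)
graph_vec = MLAPEncoder(dim=16, num_layers=3)(x, a_hat)
print(graph_vec.shape)  # torch.Size([16])
```

In a graph2seq setting such as extreme source code summarization, a sequence decoder would consume the resulting graph vector to emit the function name token by token.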
Related papers
- GraphLSS: Integrating Lexical, Structural, and Semantic Features for Long Document Extractive Summarization [19.505955857963855]
We present GraphLSS, a heterogeneous graph construction for long document extractive summarization.
It defines two levels of information (words and sentences) and four types of edges (sentence semantic similarity, sentence occurrence order, word in sentence, and word semantic similarity) without any need for auxiliary learning models.
arXiv Detail & Related papers (2024-10-25T23:48:59Z) - InstructG2I: Synthesizing Images from Multimodal Attributed Graphs [50.852150521561676]
We propose a graph context-conditioned diffusion model called InstructG2I.
InstructG2I first exploits the graph structure and multimodal information to conduct informative neighbor sampling.
A Graph-QFormer encoder adaptively encodes the graph nodes into an auxiliary set of graph prompts to guide the denoising process.
arXiv Detail & Related papers (2024-10-09T17:56:15Z) - How Do Large Language Models Understand Graph Patterns? A Benchmark for Graph Pattern Comprehension [53.6373473053431]
This work introduces a benchmark to assess large language models' capabilities in graph pattern tasks.
We have developed a benchmark that evaluates whether LLMs can understand graph patterns based on either terminological or topological descriptions.
Our benchmark encompasses both synthetic and real datasets, covering a total of 11 tasks, and evaluates 7 models.
arXiv Detail & Related papers (2024-10-04T04:48:33Z) - SPGNN: Recognizing Salient Subgraph Patterns via Enhanced Graph Convolution and Pooling [25.555741218526464]
Graph neural networks (GNNs) have revolutionized the field of machine learning on non-Euclidean data such as graphs and networks.
We propose a concatenation-based graph convolution mechanism that injectively updates node representations.
We also design a novel graph pooling module, called WL-SortPool, to learn important subgraph patterns in a deep-learning manner.
arXiv Detail & Related papers (2024-04-21T13:11:59Z) - GraphMaker: Can Diffusion Models Generate Large Attributed Graphs? [7.330479039715941]
Large-scale graphs with node attributes are increasingly common in various real-world applications.
Traditional graph generation methods are limited in their capacity to handle these complex structures.
This paper introduces a novel diffusion model, GraphMaker, specifically designed for generating large attributed graphs.
arXiv Detail & Related papers (2023-10-20T22:12:46Z) - Permutation Equivariant Graph Framelets for Heterophilous Graph Learning [6.679929638714752]
We develop a new way to implement multi-scale extraction via constructing Haar-type graph framelets.
We show that our model can achieve the best performance on certain datasets of heterophilous graphs.
arXiv Detail & Related papers (2023-06-07T09:05:56Z) - GrannGAN: Graph annotation generative adversarial networks [72.66289932625742]
We consider the problem of modelling high-dimensional distributions and generating new examples of data with complex relational feature structure coherent with a graph skeleton.
The model we propose tackles the problem of generating the data features constrained by the specific graph structure of each data point by splitting the task into two phases.
In the first phase, it models the distribution of features associated with the nodes of the given graph; in the second, it complements the edge features conditioned on the node features.
arXiv Detail & Related papers (2022-12-01T11:49:07Z) - Joint Graph Learning and Matching for Semantic Feature Correspondence [69.71998282148762]
We propose a joint graph learning and matching network, named GLAM, to explore reliable graph structures for boosting graph matching.
The proposed method is evaluated on three popular visual matching benchmarks (Pascal VOC, Willow Object, and SPair-71k).
It outperforms previous state-of-the-art graph matching methods by significant margins on all benchmarks.
arXiv Detail & Related papers (2021-09-01T08:24:02Z) - Model-Agnostic Graph Regularization for Few-Shot Learning [60.64531995451357]
We present a comprehensive study on graph-embedded few-shot learning.
We introduce a graph regularization approach that allows a deeper understanding of the impact of incorporating graph information between labels.
Our approach improves the performance of strong base learners by up to 2% on Mini-ImageNet and 6.7% on ImageNet-FS.
arXiv Detail & Related papers (2021-02-14T05:28:13Z) - Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z) - Deep Graph Mapper: Seeing Graphs through the Neural Lens [4.401427499962144]
We merge Mapper with the expressive power of Graph Neural Networks (GNNs) to produce hierarchical, topologically-grounded visualisations of graphs.
These visualisations not only help discern the structure of complex graphs but also provide a means of understanding the models applied to them for solving various tasks.
arXiv Detail & Related papers (2020-02-10T15:29:09Z)