Supertagging Combinatory Categorial Grammar with Attentive Graph
Convolutional Networks
- URL: http://arxiv.org/abs/2010.06115v2
- Date: Wed, 18 Nov 2020 05:46:34 GMT
- Title: Supertagging Combinatory Categorial Grammar with Attentive Graph
Convolutional Networks
- Authors: Yuanhe Tian, Yan Song, Fei Xia
- Abstract summary: We propose attentive graph convolutional networks to enhance neural CCG supertagging through a novel solution of leveraging contextual information.
Experiments performed on the CCGbank demonstrate that our approach outperforms all previous studies in terms of both supertagging and parsing.
- Score: 34.74687603029737
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Supertagging is conventionally regarded as an important task for
combinatory categorial grammar (CCG) parsing, where effective modeling of
contextual information is essential. However, existing studies have
made limited efforts to leverage contextual features except for applying
powerful encoders (e.g., bi-LSTM). In this paper, we propose attentive graph
convolutional networks to enhance neural CCG supertagging through a novel
solution of leveraging contextual information. Specifically, we build the graph
from chunks (n-grams) extracted from a lexicon and apply attention over the
graph, so that different word pairs from the contexts within and across chunks
are weighted in the model and facilitate the supertagging accordingly. The
experiments performed on the CCGbank demonstrate that our approach outperforms
all previous studies in terms of both supertagging and parsing. Further
analyses illustrate the effectiveness of each component of our approach in
learning discriminatively from word pairs to enhance CCG supertagging.
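The abstract describes the approach only at a high level. Below is a minimal PyTorch sketch of one plausible reading of that pipeline, not the authors' implementation: word pairs that co-occur inside a chunk (an n-gram found in a lexicon) are linked in a graph, and an attention-weighted graph convolution over that graph is combined with a contextual encoder before per-word supertag classification. The lexicon format, n-gram matching scheme, module names, and layer sizes are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code): an attention-weighted GCN over a
# word-pair graph built from lexicon n-gram "chunks", on top of a sentence encoder.
import torch
import torch.nn as nn


def chunk_graph(tokens, lexicon, max_n=3):
    """Connect every word pair that co-occurs inside an n-gram found in the lexicon.

    `lexicon` is assumed to be a set of space-joined n-grams; this simple matching
    scheme is an illustrative stand-in for the paper's chunk extraction.
    """
    n = len(tokens)
    adj = torch.eye(n)  # self-loops so every word at least attends to itself
    for size in range(2, max_n + 1):
        for start in range(n - size + 1):
            if " ".join(tokens[start:start + size]) in lexicon:
                for i in range(start, start + size):
                    for j in range(start, start + size):
                        adj[i, j] = 1.0  # edge between words in the same chunk
    return adj


class AttentiveGCNLayer(nn.Module):
    """One graph convolution whose edges are re-weighted by learned attention,
    so different word pairs contribute with different strengths."""

    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)

    def forward(self, h, adj):
        # h: (seq_len, dim) encoder states; adj: (seq_len, seq_len) chunk-graph adjacency
        scores = self.query(h) @ self.key(h).t() / h.size(-1) ** 0.5
        scores = scores.masked_fill(adj == 0, float("-inf"))  # keep only graph edges
        weights = torch.softmax(scores, dim=-1)                # attention over word pairs
        return torch.relu(weights @ self.value(h))


class Supertagger(nn.Module):
    """Contextual encoder + attentive GCN + per-token supertag classifier."""

    def __init__(self, vocab_size, num_tags, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        # Unbatched bi-LSTM stands in for the paper's stronger encoders (e.g., BERT).
        self.encoder = nn.LSTM(dim, dim // 2, bidirectional=True)
        self.gcn = AttentiveGCNLayer(dim)
        self.out = nn.Linear(2 * dim, num_tags)

    def forward(self, token_ids, adj):
        # token_ids: (seq_len,) word indices for one sentence
        h, _ = self.encoder(self.embed(token_ids))   # (seq_len, dim) contextual states
        g = self.gcn(h, adj)                         # graph-enhanced features
        return self.out(torch.cat([h, g], dim=-1))   # one supertag score vector per word
```

In the paper the contextual encoder is a stronger pretrained model rather than the small bi-LSTM used here; the sketch is only meant to show where the chunk graph enters and how attention weights individual word pairs.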
Related papers
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly simple approach for textual graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) on a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of the fine-tuned LM.
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
- ConGraT: Self-Supervised Contrastive Pretraining for Joint Graph and Text Embeddings [20.25180279903009]
We propose Contrastive Graph-Text pretraining (ConGraT) for jointly learning separate representations of texts and nodes in a text-attributed graph (TAG).
Our method trains a language model (LM) and a graph neural network (GNN) to align their representations in a common latent space using a batch-wise contrastive learning objective inspired by CLIP (see the sketch after this list).
Experiments demonstrate that ConGraT outperforms baselines on various downstream tasks, including node and text category classification, link prediction, and language modeling.
arXiv Detail & Related papers (2023-05-23T17:53:30Z)
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
- Stacked Hybrid-Attention and Group Collaborative Learning for Unbiased Scene Graph Generation [62.96628432641806]
Scene Graph Generation aims to first encode the visual contents within the given image and then parse them into a compact summary graph.
We first present a novel Stacked Hybrid-Attention network, which facilitates the intra-modal refinement as well as the inter-modal interaction.
We then devise an innovative Group Collaborative Learning strategy to optimize the decoder.
arXiv Detail & Related papers (2022-03-18T09:14:13Z)
- Joint Graph Learning and Matching for Semantic Feature Correspondence [69.71998282148762]
We propose a joint graph learning and matching network, named GLAM, to explore reliable graph structures for boosting graph matching.
The proposed method is evaluated on three popular visual matching benchmarks (Pascal VOC, Willow Object and SPair-71k).
It outperforms previous state-of-the-art graph matching methods by significant margins on all benchmarks.
arXiv Detail & Related papers (2021-09-01T08:24:02Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
- Structure-Augmented Text Representation Learning for Efficient Knowledge Graph Completion [53.31911669146451]
Human-curated knowledge graphs provide critical supportive information to various natural language processing tasks.
These graphs are usually incomplete, motivating their automatic completion.
Graph embedding approaches, e.g., TransE, learn structured knowledge by representing graph elements as dense embeddings.
Textual encoding approaches, e.g., KG-BERT, instead resort to each graph triple's text and triple-level contextualized representations.
arXiv Detail & Related papers (2020-04-30T13:50:34Z)
- Gossip and Attend: Context-Sensitive Graph Representation Learning [0.5493410630077189]
Graph representation learning (GRL) is a powerful technique for learning low-dimensional vector representation of high-dimensional and often sparse graphs.
We propose GOAT, a context-sensitive algorithm inspired by gossip communication and a mutual attention mechanism applied directly over the structure of the graph.
arXiv Detail & Related papers (2020-03-30T18:23:26Z)
- Adaptive Graph Convolutional Network with Attention Graph Clustering for Co-saliency Detection [35.23956785670788]
We present a novel adaptive graph convolutional network with attention graph clustering (GCAGC).
We develop an attention graph clustering algorithm to discriminate the common objects from all the salient foreground objects in an unsupervised fashion.
We evaluate our proposed GCAGC method on three co-saliency detection benchmark datasets.
arXiv Detail & Related papers (2020-03-13T09:35:59Z)
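As referenced in the ConGraT entry above, the batch-wise contrastive objective described there is CLIP-style: the paired text and node embeddings of the same item should score higher than every other pairing in the batch. The sketch below is a minimal, generic version of such an objective; the projection heads, dimensions, and temperature are illustrative assumptions, not ConGraT's actual architecture.

```python
# Minimal sketch of a CLIP-style batch-wise contrastive loss that aligns
# graph-node embeddings (from a GNN) with paired text embeddings (from an LM).
# Projection sizes and the temperature are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphTextContrastive(nn.Module):
    def __init__(self, text_dim, node_dim, shared_dim=128, temperature=0.07):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, shared_dim)  # maps LM outputs
        self.node_proj = nn.Linear(node_dim, shared_dim)  # maps GNN outputs
        self.temperature = temperature

    def forward(self, text_emb, node_emb):
        # text_emb: (batch, text_dim), node_emb: (batch, node_dim);
        # row i of both tensors describes the same graph node.
        t = F.normalize(self.text_proj(text_emb), dim=-1)
        g = F.normalize(self.node_proj(node_emb), dim=-1)
        logits = t @ g.t() / self.temperature                 # batch-wise similarities
        targets = torch.arange(logits.size(0), device=logits.device)
        # Symmetric InfoNCE: match text-to-node and node-to-text, as in CLIP.
        return (F.cross_entropy(logits, targets) +
                F.cross_entropy(logits.t(), targets)) / 2
```

In training, the text embeddings would come from a language model run over each node's text and the node embeddings from a GNN over the graph; minimizing this loss pulls the two encoders into a common latent space.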
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.