A Hierarchical N-Gram Framework for Zero-Shot Link Prediction
- URL: http://arxiv.org/abs/2204.10293v1
- Date: Sat, 16 Apr 2022 14:52:05 GMT
- Title: A Hierarchical N-Gram Framework for Zero-Shot Link Prediction
- Authors: Mingchen Li, Junfan Chen, Samuel Mensah, Nikolaos Aletras, Xiulong Yang, and Yang Ye
- Abstract summary: We propose a Hierarchical N-Gram framework for Zero-Shot Link Prediction (HNZSLP).
Our approach works by first constructing a hierarchical n-gram graph on the surface name to model the organizational structure of n-grams that leads to the surface name.
A GramTransformer, based on the Transformer, is then presented to model the hierarchical n-gram graph and construct the relation embedding for ZSLP.
- Score: 28.776131952767333
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to the incompleteness of knowledge graphs (KGs), zero-shot link
prediction (ZSLP) which aims to predict unobserved relations in KGs has
attracted recent interest from researchers. A common solution is to use textual
features of relations (e.g., surface name or textual descriptions) as auxiliary
information to bridge the gap between seen and unseen relations. Current
approaches learn an embedding for each word token in the text. These methods
lack robustness as they suffer from the out-of-vocabulary (OOV) problem.
Meanwhile, models built on character n-grams have the capability of generating
expressive representations for OOV words. Thus, in this paper, we propose a
Hierarchical N-Gram framework for Zero-Shot Link Prediction (HNZSLP), which
considers the dependencies among character n-grams of the relation surface name
for ZSLP. Our approach works by first constructing a hierarchical n-gram graph
on the surface name to model the organizational structure of n-grams that leads
to the surface name. A GramTransformer, based on the Transformer, is then
presented to model the hierarchical n-gram graph and construct the relation
embedding for ZSLP. Experimental results show that the proposed HNZSLP
achieves state-of-the-art performance on two ZSLP datasets.
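The paper itself does not include code; the following is a minimal Python sketch of how a hierarchical character n-gram graph over a relation surface name might be constructed. The function name, the prefix/suffix containment edges, and the maximum n-gram size are illustrative assumptions, not the authors' exact construction.

```python
def hierarchical_ngram_graph(surface_name: str, max_n: int = 3):
    """Minimal sketch of a hierarchical character n-gram graph.

    Nodes are character n-grams of the relation surface name, organized
    by size n. Each n-gram (n > 1) is linked to the two (n-1)-grams it
    is composed of, modelling how smaller grams build up toward the full
    surface name. Illustration only; the HNZSLP construction may differ.
    """
    nodes, edges = set(), set()
    for n in range(1, max_n + 1):
        for i in range(len(surface_name) - n + 1):
            gram = (n, surface_name[i:i + n])
            nodes.add(gram)
            if n > 1:
                # prefix and suffix (n-1)-grams that compose this gram
                edges.add(((n - 1, gram[1][:-1]), gram))
                edges.add(((n - 1, gram[1][1:]), gram))
    return nodes, edges

# Example: n-gram graph for the relation surface name "birth_place"
nodes, edges = hierarchical_ngram_graph("birth_place")
print(len(nodes), "nodes,", len(edges), "edges")
```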
Related papers
- Graph-Dictionary Signal Model for Sparse Representations of Multivariate Data [49.77103348208835]
We define a novel Graph-Dictionary signal model, where a finite set of graphs characterizes relationships in the data distribution through a weighted sum of their Laplacians.
We propose a framework to infer the graph dictionary representation from observed data, along with a bilinear generalization of the primal-dual splitting algorithm to solve the learning problem.
We exploit graph-dictionary representations in a motor imagery decoding task on brain activity data, where we classify imagined motion better than standard methods.
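As an illustration of the signal model just described, a Laplacian formed as a weighted sum over a dictionary of graphs can be sketched as follows; this is a minimal NumPy sketch, and the combinatorial Laplacian plus nonnegative weights are our assumptions.

```python
import numpy as np

def combinatorial_laplacian(adj: np.ndarray) -> np.ndarray:
    """L = D - A for a weighted adjacency matrix A."""
    return np.diag(adj.sum(axis=1)) - adj

def dictionary_laplacian(adjacencies, weights) -> np.ndarray:
    """Weighted sum of the Laplacians of a finite graph dictionary,
    the signal-model object the summary describes. One coefficient
    per dictionary graph (assumed nonnegative here)."""
    return sum(w * combinatorial_laplacian(A)
               for w, A in zip(weights, adjacencies))

# Two toy 3-node graphs combined with weights 0.7 and 0.3
A1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
A2 = np.array([[0, 0, 1], [0, 0, 1], [1, 1, 0]], dtype=float)
L = dictionary_laplacian([A1, A2], [0.7, 0.3])
```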
arXiv Detail & Related papers (2024-11-08T17:40:43Z)
- A Pure Transformer Pretraining Framework on Text-attributed Graphs [50.833130854272774]
We introduce a feature-centric pretraining perspective by treating graph structure as a prior.
Our framework, Graph Sequence Pretraining with Transformer (GSPT), samples node contexts through random walks.
GSPT can be easily adapted to both node classification and link prediction, demonstrating promising empirical success on various datasets.
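A minimal sketch of the random-walk context sampling that GSPT's summary describes; walk length, walk count, and the adjacency-list input format are our assumptions.

```python
import random

def random_walk_contexts(neighbors, start, walk_len=8, n_walks=4, seed=0):
    """Sample a node's context via uniform random walks.
    `neighbors` maps node id -> list of neighbor ids (assumed format)."""
    rng = random.Random(seed)
    walks = []
    for _ in range(n_walks):
        walk, node = [start], start
        for _ in range(walk_len - 1):
            if not neighbors.get(node):  # dead end: stop this walk
                break
            node = rng.choice(neighbors[node])
            walk.append(node)
        walks.append(walk)
    return walks

# Toy triangle graph: 0-1, 1-2, 2-0
contexts = random_walk_contexts({0: [1, 2], 1: [0, 2], 2: [0, 1]}, start=0)
```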
arXiv Detail & Related papers (2024-06-19T22:30:08Z)
- Text2NKG: Fine-Grained N-ary Relation Extraction for N-ary relational Knowledge Graph Construction [20.281505340983035]
Text2NKG is a novel fine-grained n-ary relation extraction framework for n-ary relational knowledge graph construction.
We introduce a span-tuple classification approach with hetero-ordered merging and output merging to accomplish fine-grained n-ary relation extraction across different arities.
arXiv Detail & Related papers (2023-10-08T14:47:13Z)
- Improving Graph-Based Text Representations with Character and Word Level N-grams [30.699644290131044]
We propose a new word-character text graph that combines word and character n-gram nodes with document nodes.
We also propose two new graph-based neural models, WCTextGCN and WCTextGAT, for modeling our proposed text graph.
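A minimal sketch of such a word-character text graph; the node/edge scheme below is our reading of the summary, and the paper's edge weighting (e.g., TF-IDF or PMI, common in TextGCN-style models) is omitted.

```python
def build_wc_text_graph(docs, char_n=3):
    """Build document, word, and character n-gram nodes with
    document-word and word-n-gram edges (illustrative scheme)."""
    nodes, edges = set(), set()
    for d, doc in enumerate(docs):
        doc_node = ("doc", d)
        nodes.add(doc_node)
        for word in doc.lower().split():
            word_node = ("word", word)
            nodes.add(word_node)
            edges.add((doc_node, word_node))
            # character n-grams of the word (whole word if shorter)
            for i in range(max(len(word) - char_n + 1, 1)):
                gram_node = ("char", word[i:i + char_n])
                nodes.add(gram_node)
                edges.add((word_node, gram_node))
    return nodes, edges

nodes, edges = build_wc_text_graph(["graph based text models",
                                    "character n grams"])
```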
arXiv Detail & Related papers (2022-10-12T08:07:54Z)
- NC-DRE: Leveraging Non-entity Clue Information for Document-level Relation Extraction [3.276435438007766]
Document-level relation extraction (RE) requires reasoning on multiple entities in different sentences to identify complex inter-sentence relations.
Previous studies usually employ graph neural networks (GNN) to perform inference upon heterogeneous document-graphs.
We propose NC-DRE, a novel graph-based model that introduces a decoder-to-encoder attention mechanism to leverage Non-entity Clue information for Document-level Relation Extraction.
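The decoder-to-encoder attention the summary mentions is, generically, cross-attention; below is a minimal NumPy sketch of that generic mechanism, not NC-DRE's specific architecture.

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: decoder queries attend
    over encoder keys/values (generic mechanism, toy version)."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)          # (Tq, Tk)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ values                         # (Tq, d_v)

# Toy shapes: 2 decoder positions attending over 5 encoder states
out = cross_attention(np.random.randn(2, 8),
                      np.random.randn(5, 8),
                      np.random.randn(5, 8))
```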
arXiv Detail & Related papers (2022-04-01T07:30:26Z)
- Text-based NP Enrichment [51.403543011094975]
We establish the task of text-based NP enrichment (TNE), that is, enriching each NP with all the preposition-mediated relations that hold between it and the other NPs in the text.
Humans recover such relations seamlessly, while current state-of-the-art models struggle with them due to the implicit nature of the problem.
We build the first large-scale dataset for the problem, provide the formal framing and scope of annotation, analyze the data, and report the result of fine-tuned neural language models on the task.
arXiv Detail & Related papers (2021-09-24T17:23:25Z)
- Imposing Relation Structure in Language-Model Embeddings Using Contrastive Learning [30.00047118880045]
We propose a novel contrastive learning framework that trains sentence embeddings to encode the relations in a graph structure.
The resulting relation-aware sentence embeddings achieve state-of-the-art results on the relation extraction task.
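The exact loss is not specified in this summary; as a hedged illustration, an InfoNCE-style contrastive objective of the general family described might look like the following (how graph relations pick positives is our assumption).

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style loss: pull the anchor toward its positive and
    away from negatives, with temperature tau (illustrative only)."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.exp(cos(anchor, positive) / tau)
    neg = sum(np.exp(cos(anchor, n) / tau) for n in negatives)
    return -np.log(pos / (pos + neg))

loss = info_nce(np.random.randn(16), np.random.randn(16),
                [np.random.randn(16) for _ in range(4)])
```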
arXiv Detail & Related papers (2021-09-02T10:58:27Z)
- Explicit Pairwise Factorized Graph Neural Network for Semi-Supervised Node Classification [59.06717774425588]
We propose the Explicit Pairwise Factorized Graph Neural Network (EPFGNN), which models the whole graph as a partially observed Markov Random Field.
It contains explicit pairwise factors to model output-output relations and uses a GNN backbone to model input-output relations.
We conduct experiments on various datasets, showing that our model effectively improves performance for semi-supervised node classification on graphs.
arXiv Detail & Related papers (2021-07-27T19:47:53Z)
- Graph Neural Networks for Natural Language Processing: A Survey [64.36633422999905]
We present a comprehensive overview of Graph Neural Networks (GNNs) for Natural Language Processing.
We propose a new taxonomy of GNNs for NLP, which organizes existing research along three axes: graph construction, graph representation learning, and graph-based encoder-decoder models.
arXiv Detail & Related papers (2021-06-10T23:59:26Z)
- GraphPB: Graphical Representations of Prosody Boundary in Speech Synthesis [23.836992815219904]
This paper introduces a graphical representation approach of prosody boundary (GraphPB) in the task of Chinese speech synthesis.
The nodes of the graph are formed by prosodic words, and the edges by the other prosodic boundaries.
Two techniques are proposed to embed sequential information into the graph-to-sequence text-to-speech model.
arXiv Detail & Related papers (2020-12-03T03:34:05Z)
- Structural Landmarking and Interaction Modelling: on Resolution Dilemmas in Graph Classification [50.83222170524406]
We study the intrinsic difficulty in graph classification under the unified concept of "resolution dilemmas".
We propose SLIM, an inductive neural network model for Structural Landmarking and Interaction Modelling.
arXiv Detail & Related papers (2020-06-29T01:01:42Z)