Leveraging Semantic Parsing for Relation Linking over Knowledge Bases
- URL: http://arxiv.org/abs/2009.07726v1
- Date: Wed, 16 Sep 2020 14:56:11 GMT
- Title: Leveraging Semantic Parsing for Relation Linking over Knowledge Bases
- Authors: Nandana Mihindukulasooriya, Gaetano Rossiello, Pavan Kapanipathi,
Ibrahim Abdelaziz, Srinivas Ravishankar, Mo Yu, Alfio Gliozzo, Salim Roukos
and Alexander Gray
- Abstract summary: We present SLING, a relation linking framework which leverages semantic parsing using AMR and distant supervision.
SLING integrates multiple relation linking approaches that capture complementary signals such as linguistic cues, rich semantic representation, and information from the knowledgebase.
Experiments on relation linking using three KBQA datasets (QALD-7, QALD-9, and LC-QuAD 1.0) demonstrate that the proposed approach achieves state-of-the-art performance on all benchmarks.
- Score: 80.99588366232075
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledgebase question answering systems are heavily dependent on relation
extraction and linking modules. However, the task of extracting and linking
relations from text to knowledgebases faces two primary challenges: the
ambiguity of natural language and the lack of training data. To overcome these
challenges, we present SLING, a relation linking framework which leverages
semantic parsing using Abstract Meaning Representation (AMR) and distant
supervision. SLING integrates multiple relation linking approaches that capture
complementary signals such as linguistic cues, rich semantic representation,
and information from the knowledgebase. The experiments on relation linking
using three KBQA datasets (QALD-7, QALD-9, and LC-QuAD 1.0) demonstrate that the
proposed approach achieves state-of-the-art performance on all benchmarks.
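The abstract describes SLING as an ensemble that combines candidate relations produced by complementary linking modules (linguistic cues, AMR-based semantic representation, and information from the knowledge base), but this listing does not spell out how the signals are merged. The following is only a minimal Python sketch of one plausible aggregation, reciprocal rank fusion over the modules' ranked candidate lists; the module names, relation URIs, and the `fuse_rankings` helper are illustrative assumptions, not the authors' implementation.

```python
from collections import defaultdict
from typing import Dict, List


def fuse_rankings(module_rankings: Dict[str, List[str]], k: int = 60) -> List[str]:
    """Merge ranked relation candidates from several linking modules.

    Uses reciprocal rank fusion: each module contributes 1 / (k + rank)
    for every candidate it proposes. This is an illustrative aggregation
    strategy only; SLING's actual combination of signals may differ.
    """
    scores = defaultdict(float)
    for _module, ranking in module_rankings.items():
        for rank, relation in enumerate(ranking, start=1):
            scores[relation] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)


if __name__ == "__main__":
    # Hypothetical candidate lists for "Who is the author of Dracula?"
    # using DBpedia-style relation URIs.
    candidates = {
        # surface-level linguistic cues (e.g. keyword match on "author")
        "linguistic": ["dbo:author", "dbo:writer", "dbo:creator"],
        # predicate read off an AMR parse of the question
        "amr": ["dbo:author", "dbo:illustrator"],
        # relations observed for similar entity pairs via distant supervision
        "distant_supervision": ["dbo:writer", "dbo:author"],
    }
    print(fuse_rankings(candidates))  # 'dbo:author' ranks first
```

Because each module is noisy on its own, a candidate proposed by several modules (here dbo:author) outranks one proposed by a single module, which is the intuition behind combining complementary signals.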
Related papers
- Prompt-based Logical Semantics Enhancement for Implicit Discourse
Relation Recognition [4.7938839332508945]
We propose a Prompt-based Logical Semantics Enhancement (PLSE) method for Implicit Discourse Relation Recognition (IDRR).
Our method seamlessly injects knowledge relevant to discourse relation into pre-trained language models through prompt-based connective prediction.
Experimental results on PDTB 2.0 and CoNLL16 datasets demonstrate that our method achieves outstanding and consistent performance against the current state-of-the-art models.
arXiv Detail & Related papers (2023-11-01T08:38:08Z)
- Dual Semantic Knowledge Composed Multimodal Dialog Systems [114.52730430047589]
We propose a novel multimodal task-oriented dialog system named MDS-S2.
It acquires the context-related attribute and relation knowledge from the knowledge base.
We also devise a set of latent query variables to distill the semantic information from the composed response representation.
arXiv Detail & Related papers (2023-05-17T06:33:26Z)
- Document-level Relation Extraction with Relation Correlations [15.997345900917058]
Document-level relation extraction faces two overlooked challenges: the long-tail problem and the multi-label problem.
We analyze the co-occurrence correlation of relations, and introduce it into DocRE task for the first time.
arXiv Detail & Related papers (2022-12-20T11:17:52Z)
- Relation-Aware Language-Graph Transformer for Question Answering [21.244992938222246]
We propose Question Answering Transformer (QAT), which is designed to jointly reason over language and graphs with respect to entity relations.
Specifically, QAT constructs Meta-Path tokens, which learn relation-centric embeddings based on diverse structural and semantic relations.
We validate the effectiveness of QAT on commonsense question answering datasets like CommonsenseQA and OpenBookQA, and on a medical question answering dataset, MedQA-USMLE.
arXiv Detail & Related papers (2022-12-02T05:10:10Z)
- REKnow: Enhanced Knowledge for Joint Entity and Relation Extraction [30.829001748700637]
Relation extraction is a challenging task that aims to extract all hidden relational facts from the text.
There is no unified framework that works well under various relation extraction settings.
We propose a knowledge-enhanced generative model to mitigate these two issues.
Our model achieves superior performance on multiple benchmarks and settings, including WebNLG, NYT10, and TACRED.
arXiv Detail & Related papers (2022-06-10T13:59:38Z)
- elBERto: Self-supervised Commonsense Learning for Question Answering [131.51059870970616]
We propose a Self-supervised Bidirectional Representation Learning of Commonsense framework, which is compatible with off-the-shelf QA model architectures.
The framework comprises five self-supervised tasks to force the model to fully exploit the additional training signals from contexts containing rich commonsense.
elBERto achieves substantial improvements on out-of-paragraph and no-effect questions where simple lexical similarity comparison does not help.
arXiv Detail & Related papers (2022-03-17T16:23:45Z)
- Prompt-based Zero-shot Relation Extraction with Semantic Knowledge
Augmentation [3.154631846975021]
In relation triplet extraction, recognizing unseen relations for which there are no training instances is a challenging task.
We propose a prompt-based model with semantic knowledge augmentation (ZS-SKA) to recognize unseen relations under the zero-shot setting.
arXiv Detail & Related papers (2021-12-08T19:34:27Z)
- SAIS: Supervising and Augmenting Intermediate Steps for Document-Level
Relation Extraction [51.27558374091491]
We propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for relation extraction.
Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately.
arXiv Detail & Related papers (2021-09-24T17:37:35Z)
- Question Answering over Knowledge Bases by Leveraging Semantic Parsing
and Neuro-Symbolic Reasoning [73.00049753292316]
We propose a semantic parsing and reasoning-based Neuro-Symbolic Question Answering (NSQA) system.
NSQA achieves state-of-the-art performance on QALD-9 and LC-QuAD 1.0.
arXiv Detail & Related papers (2020-12-03T05:17:55Z)
- A Dependency Syntactic Knowledge Augmented Interactive Architecture for
End-to-End Aspect-based Sentiment Analysis [73.74885246830611]
We propose a novel dependency syntactic knowledge augmented interactive architecture with multi-task learning for end-to-end ABSA.
This model is capable of fully exploiting the syntactic knowledge (dependency relations and types) by leveraging a well-designed Dependency Relation Embedded Graph Convolutional Network (DreGcn).
Extensive experimental results on three benchmark datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-04T14:59:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.