Connecting the Dots: A Knowledgeable Path Generator for Commonsense Question Answering
- URL: http://arxiv.org/abs/2005.00691v2
- Date: Sat, 19 Sep 2020 16:38:45 GMT
- Title: Connecting the Dots: A Knowledgeable Path Generator for Commonsense Question Answering
- Authors: Peifeng Wang, Nanyun Peng, Filip Ilievski, Pedro Szekely, Xiang Ren
- Abstract summary: This paper augments a general commonsense QA framework with a knowledgeable path generator.
By extrapolating over existing paths in a KG with a state-of-the-art language model, our generator learns to connect a pair of entities in text with a dynamic, and potentially novel, multi-hop relational path.
- Score: 50.72473345911147
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Commonsense question answering (QA) requires background knowledge which is
not explicitly stated in a given context. Prior works use commonsense knowledge
graphs (KGs) to obtain this knowledge for reasoning. However, relying entirely
on these KGs may not suffice, considering their limited coverage and the
contextual dependence of their knowledge. In this paper, we augment a general
commonsense QA framework with a knowledgeable path generator. By extrapolating
over existing paths in a KG with a state-of-the-art language model, our
generator learns to connect a pair of entities in text with a dynamic, and
potentially novel, multi-hop relational path. Such paths can provide structured
evidence for solving commonsense questions without fine-tuning the path
generator. Experiments on two datasets show the superiority of our method over
previous works which fully rely on knowledge from KGs (with up to 6%
improvement in accuracy), across various amounts of training data. Further
evaluation suggests that the generated paths are typically interpretable,
novel, and relevant to the task.
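The generator idea lends itself to a compact sketch: a pretrained autoregressive LM (such as GPT-2) is fine-tuned on symbolic KG paths serialized as text, then prompted with a question-answer entity pair to decode a connecting multi-hop path. The snippet below, using Hugging Face transformers, illustrates only the generation step; the prompt format, relation tokens, and decoding settings are assumptions of this sketch, not the paper's exact serialization, and the fine-tuning step is omitted.

```python
# Minimal sketch of the path-generator idea (illustrative, not the paper's
# exact setup): a pretrained LM, fine-tuned on KG paths serialized as text,
# decodes a relational path between two endpoint entities.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical serialization: the two endpoint entities, then the path.
prompt = "dog park :"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=True,   # sampling yields diverse, potentially novel paths
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# after fine-tuning, e.g. "dog park : dog _isa_ animal _atlocation_ park"
```

Because the path is decoded from the LM rather than retrieved by walking the KG, it can contain hops the KG lacks, which is the coverage gain the abstract claims.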
Related papers
- A Prompt-Based Knowledge Graph Foundation Model for Universal In-Context Reasoning [17.676185326247946]
We propose KG-ICL, a prompt-based KG foundation model that uses in-context learning to achieve universal reasoning ability.
To encode prompt graphs so that they generalize to unseen entities and relations in queries, we first propose a unified tokenizer.
Then, we propose two message passing neural networks to perform prompt encoding and KG reasoning, respectively.
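As a rough picture of the message-passing component (a stand-in, not KG-ICL's architecture; the shapes, aggregation, and update rule below are assumptions), one propagation round over a toy prompt graph could look like:

```python
# One toy message-passing round over a small prompt graph.
import torch

num_nodes, dim = 4, 8
x = torch.randn(num_nodes, dim)                 # node states (prompt tokens)
edges = torch.tensor([[0, 1], [1, 2], [2, 3]])  # (src, dst) pairs
w = torch.nn.Linear(dim, dim)

msgs = torch.zeros_like(x)
msgs.index_add_(0, edges[:, 1], w(x[edges[:, 0]]))  # sum in-neighbor messages
x = torch.relu(x + msgs)  # residual update of the unified-token encodings
```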
arXiv Detail & Related papers (2024-10-16T06:47:18Z)
- Knowledge Graph-Enhanced Large Language Models via Path Selection [58.228392005755026]
Large Language Models (LLMs) have shown unprecedented performance in various real-world applications.
However, LLMs are known to generate factually inaccurate outputs, a problem known as hallucination.
We propose KELP, a principled three-stage framework to handle this problem.
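One stage that is easy to sketch is path selection: candidate KG paths are scored against the query and the best one is kept as LLM context. The encoder choice and path serialization below are assumptions of this sketch, not KELP's actual components.

```python
# Toy path selection: rank serialized KG paths by similarity to the query.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

query = "Which country is the Eiffel Tower in?"
candidate_paths = [
    "Eiffel Tower -- located_in -- Paris -- capital_of -- France",
    "Eiffel Tower -- height -- 330 metres",
]

q_emb = encoder.encode(query, convert_to_tensor=True)
p_emb = encoder.encode(candidate_paths, convert_to_tensor=True)
scores = util.cos_sim(q_emb, p_emb)[0]        # query-path similarity
best = candidate_paths[int(scores.argmax())]  # path injected as LLM context
```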
arXiv Detail & Related papers (2024-06-19T21:45:20Z)
- Uncertainty Management in the Construction of Knowledge Graphs: a Survey [3.5639148953570845]
Knowledge Graphs (KGs) are a major asset for companies thanks to their great flexibility in data representation.
To build a KG, it is common practice to rely on automatic methods that extract knowledge from various heterogeneous sources.
In a noisy and uncertain world, knowledge may not be reliable and conflicts between data sources may occur.
arXiv Detail & Related papers (2024-05-27T08:22:52Z)
- Generate-on-Graph: Treat LLM as both Agent and KG in Incomplete Knowledge Graph Question Answering [87.67177556994525]
We propose a training-free method called Generate-on-Graph (GoG) to generate new factual triples while exploring Knowledge Graphs (KGs).
GoG performs reasoning through a Thinking-Searching-Generating framework, which treats the LLM as both an agent and a KG in incomplete KGQA (IKGQA).
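The Thinking-Searching-Generating control flow can be sketched as a loop; `llm` and `kg_search` below are hypothetical stand-ins for an LLM call and a KG lookup, and only the routing logic reflects the described framework.

```python
# Schematic Thinking-Searching-Generating loop (illustrative stand-ins).
def answer(question, llm, kg_search, max_steps=5):
    context = []
    for _ in range(max_steps):
        # Thinking: decompose the question into the next sub-goal.
        thought = llm(f"Question: {question}\nKnown: {context}\nNext sub-goal?")
        # Searching: probe the (possibly incomplete) KG.
        triples = kg_search(thought)
        if not triples:
            # Generating: let the LLM supply the missing triples.
            triples = llm(f"Generate plausible triples for: {thought}")
        context.extend(triples)
        if llm(f"Can '{question}' be answered from {context}? yes/no") == "yes":
            break
    return llm(f"Answer '{question}' using: {context}")
```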
arXiv Detail & Related papers (2024-04-23T04:47:22Z)
- DisentQA: Disentangling Parametric and Contextual Knowledge with Counterfactual Question Answering [34.70206857546496]
Question answering models commonly have access to two sources of "knowledge" at inference time: parametric knowledge stored in the model's weights and contextual (non-parametric) knowledge from the given input.
It is often unclear whether an answer stems from the given non-parametric knowledge or from the model's parameters.
We propose a new paradigm in which QA models are trained to disentangle the two sources of knowledge.
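A minimal way to picture the disentanglement is via counterfactual training pairs: perturbing the context makes the two knowledge sources disagree, forcing the model to attribute each answer to its source. The record layout below is an illustrative assumption, not DisentQA's actual data format.

```python
# Illustrative disentangled training targets (hypothetical field names).
factual = {
    "question": "What is the capital of France?",
    "context": "Paris is the capital of France.",
    "target": "contextual: Paris | parametric: Paris",
}

# Counterfactual variant: the context is perturbed, so the two answers
# split, which makes the knowledge source of each answer observable.
counterfactual = {
    "question": "What is the capital of France?",
    "context": "Lyon is the capital of France.",   # injected counterfactual
    "target": "contextual: Lyon | parametric: Paris",
}
```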
arXiv Detail & Related papers (2022-11-10T15:34:44Z)
- Structured Knowledge Grounding for Question Answering [0.23068481501673416]
We propose to leverage both language and knowledge for knowledge-based question answering, with flexibility, breadth of coverage, and structured reasoning.
Specifically, we devise a knowledge construction method that retrieves the relevant context with a dynamic number of hops.
We also devise a deep fusion mechanism to bridge the information-exchange bottleneck between the language and the knowledge.
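The dynamic-hop idea can be sketched as iterative neighborhood expansion with an early stop; the stopping criterion and scoring below are placeholders of my own, not the paper's method.

```python
# Hypothetical dynamic-hop retrieval: expand the KG neighborhood around the
# question entities one hop at a time, stopping once retrieval stops helping.
def retrieve_dynamic_hops(seeds, kg, score, max_hops=3, min_gain=0.05):
    # kg maps an entity to its outgoing (head, relation, tail) triples;
    # score rates how well the collected context covers the question.
    frontier, context, best = set(seeds), [], 0.0
    for _ in range(max_hops):
        triples = [t for e in frontier for t in kg.get(e, [])]
        context.extend(triples)
        gain = score(context) - best
        if gain < min_gain:          # extra hops no longer help: stop early
            break
        best += gain
        frontier = {t[2] for t in triples}  # tail entities seed the next hop
    return context
```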
arXiv Detail & Related papers (2022-09-17T08:48:50Z)
- Exploiting Hybrid Semantics of Relation Paths for Multi-hop Question Answering Over Knowledge Graphs [31.088325888508137]
This paper proposes improving multi-hop KGQA by exploiting relation paths' hybrid semantics.
We integrate explicit textual information and implicit KG structural features of relation paths based on a novel rotate-and-scale entity link prediction framework.
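Reading "rotate-and-scale" by analogy to RotatE, the scoring function might rotate head embeddings in complex space and modulate their magnitude per relation. The snippet below is a reconstruction from the name alone, not the paper's formulation.

```python
# Guessed rotate-and-scale link-prediction score, modeled on RotatE.
import torch

def rotate_and_scale_score(h, r_phase, r_scale, t):
    # h, t: complex entity embeddings; r_phase: rotation angles; r_scale > 0.
    rotation = torch.polar(torch.ones_like(r_phase), r_phase)  # e^{i*phase}
    pred = h * rotation * r_scale          # rotate, then scale, the head
    return -torch.linalg.vector_norm(pred - t, dim=-1)  # higher = more plausible

dim = 16
h = torch.randn(dim, dtype=torch.cfloat)
t = torch.randn(dim, dtype=torch.cfloat)
score = rotate_and_scale_score(h, torch.randn(dim), torch.rand(dim) + 0.5, t)
```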
arXiv Detail & Related papers (2022-09-02T08:07:37Z)
- BertNet: Harvesting Knowledge Graphs with Arbitrary Relations from Pretrained Language Models [65.51390418485207]
We propose a new approach for harvesting massive KGs of arbitrary relations from pretrained LMs.
With minimal input of a relation definition, the approach efficiently searches the vast entity-pair space to extract diverse, accurate knowledge.
We deploy the approach to harvest KGs of over 400 new relations from different LMs.
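The harvesting step can be approximated by prompt scoring: a relation is defined by a template, and a masked LM rates how well a candidate entity pair fills it. The sketch below shows only this scoring (the efficient search over the entity-pair space is omitted); the template and single-mask approximation are assumptions of the sketch.

```python
# Hypothetical prompt-based scoring of a candidate entity pair.
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

def pair_score(template, head, tail):
    # Approximate P(tail | template with head) with a single mask slot;
    # assumes the tail is one wordpiece, which real systems must relax.
    text = template.format(head=head, tail=tokenizer.mask_token)
    inputs = tokenizer(text, return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    logits = model(**inputs).logits[0, mask_pos]
    tail_id = tokenizer.convert_tokens_to_ids(tail)
    return torch.log_softmax(logits, dim=-1)[tail_id].item()

print(pair_score("{head} is the capital of {tail}.", "paris", "france"))
```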
arXiv Detail & Related papers (2022-06-28T19:46:29Z)
- KRISP: Integrating Implicit and Symbolic Knowledge for Open-Domain Knowledge-Based VQA [107.7091094498848]
One of the most challenging question types in VQA arises when answering the question requires outside knowledge not present in the image.
In this work we study open-domain knowledge: the setting in which the knowledge required to answer a question is not given or annotated at either training or test time.
We tap into two types of knowledge representation and reasoning: first, implicit knowledge, which can be learned effectively from unsupervised language pre-training and supervised training data with transformer-based models; second, symbolic knowledge encoded explicitly in knowledge bases.
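A toy late-fusion of the two routes illustrates the integration; the fusion rule here (elementwise max over answer candidates) is an illustrative assumption rather than KRISP's exact architecture.

```python
# Toy fusion of implicit (transformer) and symbolic (KG) answer scores.
import torch

answers = ["umbrella", "raincoat", "sunscreen"]
implicit_scores = torch.tensor([2.1, 1.4, -0.3])  # from the multimodal transformer
symbolic_scores = torch.tensor([1.8, 2.5, 0.1])   # from reasoning over a KB

fused = torch.maximum(implicit_scores, symbolic_scores)
print(answers[int(fused.argmax())])  # candidate supported by either route
```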
arXiv Detail & Related papers (2020-12-20T20:13:02Z)
- Knowledge-Routed Visual Question Reasoning: Challenges for Deep Representation Embedding [140.5911760063681]
We propose a novel dataset named Knowledge-Routed Visual Question Reasoning for VQA model evaluation.
We generate question-answer pairs based on both the Visual Genome scene graph and an external knowledge base, using controlled programs.
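The controlled generation can be pictured as a small program that chains a scene-graph fact with a KB fact; the template and data below are made up for the example.

```python
# Toy controlled QA generation over a scene graph plus an external KB.
scene_graph = [("man", "holding", "umbrella")]                 # visual fact
knowledge_base = {"umbrella": ("used_for", "blocking rain")}   # KB fact

def generate_qa(scene_graph, kb):
    subj, rel, obj = scene_graph[0]
    _, kb_answer = kb[obj]
    # Controlled program: the question routes through the visual fact
    # (which object?) and the KB fact (what is it used for?).
    question = f"What is the object the {subj} is {rel} used for?"
    return question, kb_answer

print(generate_qa(scene_graph, knowledge_base))
# ('What is the object the man is holding used for?', 'blocking rain')
```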
arXiv Detail & Related papers (2020-12-14T00:33:44Z)