Identify, Align, and Integrate: Matching Knowledge Graphs to Commonsense
Reasoning Tasks
- URL: http://arxiv.org/abs/2104.10193v1
- Date: Tue, 20 Apr 2021 18:23:45 GMT
- Title: Identify, Align, and Integrate: Matching Knowledge Graphs to Commonsense
Reasoning Tasks
- Authors: Lisa Bauer, Mohit Bansal
- Abstract summary: It is critical to select a knowledge graph (KG) that is well-aligned with the given task's objective.
We show an approach to assess how well a candidate KG can correctly identify and accurately fill in gaps of reasoning for a task.
We show this KG-to-task match in 3 phases: knowledge-task identification, knowledge-task alignment, and knowledge-task integration.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Integrating external knowledge into commonsense reasoning tasks has shown
progress in resolving some, but not all, knowledge gaps in these tasks. For
knowledge integration to yield peak performance, it is critical to select a
knowledge graph (KG) that is well-aligned with the given task's objective. We
present an approach to assess how well a candidate KG can correctly identify
and accurately fill in gaps of reasoning for a task, which we call KG-to-task
match. We show this KG-to-task match in 3 phases: knowledge-task
identification, knowledge-task alignment, and knowledge-task integration. We
also analyze our transformer-based KG-to-task models via commonsense probes to
measure how much knowledge is captured in these models before and after KG
integration. Empirically, we investigate KG matches for the SocialIQA (SIQA)
(Sap et al., 2019b), Physical IQA (PIQA) (Bisk et al., 2020), and MCScript2.0
(Ostermann et al., 2019) datasets with 3 diverse KGs: ATOMIC (Sap et al.,
2019a), ConceptNet (Speer et al., 2017), and an automatically constructed
instructional KG based on WikiHow (Koupaee and Wang, 2018). With our methods we
are able to demonstrate that ATOMIC, an event-inference focused KG, is the best
match for SIQA and MCScript2.0, and that the taxonomic ConceptNet and
WikiHow-based KGs are the best matches for PIQA across all 3 analysis phases.
We verify our methods and findings with human evaluation.
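To make the three-phase framing concrete, here is a minimal hypothetical sketch of how a KG-to-task match score might be assembled. Every function body, data structure, and the unweighted combination below are illustrative assumptions, not the authors' actual implementation, which relies on transformer-based models and human evaluation.

```python
# Hypothetical scaffold for a three-phase KG-to-task match score.
# All internals are illustrative stand-ins, not the paper's method.

def identification_score(kg_triples, task_vocab):
    """Phase 1 (illustrative): fraction of task vocabulary terms
    that appear somewhere in the KG's triples."""
    kg_terms = {term for triple in kg_triples for term in triple}
    return len(task_vocab & kg_terms) / max(len(task_vocab), 1)

def alignment_score(kg_relations, task_relations):
    """Phase 2 (illustrative): overlap between the KG's relation
    types and the relation types the task's reasoning gaps need."""
    return len(kg_relations & task_relations) / max(len(task_relations), 1)

def integration_score(acc_with_kg, acc_without_kg):
    """Phase 3 (illustrative): downstream gain from injecting
    KG knowledge into the task model."""
    return acc_with_kg - acc_without_kg

def kg_task_match(kg, task):
    """Unweighted mean of the three phase scores (purely for
    illustration; the paper analyzes each phase separately)."""
    scores = (
        identification_score(kg["triples"], task["vocab"]),
        alignment_score(kg["relations"], task["relations"]),
        integration_score(task["acc_with_kg"], task["acc_without_kg"]),
    )
    return sum(scores) / len(scores)

# Toy example: an event-inference KG matched against a social task.
kg = {
    "triples": [("person x", "xIntent", "help"),
                ("person x", "xReact", "happy")],
    "relations": {"xIntent", "xReact"},
}
task = {
    "vocab": {"person x", "help", "happy", "school"},
    "relations": {"xIntent", "xReact", "xNeed"},
    "acc_with_kg": 0.66,
    "acc_without_kg": 0.60,
}
print(round(kg_task_match(kg, task), 3))
```

The point of the sketch is only that the three phases measure different things: lexical coverage, relational fit, and downstream benefit, so a KG can look strong on one phase and weak on another.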
Related papers
- Generate-on-Graph: Treat LLM as both Agent and KG in Incomplete Knowledge Graph Question Answering (arXiv, 2024-04-23)
  We propose a training-free method called Generate-on-Graph (GoG) to generate new factual triples while exploring Knowledge Graphs (KGs). GoG performs reasoning through a Thinking-Searching-Generating framework, which treats the LLM as both Agent and KG in IKGQA.
- Knowledge Graphs Querying (arXiv, 2023-05-23)
  We aim to unite the different interdisciplinary topics and concepts that have been developed for KG querying. Recent advances in KG and query embedding, multimodal KGs, and KG-QA come from the deep learning, IR, NLP, and computer vision domains.
- Structure Pretraining and Prompt Tuning for Knowledge Graph Transfer (arXiv, 2023-03-03)
  We propose a knowledge graph pretraining model, KGTransformer. We pretrain KGTransformer with three self-supervised tasks, taking sampled subgraphs as input, and evaluate it on three tasks: triple classification, zero-shot image classification, and question answering.
- ChatGPT versus Traditional Question Answering for Knowledge Graphs: Current Status and Future Directions Towards Knowledge Graph Chatbots (arXiv, 2023-02-08)
  Conversational AI and question-answering systems (QASs) for knowledge graphs (KGs) are both emerging research areas. QASs retrieve the most recent information from a KG by understanding and translating a natural language question into a formal query supported by the database engine. Our framework compares two representative conversational models, ChatGPT and Galactica, against KGQAN, the current state-of-the-art QAS.
- UniKGQA: Unified Retrieval and Reasoning for Solving Multi-hop Question Answering Over Knowledge Graph (arXiv, 2022-12-02)
  Multi-hop question answering over knowledge graphs (KGQA) aims to find the answer entities that are multiple hops away from the topic entities mentioned in a natural language question. We propose UniKGQA, a novel approach for the multi-hop KGQA task that unifies retrieval and reasoning in both model architecture and parameter learning.
- Contrastive Representation Learning for Conversational Question Answering over Knowledge Graphs (arXiv, 2022-10-09)
  This paper addresses the task of conversational question answering (ConvQA) over knowledge graphs (KGs). The majority of existing ConvQA methods rely on full supervision signals, with the strict assumption that gold logical forms of queries are available to extract answers from the KG. We propose a contrastive representation learning-based approach to rank KG paths effectively.
- QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering (arXiv, 2021-04-13)
  We propose a new model, QA-GNN, which addresses the problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs). We show its improvement over existing LM and LM+KG models, as well as its capability to perform interpretable and structured reasoning.
- KACC: A Multi-task Benchmark for Knowledge Abstraction, Concretization and Completion (arXiv, 2020-04-28)
  A comprehensive knowledge graph (KG) contains an instance-level entity graph and an ontology-level concept graph. The two-view KG provides a testbed for models to "simulate" human abilities of knowledge abstraction, concretization, and completion. We propose a unified KG benchmark by improving existing benchmarks in terms of dataset scale, task coverage, and difficulty.
- Toward Subgraph-Guided Knowledge Graph Question Generation with Graph Neural Networks (arXiv, 2020-04-13)
  Knowledge graph (KG) question generation (QG) aims to generate natural language questions from KGs and target answers. In this work, we focus on a more realistic setting where we aim to generate questions from a KG subgraph and target answers.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.