Knowledge-enhanced Iterative Instruction Generation and Reasoning for
Knowledge Base Question Answering
- URL: http://arxiv.org/abs/2209.03005v1
- Date: Wed, 7 Sep 2022 09:02:45 GMT
- Title: Knowledge-enhanced Iterative Instruction Generation and Reasoning for
Knowledge Base Question Answering
- Authors: Haowei Du, Quzhe Huang, Chen Zhang, and Dongyan Zhao
- Abstract summary: Multi-hop Knowledge Base Question Answering aims to find, in a knowledge base, the answer entity that lies several hops away from the topic entity mentioned in the question.
Existing retrieval-based approaches first generate instructions from the question and then use them to guide multi-hop reasoning on the knowledge graph.
We conduct experiments on two multi-hop KBQA benchmarks and outperform the existing approaches, establishing a new state of the art.
- Score: 43.72266327778216
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-hop Knowledge Base Question Answering (KBQA) aims to find, in
a knowledge base, the answer entity that lies several hops away from the topic
entity mentioned in the question. Existing retrieval-based approaches first
generate instructions from the question and then use them to guide multi-hop
reasoning on the knowledge graph. Because the instructions are fixed during the
whole reasoning procedure and the knowledge graph is not considered during
instruction generation, the model cannot revise its mistakes once it predicts
an intermediate entity incorrectly. To address this, we propose KBIGER
(Knowledge Base Iterative Instruction GEnerating and Reasoning), a novel and
efficient approach that generates instructions dynamically with the help of the
reasoning graph. Instead of generating all the instructions before reasoning,
we take the (k-1)-th reasoning graph into consideration when building the k-th
instruction. In this way, the model can check its predictions against the graph
and generate new instructions to revise incorrect predictions of intermediate
entities. We conduct experiments on two multi-hop KBQA benchmarks and
outperform the existing approaches, establishing a new state of the art.
Further experiments show that our method indeed detects incorrect predictions
of intermediate entities and is able to revise such errors.
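To make the iterative scheme concrete, below is a minimal runnable sketch of interleaving instruction generation with graph reasoning. This is not the authors' implementation: the GRU-based instruction cell, the mean-pooled graph summary, and all tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class KBIGERSketch(nn.Module):
    """Sketch of iterative instruction generation and reasoning.

    At step k, the instruction is conditioned on both the question and the
    (k-1)-th reasoning-graph state, so a wrong intermediate prediction can
    be revised by a later instruction.
    """

    def __init__(self, dim: int, num_hops: int):
        super().__init__()
        self.num_hops = num_hops
        # Hypothetical components; the paper's exact modules may differ.
        self.instruction_cell = nn.GRUCell(dim, dim)   # builds the k-th instruction
        self.graph_update = nn.Linear(2 * dim, dim)    # one reasoning step on the graph

    def forward(self, question: torch.Tensor, entity_states: torch.Tensor):
        # question:      (batch, dim)    pooled question representation
        # entity_states: (batch, n, dim) entity embeddings of the subgraph
        instruction = torch.zeros_like(question)
        for _ in range(self.num_hops):
            # Summarize the previous reasoning graph (mean over entities here,
            # purely for illustration) and fold it into the next instruction.
            graph_summary = entity_states.mean(dim=1)
            instruction = self.instruction_cell(question + graph_summary, instruction)
            # Propagate the instruction over the graph to update entity states.
            fused = torch.cat(
                [entity_states, instruction.unsqueeze(1).expand_as(entity_states)], dim=-1
            )
            entity_states = torch.relu(self.graph_update(fused))
        # Score each entity as a candidate answer.
        return (entity_states * instruction.unsqueeze(1)).sum(-1)

model = KBIGERSketch(dim=64, num_hops=3)
scores = model(torch.randn(2, 64), torch.randn(2, 10, 64))
print(scores.shape)  # torch.Size([2, 10])
```

The key point the sketch illustrates is ordering: the k-th instruction is computed only after the (k-1)-th reasoning step, so evidence from the graph can steer later instructions.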
Related papers
- Question-guided Knowledge Graph Re-scoring and Injection for Knowledge Graph Question Answering [27.414670144354453]
KGQA involves answering natural language questions by leveraging structured information stored in a knowledge graph.
We propose a Question-guided Knowledge Graph Re-scoring method (Q-KGR) to eliminate noisy pathways for the input question.
We also introduce Knowformer, a parameter-efficient method for injecting the re-scored knowledge graph into large language models to enhance their ability to perform factual reasoning.
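As a rough illustration of question-guided re-scoring, the following toy sketch keeps only the edges most similar to the question and zeroes out the rest; the cosine scorer and top-k cutoff are assumptions, not the paper's method.

```python
import torch

def rescore_edges(question_emb, edge_embs, keep_ratio=0.5):
    """Toy question-guided re-scoring: keep only the edges most relevant
    to the question, zeroing out presumed-noisy pathways."""
    # Cosine similarity between the question and each edge representation.
    scores = torch.cosine_similarity(question_emb.unsqueeze(0), edge_embs, dim=-1)
    k = max(1, int(keep_ratio * edge_embs.size(0)))
    topk = scores.topk(k).indices
    mask = torch.zeros_like(scores)
    mask[topk] = 1.0
    return scores * mask  # re-scored edge weights; pruned edges get weight 0

weights = rescore_edges(torch.randn(64), torch.randn(20, 64))
print(weights.nonzero().numel())  # 10 edges kept
```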
arXiv Detail & Related papers (2024-10-02T10:27:07Z)
- KnowFormer: Revisiting Transformers for Knowledge Graph Reasoning [10.445709698341682]
We propose KnowFormer to perform reasoning on knowledge graphs from the message-passing perspective.
To incorporate structural information into the self-attention mechanism, we introduce structure-aware modules to calculate query, key, and value.
Experimental results demonstrate the superior performance of KnowFormer compared to prominent baseline methods on both transductive and inductive benchmarks.
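A generic way to fold structural information into self-attention is an additive structural bias on the attention logits, as sketched below; this is a common pattern, not KnowFormer's exact query/key/value formulation.

```python
import torch
import torch.nn.functional as F

def structure_aware_attention(x, structure_bias):
    """Generic structure-aware self-attention: a structural term (e.g. derived
    from graph adjacency or relation types) is added to the attention logits."""
    d = x.size(-1)
    q, k, v = x, x, x  # sketch: identity projections for query/key/value
    logits = q @ k.transpose(-2, -1) / d ** 0.5 + structure_bias
    return F.softmax(logits, dim=-1) @ v

n, d = 5, 16
# Random adjacency with self-loops so every node can attend somewhere.
adj = ((torch.rand(n, n) > 0.5).float() + torch.eye(n)).clamp(max=1.0)
bias = adj.log()  # 0 on edges, -inf elsewhere: attend only along graph edges
out = structure_aware_attention(torch.randn(n, d), bias)
```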
arXiv Detail & Related papers (2024-09-19T16:08:10Z)
- Retrieved In-Context Principles from Previous Mistakes [55.109234526031884]
In-context learning (ICL) has been instrumental in adapting Large Language Models (LLMs) to downstream tasks using correct input-output examples.
Recent advances have attempted to improve model performance through principles derived from mistakes.
We propose Retrieved In-Context Principles (RICP), a novel teacher-student framework.
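A minimal sketch of the retrieval step in such a framework might look as follows; the principle bank and the string-similarity retriever are made up for illustration.

```python
from difflib import SequenceMatcher

# Hypothetical principle bank: insights a "teacher" model distilled from
# past mistakes, keyed by the question that caused them.
principle_bank = {
    "how many moons does mars have": "Verify counts against a source; do not guess.",
    "what year did ww2 end in europe": "Distinguish theater-specific end dates.",
}

def retrieve_principles(question, k=1):
    """Retrieve the k principles whose originating questions are most
    similar to the new question, to prepend to the prompt."""
    ranked = sorted(
        principle_bank.items(),
        key=lambda item: SequenceMatcher(None, question, item[0]).ratio(),
        reverse=True,
    )
    return [principle for _, principle in ranked[:k]]

print(retrieve_principles("how many moons does jupiter have"))
```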
arXiv Detail & Related papers (2024-07-08T07:32:26Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution about the usefulness of saliency-based explanations and their potential to be misunderstood.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Open-Set Knowledge-Based Visual Question Answering with Inference Paths [79.55742631375063]
The purpose of Knowledge-Based Visual Question Answering (KB-VQA) is to provide a correct answer to the question with the aid of external knowledge bases.
We propose a new retriever-ranker paradigm for KB-VQA, Graph pATH rankER (GATHER for brevity).
Specifically, it comprises graph construction, pruning, and path-level ranking, which not only retrieves accurate answers but also provides inference paths that explain the reasoning process.
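A toy version of the path-enumeration and ranking stages could look like this; the dictionary-based graph and length-based scorer are placeholders, not GATHER's components.

```python
def enumerate_paths(graph, start, max_len):
    """Enumerate simple paths from a topic entity, as a stand-in for the
    graph-construct/prune stage."""
    paths, stack = [], [(start, [start])]
    while stack:
        node, path = stack.pop()
        if 1 < len(path) <= max_len + 1:
            paths.append(path)
        if len(path) <= max_len:
            for nxt in graph.get(node, []):
                if nxt not in path:  # keep paths simple (no revisits)
                    stack.append((nxt, path + [nxt]))
    return paths

def rank_paths(paths, score):
    """Placeholder path-level ranker: sort candidate paths by a score."""
    return sorted(paths, key=score, reverse=True)

kg = {"q": ["a", "b"], "a": ["c"], "b": ["c", "d"]}
ranked = rank_paths(enumerate_paths(kg, "q", 2), score=len)
print(ranked[0])  # longest path first under this toy scorer
```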
arXiv Detail & Related papers (2023-10-12T09:12:50Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
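One way to act on such dataset-level insights is to aggregate per-example concept attributions and drop low-relevance concepts, as in this illustrative sketch (the threshold and data layout are assumptions).

```python
def prune_irrelevant_concepts(attributions, threshold=0.05):
    """Dataset-level sketch: average per-example concept attributions and
    flag concepts whose mean relevance falls below a threshold."""
    totals, counts = {}, {}
    for example in attributions:            # one dict of concept -> score per example
        for concept, score in example.items():
            totals[concept] = totals.get(concept, 0.0) + score
            counts[concept] = counts.get(concept, 0) + 1
    return {c for c in totals if totals[c] / counts[c] < threshold}

data = [{"fur": 0.7, "watermark": 0.01}, {"fur": 0.6, "watermark": 0.02}]
print(prune_irrelevant_concepts(data))  # {'watermark'}
```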
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Counterfactuals of Counterfactuals: a back-translation-inspired approach to analyse counterfactual editors [3.4253416336476246]
We focus on the analysis of counterfactual, contrastive explanations.
We propose a new back-translation-inspired evaluation methodology.
We show that by iteratively feeding the counterfactual to the explainer we can obtain valuable insights into the behaviour of both the predictor and the explainer models.
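The iteration itself is simple to sketch: repeatedly feed the editor its own output and inspect the trajectory. The toy word-flipping editor below stands in for a real counterfactual model.

```python
def iterate_counterfactuals(text, editor, steps=4):
    """Back-translation-style probe: repeatedly feed the editor its own
    counterfactual and record the trajectory; a short cycle suggests the
    editor's edits are consistent and invertible."""
    trajectory = [text]
    for _ in range(steps):
        text = editor(text)
        trajectory.append(text)
    return trajectory

# Toy "editor" that flips a sentiment word; real editors are learned models.
flip = {"good": "bad", "bad": "good"}
editor = lambda s: " ".join(flip.get(w, w) for w in s.split())
print(iterate_counterfactuals("the movie was good", editor))
```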
arXiv Detail & Related papers (2023-05-26T16:04:28Z)
- Remembering for the Right Reasons: Explanations Reduce Catastrophic Forgetting [100.75479161884935]
We propose a novel training paradigm called Remembering for the Right Reasons (RRR).
RRR stores visual model explanations for each example in the buffer and ensures the model has "the right reasons" for its predictions.
We demonstrate how RRR can be easily added to any memory or regularization-based approach and results in reduced forgetting.
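An RRR-style objective can be sketched as a task loss plus a penalty keeping the current input-gradient explanation close to the stored one; the loss weighting and gradient-based saliency here are assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def rrr_loss(model, x, y, stored_saliency, alpha=1.0):
    """Sketch of an RRR-style objective: task loss plus a penalty that keeps
    the current input-gradient explanation close to the one stored in the
    replay buffer when the example was first learned."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    task = F.cross_entropy(logits, y)
    # Current explanation: gradient of the true-class logit w.r.t. the input.
    grad = torch.autograd.grad(logits.gather(1, y[:, None]).sum(), x, create_graph=True)[0]
    return task + alpha * F.mse_loss(grad, stored_saliency)

model = torch.nn.Linear(8, 3)
x, y = torch.randn(4, 8), torch.randint(0, 3, (4,))
loss = rrr_loss(model, x, y, stored_saliency=torch.zeros(4, 8))
loss.backward()
```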
arXiv Detail & Related papers (2020-10-04T10:05:27Z)
- Scalable Multi-Hop Relational Reasoning for Knowledge-Aware Question Answering [35.40919477319811]
We propose a novel knowledge-aware approach that equips pre-trained language models with a multi-hop relational reasoning module.
It performs multi-hop, multi-relational reasoning over subgraphs extracted from external knowledge graphs.
It unifies path-based reasoning methods and graph neural networks to achieve better interpretability and scalability.
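A minimal sketch of such a multi-hop, multi-relational module is k rounds of typed message passing over the extracted subgraph; the per-relation linear transforms and dense adjacency tensors below are simplifications for illustration.

```python
import torch
import torch.nn as nn

class MultiHopRelational(nn.Module):
    """Sketch of a multi-hop, multi-relational reasoning module: node states
    are propagated along typed edges for a fixed number of hops."""

    def __init__(self, dim, num_relations, hops):
        super().__init__()
        self.hops = hops
        self.rel_transform = nn.ModuleList(
            [nn.Linear(dim, dim) for _ in range(num_relations)]
        )

    def forward(self, node_states, adj):
        # node_states: (n, dim); adj: (num_relations, n, n) typed adjacency
        for _ in range(self.hops):
            msg = sum(
                adj[r] @ self.rel_transform[r](node_states)
                for r in range(len(self.rel_transform))
            )
            node_states = torch.relu(msg + node_states)  # residual keeps prior state
        return node_states

module = MultiHopRelational(dim=16, num_relations=3, hops=2)
out = module(torch.randn(5, 16), torch.randint(0, 2, (3, 5, 5)).float())
```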
arXiv Detail & Related papers (2020-05-01T23:10:26Z)