Distinguish Before Answer: Generating Contrastive Explanation as
Knowledge for Commonsense Question Answering
- URL: http://arxiv.org/abs/2305.08135v2
- Date: Sun, 21 May 2023 15:07:23 GMT
- Authors: Qianglong Chen, Guohai Xu, Ming Yan, Ji Zhang, Fei Huang, Luo Si and
Yin Zhang
- Abstract summary: We propose CPACE, a Concept-centric Prompt-bAsed Contrastive Explanation Generation model.
CPACE converts retrieved symbolic knowledge into a contrastive explanation that better distinguishes the differences among the given candidates.
We conduct a series of experiments on three widely used question-answering datasets: CSQA, QASC, and OBQA.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Existing knowledge-enhanced methods have achieved remarkable results
on certain QA tasks by obtaining diverse knowledge from different knowledge
bases. However, limited by the properties of the retrieved knowledge, they
still struggle to benefit from knowledge relevance and distinguishment
simultaneously. To address this challenge, we propose CPACE, a Concept-centric
Prompt-bAsed Contrastive Explanation Generation model, which converts retrieved
symbolic knowledge into a contrastive explanation that better distinguishes the
differences among the given candidates. First, following previous work, we
retrieve different types of symbolic knowledge with a concept-centric knowledge
extraction module. Then, guided by the acquired symbolic knowledge and
explanation prompts, we generate corresponding contrastive explanations to
better model knowledge distinguishment and interpretability. Finally, we treat
the generated contrastive explanation as external knowledge for downstream task
enhancement. We conduct a series of experiments on three widely used
question-answering datasets: CSQA, QASC, and OBQA. Experimental results
demonstrate that, with the help of the generated contrastive explanations, our
CPACE model achieves a new state of the art on CSQA (89.8% on the test set,
0.9% higher than human performance) and impressive improvements on QASC and
OBQA (4.2% and 3.5%, respectively).
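
To make the three-stage pipeline concrete, here is a minimal sketch in Python.
It is an illustration under stated assumptions, not the authors'
implementation: the toy knowledge base, the template-based "generator", and the
overlap-based scorer below stand in for the paper's concept-centric knowledge
extraction module, trained contrastive explanation generator, and downstream QA
model, respectively.

```python
# Minimal sketch of the CPACE three-stage pipeline, based only on the
# paper's high-level description. All names and components below are
# illustrative stand-ins, not the authors' implementation.

from typing import Dict, List

# Stage 1: concept-centric symbolic knowledge extraction.
# Stubbed with a toy lookup table; the paper retrieves symbolic
# knowledge for the concepts mentioned in the question and candidates.
TOY_KB: Dict[str, str] = {
    "bank": "a bank is a financial institution that stores money",
    "river": "a river is a natural stream of flowing water",
}

def extract_symbolic_knowledge(candidates: List[str]) -> List[str]:
    """Collect symbolic facts for each candidate concept."""
    return [TOY_KB[c] for c in candidates if c in TOY_KB]

# Stage 2: prompt-based contrastive explanation generation.
# A plain string template stands in for the trained generator that CPACE
# guides with explanation prompts.
def generate_contrastive_explanation(question: str,
                                     candidates: List[str],
                                     knowledge: List[str]) -> str:
    facts = "; ".join(knowledge)
    options = " vs. ".join(candidates)
    return (f"To answer '{question}', compare {options}: {facts}. "
            f"The key difference between the candidates follows from these facts.")

# Stage 3: the generated explanation is used as external knowledge for the
# downstream QA model. A toy word-overlap scorer stands in for that model.
def answer(question: str, candidates: List[str]) -> str:
    knowledge = extract_symbolic_knowledge(candidates)
    explanation = generate_contrastive_explanation(question, candidates, knowledge)
    tokens = explanation.lower().split()
    def score(cand: str) -> int:
        return sum(w in tokens for w in cand.lower().split())
    return max(candidates, key=score)

print(answer("Where do people deposit money?", ["bank", "river"]))
```

The design point the sketch preserves is that the contrastive explanation is
generated once per question and then supplied to the downstream model as
ordinary input text, so a standard QA reader can consume it without
architectural changes.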
Related papers
- Knowledge Condensation and Reasoning for Knowledge-based VQA (arXiv, 2024-03-15)
  Recent studies retrieve knowledge passages from external knowledge bases and then use them to answer questions.
  We propose two synergistic models: a Knowledge Condensation model and a Knowledge Reasoning model.
  Our method achieves state-of-the-art performance on knowledge-based VQA datasets.
- Rainier: Reinforced Knowledge Introspector for Commonsense Question Answering (arXiv, 2022-10-06)
  We present Rainier, or Reinforced Knowledge Introspector, which learns to generate contextually relevant knowledge in response to given questions.
  Our approach starts by imitating knowledge generated by GPT-3, then learns to generate its own knowledge via reinforcement learning.
  Our work is the first to report that knowledge generated by models orders of magnitude smaller than GPT-3, even without direct supervision on the knowledge itself, can exceed the quality of knowledge elicited from GPT-3 for commonsense QA.
- Uncertainty-based Visual Question Answering: Estimating Semantic Inconsistency between Image and Knowledge Base (arXiv, 2022-07-27)
  The KVQA task aims to answer questions that require external knowledge in addition to an understanding of images and questions.
  Recent studies on KVQA inject external knowledge in a multi-modal form; as more knowledge is used, irrelevant information may be added and confuse question answering.
- A Unified End-to-End Retriever-Reader Framework for Knowledge-based VQA (arXiv, 2022-06-30)
  This paper presents a unified end-to-end retriever-reader framework for knowledge-based VQA.
  We shed light on the multi-modal implicit knowledge from vision-language pre-training models to mine its potential for knowledge reasoning.
  Our scheme not only provides guidance for knowledge retrieval, but also drops instances that are potentially error-prone for question answering.
- KAT: A Knowledge Augmented Transformer for Vision-and-Language (arXiv, 2021-12-16)
  We propose a novel model, the Knowledge Augmented Transformer (KAT), which achieves a strong state-of-the-art result on the open-domain multimodal task of OK-VQA.
  Our approach integrates implicit and explicit knowledge in an end-to-end encoder-decoder architecture, while still jointly reasoning over both knowledge sources during answer generation.
  An additional benefit of explicit knowledge integration is the improved interpretability of model predictions in our analysis.
- Enhancing Question Generation with Commonsense Knowledge (arXiv, 2021-06-19)
  We propose a multi-task learning framework to introduce commonsense knowledge into the question generation process.
  Experimental results on SQuAD show that our proposed methods noticeably improve QG performance on both automatic and human evaluation metrics.
- KRISP: Integrating Implicit and Symbolic Knowledge for Open-Domain Knowledge-Based VQA (arXiv, 2020-12-20)
  One of the most challenging question types in VQA is when answering the question requires outside knowledge not present in the image.
  In this work we study open-domain knowledge: the setting in which the knowledge required to answer a question is not given or annotated at either training or test time.
  We tap into two types of knowledge representations and reasoning. First, implicit knowledge, which can be learned effectively from unsupervised language pre-training and supervised training data with transformer-based models.
- Sequential Latent Knowledge Selection for Knowledge-Grounded Dialogue (arXiv, 2020-02-18)
  We propose a sequential latent variable model as the first approach to this problem.
  The model, named the Sequential Knowledge Transformer (SKT), can keep track of the prior and posterior distributions over knowledge.