Ranking Facts for Explaining Answers to Elementary Science Questions
- URL: http://arxiv.org/abs/2110.09036v1
- Date: Mon, 18 Oct 2021 06:15:11 GMT
- Title: Ranking Facts for Explaining Answers to Elementary Science Questions
- Authors: Jennifer D'Souza and Isaiah Onando Mulang' and Soeren Auer
- Abstract summary: In elementary science exams, students select one answer from among typically four choices and can explain why they made that particular choice.
We consider the novel task of generating explanations for answers from human-authored facts.
Explanations are created from a human-annotated set of nearly 5,000 candidate facts in the WorldTree corpus.
- Score: 1.4091801425319965
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In multiple-choice exams, students select one answer from among typically
four choices and can explain why they made that particular choice. Students are
good at understanding natural language questions and, based on their domain
knowledge, can easily infer the question's answer by 'connecting the dots'
across various pertinent facts.
Considering automated reasoning for elementary science question answering, we
address the novel task of generating explanations for answers from
human-authored facts. For this, we examine the practically scalable framework
of feature-rich support vector machines leveraging domain-targeted,
hand-crafted features. Explanations are created from a human-annotated set of
nearly 5,000 candidate facts in the WorldTree corpus. Our aim is to obtain
better matches for valid facts of an explanation for the correct answer of a
question over the available fact candidates. To this end, our features offer a
comprehensive linguistic and semantic unification paradigm. The machine
learning problem is the preference ordering of facts, for which we test
pointwise regression versus pairwise learning-to-rank.
Our contributions are: (1) a case study in which two preference-ordering
approaches are systematically compared; (2) a practically competent approach
that can outperform some variants of BERT-based reranking models; and (3)
human-engineered features that make it an interpretable machine learning
model for the task.
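To make the two preference-ordering formulations concrete, here is a minimal sketch contrasting pointwise regression (score each question-fact pair independently, then sort) with pairwise learning-to-rank (classify the sign of feature differences between fact pairs, in the spirit of RankSVM). This is an illustration under stated assumptions, not the paper's implementation: the feature vectors, sizes, and labels below are random placeholders standing in for the paper's domain-targeted, hand-crafted features, and the sklearn estimators are one reasonable choice of linear SVM machinery.

```python
import numpy as np
from sklearn.svm import SVR, LinearSVC

# Hypothetical setup: each (question, candidate fact) pair is represented by a
# hand-crafted feature vector; `relevance` marks gold explanation facts.
# Sizes and values are placeholders, not taken from the paper.
rng = np.random.default_rng(0)
n_facts, n_features = 100, 20
X = rng.normal(size=(n_facts, n_features))    # stand-in feature vectors
relevance = rng.integers(0, 2, size=n_facts)  # 1 = fact belongs to the explanation

# --- Pointwise regression: predict a relevance score per fact, then sort. ---
pointwise = SVR(kernel="linear").fit(X, relevance)
pointwise_ranking = np.argsort(-pointwise.predict(X))

# --- Pairwise learning-to-rank (RankSVM-style): classify the sign of the
# feature difference between facts with different relevance labels. ---
pairs, signs = [], []
for i in range(n_facts):
    for j in range(n_facts):
        if relevance[i] > relevance[j]:
            pairs.append(X[i] - X[j])  # fact i should rank above fact j
            signs.append(1)
            pairs.append(X[j] - X[i])  # symmetric negative example
            signs.append(-1)
pairwise = LinearSVC().fit(np.asarray(pairs), np.asarray(signs))

# The learned weight vector induces a score for any single fact, so ranking
# again reduces to sorting by a linear function of the features.
pairwise_ranking = np.argsort(-(X @ pairwise.coef_.ravel()))
```

The pointwise model regresses an absolute relevance score per fact, while the pairwise model learns only relative preferences; at prediction time both reduce to sorting candidate facts by a linear score, which is what makes the two formulations directly comparable over the roughly 5,000 WorldTree candidates.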
Related papers
- RECKONING: Reasoning through Dynamic Knowledge Encoding [51.076603338764706]
We show that language models can answer questions by reasoning over knowledge provided as part of the context.
However, when that context also contains irrelevant information, the model fails to distinguish the knowledge that is necessary to answer the question.
We propose teaching the model to reason more robustly by folding the provided contextual knowledge into the model's parameters.
arXiv Detail & Related papers (2023-05-10T17:54:51Z)
- STREET: A Multi-Task Structured Reasoning and Explanation Benchmark [56.555662318619135]
We introduce a unified multi-task and multi-domain natural language reasoning and explanation benchmark.
We expect models to not only answer questions, but also produce step-by-step structured explanations describing how premises in the question are used to produce intermediate conclusions that can prove the correctness of a certain answer.
arXiv Detail & Related papers (2023-02-13T22:34:02Z)
- Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting [80.9896041501715]
Explanations that have not been "tuned" for a task, such as off-the-shelf explanations written by nonexperts, may lead to mediocre performance.
This paper tackles the problem of optimizing explanation-infused prompts in a black-box fashion.
arXiv Detail & Related papers (2023-02-09T18:02:34Z)
- Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering [124.16250115608604]
We present Science Question Answering (SQA), a new benchmark that consists of 21k multimodal multiple-choice questions with a diverse set of science topics and annotations of their answers with corresponding lectures and explanations.
We show that generating lectures and explanations as a chain of thought (CoT) improves the question answering performance by 1.20% in few-shot GPT-3 and 3.99% in fine-tuned UnifiedQA.
Our analysis further shows that language models, similar to humans, benefit from explanations to learn from fewer data and achieve the same performance with just 40% of the data.
arXiv Detail & Related papers (2022-09-20T07:04:24Z)
- Single-Turn Debate Does Not Help Humans Answer Hard Reading-Comprehension Questions [29.932543276414602]
We build a dataset of single arguments for both a correct and incorrect answer option in a debate-style set-up.
We use long contexts -- humans familiar with the context write convincing explanations for pre-selected correct and incorrect answers.
We test if those explanations allow humans who have not read the full context to more accurately determine the correct answer.
arXiv Detail & Related papers (2022-04-11T15:56:34Z)
- REX: Reasoning-aware and Grounded Explanation [30.392986232906107]
We develop a new type of multi-modal explanation that explains a model's decisions by traversing the reasoning process and grounding keywords in the images.
We also identify the critical need to tightly couple important components across the visual and textual modalities for explaining the decisions.
Finally, we propose a novel explanation generation method that explicitly models the pairwise correspondence between words and regions of interest.
arXiv Detail & Related papers (2022-03-11T17:28:42Z)
- Rethinking Explainability as a Dialogue: A Practitioner's Perspective [57.87089539718344]
We ask doctors, healthcare professionals, and policymakers about their needs and desires for explanations.
Our study indicates that decision-makers would strongly prefer interactive explanations in the form of natural language dialogues.
Considering these needs, we outline a set of five principles researchers should follow when designing interactive explanations.
arXiv Detail & Related papers (2022-02-03T22:17:21Z)
- Tell me why! -- Explanations support learning of relational and causal structure [24.434551113103105]
Explanations play a considerable role in human learning, especially in areas that remain major challenges for AI.
We show that reinforcement learning agents might likewise benefit from explanations.
Our results suggest that learning from explanations is a powerful principle that could offer a promising path towards training more robust and general machine learning systems.
arXiv Detail & Related papers (2021-12-07T15:09:06Z)
- ExplanationLP: Abductive Reasoning for Explainable Science Question Answering [4.726777092009554]
This paper frames question answering as an abductive reasoning problem.
We construct plausible explanations for each choice and then select the candidate with the best explanation as the final answer.
Our system, ExplanationLP, elicits explanations by constructing a weighted graph of relevant facts for each candidate answer.
arXiv Detail & Related papers (2020-10-25T14:49:24Z)
- Information-Theoretic Probing for Linguistic Structure [74.04862204427944]
We propose an information-theoretic operationalization of probing as estimating mutual information.
We evaluate on a set of ten typologically diverse languages often underrepresented in NLP research.
arXiv Detail & Related papers (2020-04-07T01:06:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.