Multi-grained Evidence Inference for Multi-choice Reading Comprehension
- URL: http://arxiv.org/abs/2310.18070v1
- Date: Fri, 27 Oct 2023 11:36:18 GMT
- Title: Multi-grained Evidence Inference for Multi-choice Reading Comprehension
- Authors: Yilin Zhao, Hai Zhao and Sufeng Duan
- Abstract summary: Multi-choice Machine Reading Comprehension (MRC) is a major and challenging task in which machines answer questions by choosing among provided options.
We propose a novel general-purpose model enhancement that integrates multi-grained evidence comprehensively, named Multi-grained evidence inferencer (Mugen).
Mugen extracts three different granularities of evidence and integrates them with the original passages, achieving significant and consistent performance improvements on four multi-choice MRC benchmarks.
- Score: 62.0773160298008
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-choice Machine Reading Comprehension (MRC) is a major and
challenging task in which machines answer questions by choosing among provided
options. Answers in multi-choice MRC cannot be directly extracted from the
given passages; the task essentially requires machines capable of reasoning
over accurately extracted evidence. However, the critical evidence may be as
short as a single word or phrase, yet it is hidden in a redundant, noisy
passage that spans multiple linguistic levels, from phrase and fragment to
sentence and the entire passage. We thus propose a novel general-purpose model
enhancement that integrates multi-grained evidence comprehensively, named
Multi-grained evidence inferencer (Mugen), to address this limitation. Mugen
extracts three granularities of evidence: coarse-, middle- and fine-grained
evidence, and integrates them with the original passages, achieving
significant and consistent performance improvements on four multi-choice MRC
benchmarks.
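The abstract describes Mugen only at a high level, so here is a minimal sketch of the multi-grained idea, assuming a simple lexical-overlap scorer in place of Mugen's trained evidence extractor: evidence is selected at three granularities (an adjacent-sentence fragment, a single sentence, and a short phrase) and concatenated with the original passage and option. The function names, granularity choices, and input format are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of multi-grained evidence extraction for multi-choice MRC.
# NOTE: illustrative only; a simple lexical-overlap scorer stands in for
# Mugen's trained evidence extractor, and the granularity choices below
# (adjacent-sentence fragment / sentence / short phrase) are assumptions.
import re
from typing import Tuple


def _tokens(text: str) -> set:
    return set(re.findall(r"[a-z0-9']+", text.lower()))


def _overlap(candidate: str, query: str) -> float:
    """Score a candidate evidence span by token overlap with the query."""
    c, q = _tokens(candidate), _tokens(query)
    return len(c & q) / (len(q) or 1)


def extract_evidence(passage: str, question: str, option: str) -> Tuple[str, str, str]:
    """Return (coarse, middle, fine) evidence spans for one answer option."""
    query = f"{question} {option}"
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", passage) if s.strip()]
    # Coarse-grained: best-scoring fragment of two adjacent sentences.
    fragments = [" ".join(sentences[i:i + 2]) for i in range(len(sentences))]
    coarse = max(fragments, key=lambda f: _overlap(f, query))
    # Middle-grained: best single sentence.
    middle = max(sentences, key=lambda s: _overlap(s, query))
    # Fine-grained: best short phrase (4-word window) inside that sentence.
    words = middle.split()
    phrases = [" ".join(words[i:i + 4]) for i in range(max(1, len(words) - 3))]
    fine = max(phrases, key=lambda p: _overlap(p, query))
    return coarse, middle, fine


def build_input(passage: str, question: str, option: str) -> str:
    """Integrate the extracted evidence with the original passage and option."""
    coarse, middle, fine = extract_evidence(passage, question, option)
    return (f"[EVIDENCE] {coarse} | {middle} | {fine} "
            f"[PASSAGE] {passage} [QUESTION] {question} [OPTION] {option}")


if __name__ == "__main__":
    passage = ("The committee met on Tuesday. It postponed the vote because two "
               "members were absent. A new date was set for the following week.")
    question = "Why was the vote postponed?"
    for option in ["Two members were absent.", "The room was unavailable."]:
        print(build_input(passage, question, option))
```

In the actual model, the overlap heuristic would presumably be replaced by learned evidence scoring, with the integrated input fed to a pretrained encoder that scores each option.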
Related papers
- Piecing It All Together: Verifying Multi-Hop Multimodal Claims [39.68850054331197]
We introduce a new task: multi-hop multimodal claim verification.
This task challenges models to reason over multiple pieces of evidence from diverse sources, including text, images, and tables.
We construct MMCV, a large-scale dataset comprising 16k multi-hop claims paired with multimodal evidence, with additional input from human feedback.
arXiv Detail & Related papers (2024-11-14T16:01:33Z)
- KS-LLM: Knowledge Selection of Large Language Models with Evidence Document for Question Answering [35.87885118640294]
Large language models (LLMs) suffer from the hallucination problem and face significant challenges when applied to knowledge-intensive tasks.
We propose a novel Knowledge Selection of Large Language Models (KS-LLM) method, aiming to identify valuable information from evidence documents.
We first generate triples based on the input question, then select the evidence sentences from the evidence document that are most similar to the triples, and finally combine the evidence sentences and triples to assist large language models in generating answers.
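The three-step pipeline just summarized (triples from the question, similarity-based sentence selection, combined prompting) can be sketched roughly as follows; the triple generator and similarity measure are crude stand-ins (the paper presumably uses an LLM and learned embeddings), and every name here is hypothetical rather than taken from KS-LLM.

```python
# Hedged sketch of the triple-guided evidence selection pipeline summarized above.
# The triple generator and similarity function are crude stand-ins; all names
# are hypothetical, not the KS-LLM implementation.
import re
from typing import List, Tuple


def _tokens(text: str) -> set:
    return set(re.findall(r"[a-z0-9']+", text.lower()))


def generate_triples(question: str) -> List[Tuple[str, str, str]]:
    # Stand-in: a real system would prompt an LLM for (subject, relation, object)
    # triples grounded in the question; here we emit one crude placeholder triple.
    words = question.rstrip("?").split()
    return [(words[0], " ".join(words[1:-1]) or "related to", words[-1])]


def select_evidence(document: str, triples: List[Tuple[str, str, str]], k: int = 2) -> List[str]:
    """Pick the k sentences of the evidence document most similar to the triples."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", document) if s.strip()]
    triple_text = " ".join(" ".join(t) for t in triples)
    ranked = sorted(sentences, key=lambda s: len(_tokens(s) & _tokens(triple_text)), reverse=True)
    return ranked[:k]


def build_prompt(question: str, document: str) -> str:
    """Combine triples and selected evidence sentences into one answering prompt."""
    triples = generate_triples(question)
    evidence = select_evidence(document, triples)
    return (f"Triples: {triples}\n"
            f"Evidence: {' '.join(evidence)}\n"
            f"Question: {question}\nAnswer:")
```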
arXiv Detail & Related papers (2024-04-24T05:32:41Z)
- AQE: Argument Quadruplet Extraction via a Quad-Tagging Augmented Generative Approach [40.510976649949576]
We propose a challenging argument quadruplet extraction task (AQE).
AQE can provide an all-in-one extraction of four argumentative components, i.e., claims, evidence, evidence types, and stances.
We propose a novel quad-tagging augmented generative approach, which leverages a quadruplet tagging module to augment the training of the generative framework.
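For concreteness, the quadruplet output described above can be pictured as a simple record of the four components; the field names and example label values below are assumptions for illustration, not the task's actual schema.

```python
# The quadruplet structure described above, shown as a simple record; field
# names and example label values are assumptions, not the dataset's schema.
from dataclasses import dataclass


@dataclass
class ArgumentQuadruplet:
    claim: str
    evidence: str
    evidence_type: str  # e.g. "research", "expert", "case" (assumed label set)
    stance: str         # e.g. "support" or "contest" (assumed label set)


example = ArgumentQuadruplet(
    claim="Remote work increases productivity.",
    evidence="A 2020 survey reported higher output among remote employees.",
    evidence_type="research",
    stance="support",
)
```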
arXiv Detail & Related papers (2023-05-31T14:35:53Z)
- Give Me More Details: Improving Fact-Checking with Latent Retrieval [58.706972228039604]
Evidence plays a crucial role in automated fact-checking.
Existing fact-checking systems either assume the evidence sentences are given or use the search snippets returned by the search engine.
We propose to incorporate full text from source documents as evidence and introduce two enriched datasets.
arXiv Detail & Related papers (2023-05-25T15:01:19Z)
- Answering Questions by Meta-Reasoning over Multiple Chains of Thought [53.55653437903948]
We introduce Multi-Chain Reasoning (MCR), an approach which prompts large language models to meta-reason over multiple chains of thought.
MCR examines different reasoning chains, mixes information between them and selects the most relevant facts in generating an explanation and predicting the answer.
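A hedged sketch of that meta-reasoning loop: sample several chains of thought at non-zero temperature, then prompt the model once more to read all of them and produce an explanation plus a final answer. The `call_llm` helper is a hypothetical stand-in for a real LLM client, and this is not the authors' MCR implementation.

```python
# Rough illustration of meta-reasoning over multiple chains of thought: sample
# several chains, then ask the model to combine them into one explanation and
# answer. `call_llm` is a hypothetical stand-in for a real LLM client.
from typing import List


def call_llm(prompt: str, temperature: float = 0.0) -> str:
    # Placeholder: swap in an actual LLM API call here.
    return "(model output would appear here)"


def sample_chains(question: str, n: int = 3) -> List[str]:
    """Sample n independent chains of thought for the question."""
    prompt = f"Question: {question}\nLet's think step by step."
    return [call_llm(prompt, temperature=0.7) for _ in range(n)]


def meta_reason(question: str) -> str:
    """Prompt the model to read all sampled chains and produce a final answer."""
    chains = sample_chains(question)
    listing = "\n\n".join(f"Chain {i + 1}:\n{c}" for i, c in enumerate(chains))
    meta_prompt = (
        f"Question: {question}\n\n"
        f"Candidate reasoning chains:\n{listing}\n\n"
        "Combine the relevant facts across the chains, explain your reasoning, "
        "and give a final answer."
    )
    return call_llm(meta_prompt, temperature=0.0)
```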
arXiv Detail & Related papers (2023-04-25T17:27:37Z)
- Multi-hop Inference for Question-driven Summarization [39.08269647808958]
We propose a novel question-driven abstractive summarization method, Multi-hop Selective Generator (MSG).
MSG incorporates multi-hop reasoning into question-driven summarization while also providing justifications for the generated summaries.
Experimental results show that the proposed method consistently outperforms state-of-the-art methods on two non-factoid QA datasets.
arXiv Detail & Related papers (2020-10-08T02:36:39Z)
- Composing Answer from Multi-spans for Reading Comprehension [77.32873012668783]
We present a novel method to generate answers for non-extraction machine reading comprehension (MRC) tasks.
The proposed method performs better at accurately generating long answers, and substantially outperforms two competitive baseline decoders, a typical one-span decoder and a Seq2Seq decoder.
arXiv Detail & Related papers (2020-09-14T01:44:42Z)
- A Self-Training Method for Machine Reading Comprehension with Soft Evidence Extraction [89.88061141170512]
We present a Self-Training method (STM) to train machine reading comprehension models.
At each iteration, a base MRC model is trained with golden answers and noisy evidence labels.
The trained model then predicts pseudo evidence labels, which serve as extra supervision in the next iteration.
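That iteration can be outlined schematically as below; `MRCModel` is a hypothetical placeholder interface rather than the authors' code, and the loop simply alternates training on the current evidence labels with predicting pseudo labels for the next round.

```python
# Schematic outline of the self-training loop summarized above (not the authors'
# code): each round trains the MRC model on gold answers plus the current noisy
# evidence labels, then uses its evidence predictions as labels for the next
# round. `MRCModel` is a hypothetical placeholder interface.
from typing import Dict, List


class MRCModel:
    """Hypothetical base MRC model with joint answer and evidence heads."""

    def fit(self, examples: List[Dict], evidence_labels: List[List[int]]) -> None:
        pass  # train on gold answers + current (noisy or pseudo) evidence labels

    def predict_evidence(self, examples: List[Dict]) -> List[List[int]]:
        # One relevance label per passage sentence for each example (dummy: zeros).
        return [[0] * len(ex["sentences"]) for ex in examples]


def self_train(examples: List[Dict], initial_labels: List[List[int]], rounds: int = 3) -> MRCModel:
    evidence_labels = initial_labels  # noisy labels, e.g. from a distant heuristic
    model = MRCModel()
    for _ in range(rounds):
        model = MRCModel()  # retrain a fresh base model each iteration
        model.fit(examples, evidence_labels)
        evidence_labels = model.predict_evidence(examples)  # pseudo labels for next round
    return model
```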
arXiv Detail & Related papers (2020-05-11T15:26:07Z)
- DUMA: Reading Comprehension with Transposition Thinking [107.89721765056281]
Multi-choice Machine Reading Comprehension (MRC) requires a model to decide on the correct answer from a set of answer options when given a passage and a question.
The new DUal Multi-head Co-Attention (DUMA) model is inspired by the human transposition-thinking process for solving the multi-choice MRC problem.
arXiv Detail & Related papers (2020-01-26T07:35:02Z)