How Many Answers Should I Give? An Empirical Study of Multi-Answer
Reading Comprehension
- URL: http://arxiv.org/abs/2306.00435v1
- Date: Thu, 1 Jun 2023 08:22:21 GMT
- Title: How Many Answers Should I Give? An Empirical Study of Multi-Answer
Reading Comprehension
- Authors: Chen Zhang, Jiuheng Lin, Xiao Liu, Yuxuan Lai, Yansong Feng, Dongyan
Zhao
- Abstract summary: We design a taxonomy to categorize commonly-seen multi-answer MRC instances.
We analyze how well different paradigms of current multi-answer MRC models deal with different types of multi-answer instances.
- Score: 64.76737510530184
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The multi-answer phenomenon, where a question may have multiple answers scattered across the document, is handled easily by humans but remains challenging for machine reading comprehension (MRC) systems. Despite recent progress in multi-answer MRC, a systematic analysis of how this phenomenon arises and how to better address it is still lacking. In this work, we design a taxonomy to
categorize commonly-seen multi-answer MRC instances, with which we inspect
three multi-answer datasets and analyze where the multi-answer challenge comes
from. We further analyze how well different paradigms of current multi-answer
MRC models handle different types of multi-answer instances. We find that some paradigms capture the key information in questions well, while others better model the relationship between questions and contexts. We therefore explore strategies that combine the strengths of different paradigms.
Experiments show that generation models can be a promising platform to
incorporate different paradigms. Our annotations and code are released for
further research.
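The abstract's closing observation, that generation models are a promising platform for incorporating different paradigms, can be made concrete: a sequence-to-sequence model sidesteps the question of how many answers to give by emitting a variable-length, delimiter-separated answer string. Below is a minimal, hedged sketch of this idea; the model choice (flan-t5-base), prompt wording, and ';' delimiter are illustrative assumptions, not the paper's actual setup.

```python
# A minimal sketch: multi-answer MRC as sequence generation, where the model
# itself decides how many answers to emit. Model, prompt, and delimiter are
# illustrative assumptions, not the paper's configuration.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "google/flan-t5-base"  # assumption: any instruction-tuned seq2seq model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

question = "Which countries border Germany?"
context = ("Germany shares land borders with Denmark, Poland, the Czech "
           "Republic, Austria, Switzerland, France, Luxembourg, Belgium, "
           "and the Netherlands.")
prompt = (f"List every correct answer, separated by ';'.\n"
          f"Question: {question}\nContext: {context}")

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
decoded = tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Recover the variable-length answer set by splitting on the delimiter.
answers = [a.strip() for a in decoded.split(";") if a.strip()]
print(answers)
```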
Related papers
- Piecing It All Together: Verifying Multi-Hop Multimodal Claims [39.68850054331197]
We introduce a new task: multi-hop multimodal claim verification.
This task challenges models to reason over multiple pieces of evidence from diverse sources, including text, images, and tables.
We construct MMCV, a large-scale dataset comprising 16k multi-hop claims paired with multimodal evidence, with additional input from human feedback.
arXiv Detail & Related papers (2024-11-14T16:01:33Z)
- Benchmarking Multimodal Retrieval Augmented Generation with Dynamic VQA Dataset and Self-adaptive Planning Agent [102.31558123570437]
Multimodal Retrieval Augmented Generation (mRAG) plays an important role in mitigating the "hallucination" issue inherent in multimodal large language models (MLLMs).
We propose the first self-adaptive planning agent for multimodal retrieval, OmniSearch.
arXiv Detail & Related papers (2024-11-05T09:27:21Z)
- AQA: Adaptive Question Answering in a Society of LLMs via Contextual Multi-Armed Bandit [59.10281630985958]
In question answering (QA), different questions can be effectively addressed with different answering strategies.
We develop a dynamic method that adaptively selects the most suitable QA strategy for each question.
Our experiments show that the proposed solution is viable for adaptive orchestration of a QA system with multiple modules.
arXiv Detail & Related papers (2024-09-20T12:28:18Z)
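As a rough illustration of the contextual-bandit idea behind AQA (not its actual algorithm), the toy sketch below runs an epsilon-greedy bandit over a few hypothetical QA strategies; the context feature and reward signal are placeholder assumptions.

```python
# Toy epsilon-greedy contextual bandit over hypothetical QA strategies.
# Illustrative only: AQA's real context features, arms, and rewards differ.
import random
from collections import defaultdict

STRATEGIES = ["direct_answer", "retrieve_then_answer", "decompose_multi_hop"]

def context_features(question: str) -> str:
    # Placeholder context: bucket questions by a crude length signal.
    return "long" if len(question.split()) > 12 else "short"

class EpsilonGreedyBandit:
    def __init__(self, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.counts = defaultdict(int)    # (context, arm) -> number of pulls
        self.values = defaultdict(float)  # (context, arm) -> running mean reward

    def select(self, ctx: str) -> str:
        if random.random() < self.epsilon:
            return random.choice(STRATEGIES)  # explore
        return max(STRATEGIES, key=lambda arm: self.values[(ctx, arm)])  # exploit

    def update(self, ctx: str, arm: str, reward: float) -> None:
        key = (ctx, arm)
        self.counts[key] += 1
        self.values[key] += (reward - self.values[key]) / self.counts[key]

bandit = EpsilonGreedyBandit()
for _ in range(100):
    question = "Who wrote Hamlet?"
    ctx = context_features(question)
    arm = bandit.select(ctx)
    reward = 1.0 if arm == "direct_answer" else 0.2  # stand-in for answer quality
    bandit.update(ctx, arm, reward)
print({k: round(v, 2) for k, v in bandit.values.items()})
```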
- Brainstorming Brings Power to Large Language Models of Knowledge Reasoning [17.14501985068287]
Large Language Models (LLMs) have demonstrated amazing capabilities in language generation, text comprehension, and knowledge reasoning.
Recent studies have further improved models' reasoning ability on a wide range of tasks by introducing multi-model collaboration.
We propose a prompt-based multi-model brainstorming approach: different models are grouped together for brainstorming, and after multiple rounds of reasoning elaboration and re-inference, a consensus answer is reached.
arXiv Detail & Related papers (2024-06-02T14:47:14Z)
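A hedged sketch of the brainstorming loop described above: several models answer, see each other's answers, and re-infer until they agree or a round limit is hit. The stand-in models and stopping rule are assumptions for illustration.

```python
# Toy consensus loop in the spirit of multi-model brainstorming. The real
# method's prompting and consensus criteria are not reproduced here.
from collections import Counter
from typing import Callable, List

Model = Callable[[str, List[str]], str]  # (question, peer answers) -> answer

def brainstorm(models: List[Model], question: str, max_rounds: int = 3) -> str:
    answers = [m(question, []) for m in models]  # initial independent answers
    for _ in range(max_rounds):
        if len(set(answers)) == 1:  # full consensus reached
            return answers[0]
        # Each model re-infers after seeing the group's current answers.
        answers = [m(question, answers) for m in models]
    return Counter(answers).most_common(1)[0][0]  # fall back to majority vote

# Toy usage with three stand-in "models" of fixed behavior.
stubborn = lambda q, peers: "42"
agreeable = lambda q, peers: peers[0] if peers else "41"
follower = lambda q, peers: Counter(peers).most_common(1)[0][0] if peers else "42"
print(brainstorm([stubborn, agreeable, follower], "What is the answer?"))  # "42"
```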
- An Empirical Investigation into Benchmarking Model Multiplicity for Trustworthy Machine Learning: A Case Study on Image Classification [0.8702432681310401]
This paper offers a one-stop empirical benchmark of multiplicity across various dimensions of model design.
We also develop a framework, which we call multiplicity sheets, to benchmark multiplicity in various scenarios.
We show that multiplicity persists in deep learning models even after enforcing additional specifications during model selection.
arXiv Detail & Related papers (2023-11-24T22:30:38Z)
- Read, Look or Listen? What's Needed for Solving a Multimodal Dataset [7.0430001782867]
We propose a two-step method to analyze multimodal datasets, which leverages a small seed of human annotation to map each multimodal instance to the modalities required to process it.
We apply our approach to TVQA, a video question-answering dataset, and discover that most questions can be answered using a single modality, without a substantial bias towards any specific modality.
We also analyze MERLOT Reserve, finding that it struggles with image-based questions compared to text- and audio-based ones, as well as with auditory speaker identification.
arXiv Detail & Related papers (2023-07-06T08:02:45Z)
- Multimodality Representation Learning: A Survey on Evolution, Pretraining and Its Applications [47.501121601856795]
Multimodality Representation Learning is a technique for learning to embed information from different modalities and their correlations.
Cross-modal interaction and complementary information from different modalities are crucial for advanced models to perform any multimodal task.
This survey presents the literature on the evolution and enhancement of deep learning multimodal architectures.
arXiv Detail & Related papers (2023-02-01T11:48:34Z)
- MetaQA: Combining Expert Agents for Multi-Skill Question Answering [49.35261724460689]
We argue that despite the promising results of multi-dataset models, some domains or QA formats might require specific architectures.
We propose to combine expert agents with a novel, flexible, and training-efficient architecture that considers questions, answer predictions, and answer-prediction confidence scores.
arXiv Detail & Related papers (2021-12-03T14:05:52Z)
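The expert-agent combination in MetaQA can be sketched in miniature: each agent returns an answer with a confidence score, and a selector picks among them. MetaQA's actual selector is a trained architecture; the argmax below is only an illustrative stand-in, and the agents are hypothetical.

```python
# Toy confidence-aware selection over expert QA agents. MetaQA's real selector
# is a learned model; this argmax over reported confidences is a stand-in.
from typing import Callable, List, Tuple

Agent = Callable[[str], Tuple[str, float]]  # question -> (answer, confidence)

def meta_answer(agents: List[Agent], question: str) -> str:
    predictions = [agent(question) for agent in agents]
    # Choose the answer whose agent reported the highest confidence.
    best_answer, _ = max(predictions, key=lambda pred: pred[1])
    return best_answer

# Toy usage with two hypothetical expert agents.
span_extractor = lambda q: ("Paris", 0.72)
freeform_agent = lambda q: ("The capital of France is Paris.", 0.55)
print(meta_answer([span_extractor, freeform_agent], "What is the capital of France?"))
```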
- Multi-Perspective Abstractive Answer Summarization [76.10437565615138]
Community Question Answering forums contain a rich resource of answers to a wide range of questions.
The goal of multi-perspective answer summarization is to produce a summary that includes all perspectives of the answer.
This work introduces a novel dataset creation method to automatically create multi-perspective, bullet-point abstractive summaries.
arXiv Detail & Related papers (2021-04-17T13:15:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences arising from its use.