Visual Question Rewriting for Increasing Response Rate
- URL: http://arxiv.org/abs/2106.02257v1
- Date: Fri, 4 Jun 2021 04:46:47 GMT
- Title: Visual Question Rewriting for Increasing Response Rate
- Authors: Jiayi Wei, Xilian Li, Yi Zhang, Xin Wang
- Abstract summary: We explore how to automatically rewrite natural language questions to improve the response rate from people.
A new task of Visual Question Rewriting (VQR) is introduced to explore how visual information can be used to improve the rewritten questions.
- Score: 12.700769102964088
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When a human asks questions online, or when a conversational virtual agent
asks a human questions, questions that trigger emotions or include details may be
more likely to get responses or answers. We explore how to automatically rewrite
natural language questions to improve the response rate from people. In
particular, a new task of Visual Question Rewriting (VQR) is introduced to
explore how visual information can be used to improve the rewritten questions. A
dataset containing around 4K triples of bland questions, attractive questions,
and images is collected. We developed baseline sequence-to-sequence models
and more advanced transformer-based models, which take a bland question and a
related image as input and output a rewritten question that is expected to be
more attractive. Offline experiments and Mechanical Turk based evaluations show
that it is possible to rewrite bland questions in a more detailed and
attractive way to increase the response rate, and that images can be helpful.
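
The abstract gives no implementation details, so the following is a minimal sketch, assuming a PyTorch-style setup, of how a transformer-based rewriter conditioned on both a bland question and a related image might be wired together. The class name, layer sizes, vocabulary size, and the ResNet-18 image encoder are illustrative assumptions, not the authors' actual architecture.

    # Illustrative sketch only: encode the image, prepend it as a single
    # "visual token" to the embedded bland question, and decode the
    # rewritten question with a standard transformer encoder-decoder.
    import torch
    import torch.nn as nn
    import torchvision.models as models

    class VQRSketch(nn.Module):
        def __init__(self, vocab_size=10000, d_model=256, nhead=8, num_layers=3):
            super().__init__()
            resnet = models.resnet18(weights=None)          # assumed image backbone
            self.image_encoder = nn.Sequential(*list(resnet.children())[:-1])
            self.image_proj = nn.Linear(512, d_model)
            self.embed = nn.Embedding(vocab_size, d_model)  # shared token embeddings
            self.transformer = nn.Transformer(
                d_model=d_model, nhead=nhead,
                num_encoder_layers=num_layers, num_decoder_layers=num_layers,
                batch_first=True,
            )
            self.out = nn.Linear(d_model, vocab_size)

        def forward(self, image, bland_ids, target_ids):
            img_feat = self.image_encoder(image).flatten(1)       # (B, 512)
            img_tok = self.image_proj(img_feat).unsqueeze(1)      # (B, 1, d_model)
            src = torch.cat([img_tok, self.embed(bland_ids)], 1)  # image + question
            tgt = self.embed(target_ids)
            tgt_mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))
            hidden = self.transformer(src, tgt, tgt_mask=tgt_mask)
            return self.out(hidden)                               # (B, T, vocab_size)

    model = VQRSketch()
    image = torch.randn(2, 3, 224, 224)        # a batch of two images
    bland = torch.randint(0, 10000, (2, 12))   # tokenised bland questions
    target = torch.randint(0, 10000, (2, 15))  # tokenised rewritten questions
    print(model(image, bland, target).shape)   # torch.Size([2, 15, 10000])

Positional encodings, tokenisation, and the training loop (for example, teacher forcing with cross-entropy loss over the 4K triples) are omitted; the seq2seq baselines mentioned in the abstract would differ mainly in replacing the transformer with a recurrent encoder-decoder.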
Related papers
- Long-Form Answers to Visual Questions from Blind and Low Vision People [54.00665222249701]
VizWiz-LF is a dataset of long-form answers to visual questions posed by blind and low vision (BLV) users.
We develop and annotate functional roles of sentences in LFVQA and demonstrate that long-form answers contain information beyond the direct answer to the question.
arXiv Detail & Related papers (2024-08-12T17:15:02Z) - Ask Questions with Double Hints: Visual Question Generation with Answer-awareness and Region-reference [107.53380946417003]
We propose a novel learning paradigm to generate visual questions with answer-awareness and region-reference.
We develop a simple methodology to self-learn the visual hints without introducing any additional human annotations.
arXiv Detail & Related papers (2024-07-06T15:07:32Z) - Which questions should I answer? Salience Prediction of Inquisitive Questions [118.097974193544]
We show that highly salient questions are empirically more likely to be answered in the same article.
We further validate our findings by showing that answering salient questions is an indicator of summarization quality in news.
arXiv Detail & Related papers (2024-04-16T21:33:05Z) - Answering Ambiguous Questions with a Database of Questions, Answers, and
Revisions [95.92276099234344]
We present a new state-of-the-art for answering ambiguous questions that exploits a database of unambiguous questions generated from Wikipedia.
Our method improves performance by 15% on recall measures and 10% on measures which evaluate disambiguating questions from predicted outputs.
arXiv Detail & Related papers (2023-08-16T20:23:16Z) - Weakly Supervised Visual Question Answer Generation [2.7605547688813172]
We present a weakly supervised method that synthetically generates question-answer pairs procedurally from visual information and captions.
We perform an exhaustive experimental analysis on the VQA dataset and see that our model significantly outperforms SOTA methods on BLEU scores.
arXiv Detail & Related papers (2023-06-11T08:46:42Z) - Conversational QA Dataset Generation with Answer Revision [2.5838973036257458]
We introduce a novel framework that extracts question-worthy phrases from a passage and then generates corresponding questions considering previous conversations.
Our framework revises the extracted answers after generating questions so that answers exactly match paired questions.
arXiv Detail & Related papers (2022-09-23T04:05:38Z) - Unified Questioner Transformer for Descriptive Question Generation in
Goal-Oriented Visual Dialogue [0.0]
Building an interactive artificial intelligence that can ask questions about the real world is one of the biggest challenges for vision and language problems.
We propose a novel Questioner architecture, called Unified Questioner Transformer (UniQer)
We build a goal-oriented visual dialogue task called CLEVR Ask. It synthesizes complex scenes that require the Questioner to generate descriptive questions.
arXiv Detail & Related papers (2021-06-29T16:36:34Z) - GooAQ: Open Question Answering with Diverse Answer Types [63.06454855313667]
We present GooAQ, a large-scale dataset with a variety of answer types.
This dataset contains over 5 million questions and 3 million answers collected from Google.
arXiv Detail & Related papers (2021-04-18T05:40:39Z) - Generating Natural Questions from Images for Multimodal Assistants [4.930442416763205]
We present an approach for generating diverse and meaningful questions that consider image content and image metadata.
We evaluate our approach using standard evaluation metrics such as BLEU, METEOR, ROUGE, and CIDEr.
arXiv Detail & Related papers (2020-11-17T19:12:23Z) - Inquisitive Question Generation for High Level Text Comprehension [60.21497846332531]
We introduce INQUISITIVE, a dataset of 19K questions that are elicited while a person is reading through a document.
We show that readers engage in a series of pragmatic strategies to seek information.
We evaluate question generation models based on GPT-2 and show that our model is able to generate reasonable questions.
arXiv Detail & Related papers (2020-10-04T19:03:39Z)