Answering Subjective Induction Questions on Products by Summarizing
Multi-sources Multi-viewpoints Knowledge
- URL: http://arxiv.org/abs/2309.05938v2
- Date: Fri, 6 Oct 2023 12:35:20 GMT
- Title: Answering Subjective Induction Questions on Products by Summarizing
Multi-sources Multi-viewpoints Knowledge
- Authors: Yufeng Zhang (1 and 2), Meng-xiang Wang (3), and Jianxing Yu (1, 2 and
4) ((1) School of Artificial Intelligence, Sun Yat-sen University, Zhuhai
519082 (2) Guangdong Key Laboratory of Big Data Analysis and Processing,
510006, China (3) China National Institute of Standardization, 100088, China
(4) Pazhou Lab, Guangzhou, 510330, China)
- Abstract summary: This paper proposes a new task in the field of Answering Subjective Induction Question on Products.
The answer to this kind of question is non-unique, but can be interpreted from many perspectives.
A satisfactory answer should summarize these subjective opinions from multiple sources and provide objective knowledge.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper proposes a new task in the field of Answering Subjective Induction
Question on Products (SUBJPQA). The answer to this kind of question is
non-unique, but can be interpreted from many perspectives. For example, the
answer to 'whether the phone is heavy' admits a variety of viewpoints. A
satisfactory answer should summarize these subjective opinions from
multiple sources and provide objective knowledge, such as the weight of the
phone. That is quite different from the traditional QA task, in which the
answer to a factoid question is unique and can be found from a single data
source. To address this new task, we propose a three-step method. We first
retrieve all answer-related clues, covering both facts and opinions, from
multiple knowledge sources. Implicit commonsense facts are also collected to
supplement the necessary but missing context. We then capture their relevance
to the question via interactive attention. Next, we design a reinforcement-based
summarizer to aggregate all these knowledgeable clues. Based on a
template-controlled decoder, we can output a comprehensive and
multi-perspective answer. Since the new task lacks an evaluation benchmark,
we construct a large-scale dataset, named SupQA, consisting
of 48,352 samples across 15 product domains. Evaluation results show the
effectiveness of our approach.
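The second step of the pipeline described above, scoring retrieved fact and opinion clues against the question, can be sketched as a simple attention computation. This is a minimal illustration, not the paper's actual model: the function names, embedding dimensions, and scaled dot-product scoring are assumptions for the sake of a runnable example.

```python
# Illustrative sketch: weight retrieved clues by their relevance to the
# question using softmax-normalized scaled dot-product attention.
# All names and shapes here are hypothetical; the paper's actual
# "interactive attention" module is not reproduced.
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def score_clues(question_vec: np.ndarray, clue_vecs: np.ndarray) -> np.ndarray:
    """Return attention weights over clues given a question embedding.

    question_vec: shape (d,)   -- one question embedding
    clue_vecs:    shape (n, d) -- n clue embeddings (facts and opinions)
    """
    d = question_vec.shape[0]
    logits = clue_vecs @ question_vec / np.sqrt(d)  # scaled dot products
    return softmax(logits)

# Toy usage with random embeddings standing in for encoder outputs.
rng = np.random.default_rng(0)
q = rng.normal(size=8)            # question embedding (toy)
clues = rng.normal(size=(5, 8))   # 5 retrieved clues (toy)
weights = score_clues(q, clues)   # weights sum to 1; higher = more relevant
```

A downstream summarizer could then aggregate the clues weighted by these scores before decoding a multi-perspective answer.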
Related papers
- Aspect-oriented Consumer Health Answer Summarization [2.298110639419913]
Community Question-Answering (CQA) forums have revolutionized how people seek information, especially information related to their healthcare needs.
There can be several answers in response to a single query, which makes it hard to grasp the key information related to the specific health concern.
Our research focuses on aspect-based summarization of health answers to address this limitation.
arXiv Detail & Related papers (2024-05-10T07:52:43Z) - SEMQA: Semi-Extractive Multi-Source Question Answering [94.04430035121136]
We introduce a new QA task for answering multi-answer questions by summarizing multiple diverse sources in a semi-extractive fashion.
We create the first dataset of this kind, QuoteSum, with human-written semi-extractive answers to natural and generated questions.
arXiv Detail & Related papers (2023-11-08T18:46:32Z) - Concise Answers to Complex Questions: Summarization of Long-form Answers [27.190319030219285]
We conduct a user study on summarized answers generated from state-of-the-art models and our newly proposed extract-and-decontextualize approach.
We find a large proportion of long-form answers can be adequately summarized by at least one system, while complex and implicit answers are challenging to compress.
We observe that decontextualization improves the quality of the extractive summary, exemplifying its potential in the summarization task.
arXiv Detail & Related papers (2023-05-30T17:59:33Z) - MQAG: Multiple-choice Question Answering and Generation for Assessing
Information Consistency in Summarization [55.60306377044225]
State-of-the-art summarization systems can generate highly fluent summaries.
These summaries, however, may contain factual inconsistencies and/or information not present in the source.
We introduce an alternative scheme based on standard information-theoretic measures in which the information present in the source and summary is directly compared.
arXiv Detail & Related papers (2023-01-28T23:08:25Z) - Modern Question Answering Datasets and Benchmarks: A Survey [5.026863544662493]
Question Answering (QA) is one of the most important natural language processing (NLP) tasks.
It aims to use NLP technologies to generate an answer to a given question from a massive unstructured corpus.
In this paper, we investigate influential QA datasets that have been released in the era of deep learning.
arXiv Detail & Related papers (2022-06-30T05:53:56Z) - AnswerSumm: A Manually-Curated Dataset and Pipeline for Answer
Summarization [73.91543616777064]
Community Question Answering (CQA) fora such as Stack Overflow and Yahoo! Answers contain a rich resource of answers to a wide range of community-based questions.
One goal of answer summarization is to produce a summary that reflects the range of answer perspectives.
This work introduces a novel dataset of 4,631 CQA threads for answer summarization, curated by professional linguists.
arXiv Detail & Related papers (2021-11-11T21:48:02Z) - A Dataset of Information-Seeking Questions and Answers Anchored in
Research Papers [66.11048565324468]
We present a dataset of 5,049 questions over 1,585 Natural Language Processing papers.
Each question is written by an NLP practitioner who read only the title and abstract of the corresponding paper, and the question seeks information present in the full text.
We find that existing models that do well on other QA tasks do not perform well on answering these questions, underperforming humans by at least 27 F1 points when answering them from entire papers.
arXiv Detail & Related papers (2021-05-07T00:12:34Z) - Multi-Perspective Abstractive Answer Summarization [76.10437565615138]
Community Question Answering forums contain a rich resource of answers to a wide range of questions.
The goal of multi-perspective answer summarization is to produce a summary that includes all perspectives of the answer.
This work introduces a novel dataset creation method to automatically create multi-perspective, bullet-point abstractive summaries.
arXiv Detail & Related papers (2021-04-17T13:15:29Z) - Meaningful Answer Generation of E-Commerce Question-Answering [77.89755281215079]
In e-commerce portals, generating answers for product-related questions has become a crucial task.
In this paper, we propose a novel generative neural model, called the Meaningful Product Answer Generator (MPAG)
MPAG alleviates the safe answer problem by taking product reviews, product attributes, and a prototype answer into consideration.
arXiv Detail & Related papers (2020-11-14T14:05:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.