Meaningful Answer Generation of E-Commerce Question-Answering
- URL: http://arxiv.org/abs/2011.07307v1
- Date: Sat, 14 Nov 2020 14:05:30 GMT
- Title: Meaningful Answer Generation of E-Commerce Question-Answering
- Authors: Shen Gao, Xiuying Chen, Zhaochun Ren, Dongyan Zhao and Rui Yan
- Abstract summary: In e-commerce portals, generating answers for product-related questions has become a crucial task.
In this paper, we propose a novel generative neural model, called the Meaningful Product Answer Generator (MPAG)
MPAG alleviates the safe answer problem by taking product reviews, product attributes, and a prototype answer into consideration.
- Score: 77.89755281215079
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In e-commerce portals, generating answers for product-related questions has
become a crucial task. In this paper, we focus on the task of product-aware
answer generation, which learns to generate an accurate and complete answer
from large-scale unlabeled e-commerce reviews and product attributes. However,
the safe answer problem poses a significant challenge to text generation tasks, and
the e-commerce question-answering task is no exception. To generate more meaningful
answers, in this paper, we propose a novel generative neural model, called the
Meaningful Product Answer Generator (MPAG), which alleviates the safe answer
problem by taking product reviews, product attributes, and a prototype answer
into consideration. Product reviews and product attributes are used to provide
meaningful content, while the prototype answer can yield a more diverse answer
pattern. To this end, we propose a novel answer generator with a review
reasoning module and a prototype answer reader. Our key idea is to obtain the
correct question-aware information from a large-scale collection of reviews and
learn how to write a coherent and meaningful answer from an existing prototype
answer. To be more specific, we propose a read-and-write memory consisting of
selective writing units to conduct reasoning among these reviews. We then
employ a prototype reader consisting of comprehensive matching to extract the
answer skeleton from the prototype answer. Finally, we propose an answer editor
to generate the final answer by taking the question and the above parts as
input. Extensive experiments conducted on a real-world dataset collected from an
e-commerce platform show that our model achieves state-of-the-art performance in
terms of both automatic metrics and human evaluations. Human
evaluation also demonstrates that our model can consistently generate specific
and proper answers.
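The abstract sketches a three-step flow: reason over reviews with a read-and-write memory, read a prototype answer to extract a skeleton, and edit the pieces into the final answer. As a rough illustration of that flow only, here is a minimal PyTorch-style sketch; the module names, dimensions, the gated "selective writing" update, and the single-vector skeleton are assumptions made for readability, not the authors' MPAG implementation.

```python
# Minimal, illustrative sketch of the MPAG-style information flow described in
# the abstract: review reasoning over a memory, prototype reading, and answer
# editing. Module names, sizes, and the gated update rule are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MPAGSketch(nn.Module):
    def __init__(self, vocab_size: int, d: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d)
        self.question_enc = nn.GRU(d, d, batch_first=True)
        self.review_enc = nn.GRU(d, d, batch_first=True)
        self.prototype_enc = nn.GRU(d, d, batch_first=True)
        self.write_gate = nn.Linear(3 * d, d)      # "selective writing" gate (assumed form)
        self.write_proj = nn.Linear(2 * d, d)
        self.editor = nn.GRUCell(3 * d, d)         # answer editor decoder cell
        self.out = nn.Linear(d, vocab_size)

    def attend(self, query, keys):
        # Dot-product attention: query [B, d], keys [B, T, d] -> context [B, d]
        scores = torch.bmm(keys, query.unsqueeze(-1)).squeeze(-1)
        return torch.bmm(F.softmax(scores, dim=-1).unsqueeze(1), keys).squeeze(1)

    def forward(self, question, reviews, prototype, max_len: int = 20):
        # question: [B, Tq], reviews: [B, R, Tr], prototype: [B, Tp] (token ids)
        _, q = self.question_enc(self.embed(question))    # final state, [1, B, d]
        q = q.squeeze(0)

        # Review reasoning: fold each review into a single memory vector with a
        # question-conditioned write gate ("selective writing", assumed form).
        memory = torch.zeros_like(q)
        for r in range(reviews.size(1)):
            _, h = self.review_enc(self.embed(reviews[:, r]))
            h = h.squeeze(0)
            gate = torch.sigmoid(self.write_gate(torch.cat([q, h, memory], dim=-1)))
            update = torch.tanh(self.write_proj(torch.cat([h, memory], dim=-1)))
            memory = gate * update + (1 - gate) * memory

        # Prototype reading: attend over the prototype answer to pull out a
        # question-relevant "skeleton" vector (comprehensive matching simplified).
        proto_states, _ = self.prototype_enc(self.embed(prototype))
        skeleton = self.attend(q, proto_states)

        # Answer editing: decode conditioned on question, review memory, skeleton.
        state, logits = q, []
        tok = self.embed(question[:, :1]).squeeze(1)      # first question token as a stand-in start symbol
        for _ in range(max_len):
            state = self.editor(torch.cat([tok, memory, skeleton], dim=-1), state)
            step = self.out(state)
            logits.append(step)
            tok = self.embed(step.argmax(dim=-1))
        return torch.stack(logits, dim=1)                 # [B, max_len, vocab]
```

A real system would attend over review tokens, incorporate product attributes, and train a copy mechanism on top of the editor; the sketch only mirrors the reason / read / edit split named in the abstract.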
Related papers
- Answering Subjective Induction Questions on Products by Summarizing Multi-sources Multi-viewpoints Knowledge [0.04791377777154766]
This paper proposes a new task, Answering Subjective Induction Questions on Products.
The answer to this kind of question is non-unique, but can be interpreted from many perspectives.
A satisfactory answer should summarize these subjective opinions from multiple sources and provide objective knowledge.
arXiv Detail & Related papers (2023-09-12T03:27:08Z)
- How Do We Answer Complex Questions: Discourse Structure of Long-form Answers [51.973363804064704]
We study the functional structure of long-form answers collected from three datasets.
Our main goal is to understand how humans organize information to craft complex answers.
Our work can inspire future research on discourse-level modeling and evaluation of long-form QA systems.
arXiv Detail & Related papers (2022-03-21T15:14:10Z)
- Read before Generate! Faithful Long Form Question Answering with Machine Reading [77.17898499652306]
Long-form question answering (LFQA) aims to generate a paragraph-length answer for a given question.
We propose a new end-to-end framework that jointly models answer generation and machine reading.
arXiv Detail & Related papers (2022-03-01T10:41:17Z)
- Towards Personalized Answer Generation in E-Commerce via Multi-Perspective Preference Modeling [62.049330405736406]
Product Question Answering (PQA) on E-Commerce platforms has attracted increasing attention as it can act as an intelligent online shopping assistant.
Providing the same "completely summarized" answer to every customer is insufficient, since many customers prefer personalized answers with information customized for them.
We propose a novel multi-perspective user preference model for generating personalized answers in PQA.
arXiv Detail & Related papers (2021-12-27T07:51:49Z)
- AnswerSumm: A Manually-Curated Dataset and Pipeline for Answer Summarization [73.91543616777064]
Community Question Answering (CQA) fora such as Stack Overflow and Yahoo! Answers contain a rich resource of answers to a wide range of community-based questions.
One goal of answer summarization is to produce a summary that reflects the range of answer perspectives.
This work introduces a novel dataset of 4,631 CQA threads for answer summarization, curated by professional linguists.
arXiv Detail & Related papers (2021-11-11T21:48:02Z)
- Answering Product-Questions by Utilizing Questions from Other Contextually Similar Products [7.220014320991269]
We propose a novel and complementary approach for predicting the answer for subjective and opinion-based questions.
We measure the contextual similarity between products based on the answers they provide for the same question.
A mixture-of-experts framework is used to predict the answer by aggregating the answers from contextually similar products (a minimal sketch of this aggregation idea follows below).
arXiv Detail & Related papers (2021-05-19T07:05:00Z)
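The entry above measures how similar two products are by how consistently they answer the same questions, then aggregates answers from similar products. The following is a minimal sketch of that aggregation idea under simplifying assumptions (binary yes/no answers, agreement-rate similarity, similarity used directly as mixture weights); it is not the paper's mixture-of-experts model.

```python
# Sketch of the "contextually similar products" idea: weight other products'
# answers to a question by how similarly those products answered shared past
# questions. Binary answers and this gating scheme are simplifying assumptions.
def product_similarity(answers_a: dict, answers_b: dict) -> float:
    """Agreement rate over questions both products have answered (answers in {0, 1})."""
    shared = set(answers_a) & set(answers_b)
    if not shared:
        return 0.0
    agree = sum(1 for q in shared if answers_a[q] == answers_b[q])
    return agree / len(shared)


def predict_answer(target: str, question: str, history: dict) -> float:
    """
    history maps product_id -> {question: 0/1 answer}.
    Returns a soft yes-probability for `question` on `target`, aggregated from
    similar products that already have an answer (mixture-of-experts style gating).
    """
    weights, votes = [], []
    for other, answers in history.items():
        if other == target or question not in answers:
            continue
        sim = product_similarity(history.get(target, {}), answers)
        if sim > 0:
            weights.append(sim)
            votes.append(answers[question])
    if not weights:
        return 0.5  # no evidence from similar products: stay uncertain
    return sum(w * v for w, v in zip(weights, votes)) / sum(weights)


if __name__ == "__main__":
    history = {
        "camera_a": {"is it waterproof?": 1, "does it have wifi?": 1},
        "camera_b": {"is it waterproof?": 1, "does it have wifi?": 1, "good for kids?": 1},
        "camera_c": {"is it waterproof?": 0, "does it have wifi?": 0, "good for kids?": 0},
    }
    print(predict_answer("camera_a", "good for kids?", history))  # leans toward "yes"
```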
- Multi-Perspective Abstractive Answer Summarization [76.10437565615138]
Community Question Answering forums contain a rich resource of answers to a wide range of questions.
The goal of multi-perspective answer summarization is to produce a summary that includes all perspectives of the answer.
This work introduces a novel dataset creation method to automatically create multi-perspective, bullet-point abstractive summaries.
arXiv Detail & Related papers (2021-04-17T13:15:29Z)
- E-commerce Query-based Generation based on User Review [1.484852576248587]
We propose a novel seq2seq-based text generation model to generate answers to a user's question based on reviews posted by previous users.
Given a user question and/or target sentiment polarity, we extract aspects of interest and generate an answer that summarizes previous relevant user reviews.
arXiv Detail & Related papers (2020-11-11T04:58:31Z)
- Opinion-aware Answer Generation for Review-driven Question Answering in E-Commerce [39.08269647808958]
The rich information about personal opinions in product reviews is underutilized in current generation-based review-driven QA studies.
In this paper, we tackle opinion-aware answer generation by jointly learning answer generation and opinion mining tasks with a unified model (a rough sketch of this multi-task wiring follows below).
Experimental results show that our method achieves superior performance on real-world E-Commerce QA datasets.
arXiv Detail & Related papers (2020-08-27T07:54:45Z)
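The entry above jointly learns answer generation and opinion mining in one unified model. The snippet below illustrates one common way to wire such multi-task training: a shared encoder feeding a generation head and an opinion-classification head with a weighted joint loss. The layout and the loss weight are assumptions about the general pattern, not the paper's architecture.

```python
# Illustrative multi-task wiring: shared encoder, two heads, weighted joint loss.
# A language-model-style head stands in for a full answer decoder here.
import torch.nn as nn


class OpinionAwareQA(nn.Module):
    def __init__(self, vocab_size: int, num_opinion_labels: int = 3, d: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d)
        self.encoder = nn.GRU(d, d, batch_first=True)          # shared encoder
        self.generator = nn.Linear(d, vocab_size)              # answer-generation head
        self.opinion_head = nn.Linear(d, num_opinion_labels)   # opinion-mining head

    def forward(self, tokens):
        states, last = self.encoder(self.embed(tokens))
        return self.generator(states), self.opinion_head(last.squeeze(0))


def joint_loss(gen_logits, gen_targets, op_logits, op_targets, alpha: float = 0.5):
    # Weighted sum of the generation and opinion-mining objectives.
    ce = nn.CrossEntropyLoss()
    gen = ce(gen_logits.reshape(-1, gen_logits.size(-1)), gen_targets.reshape(-1))
    opinion = ce(op_logits, op_targets)
    return gen + alpha * opinion
```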
- ProtoQA: A Question Answering Dataset for Prototypical Common-Sense Reasoning [35.6375880208001]
This paper introduces a new question answering dataset for training and evaluating common sense reasoning capabilities of artificial intelligence systems.
The training set is gathered from an existing set of questions played on the long-running international game show Family Feud.
We also propose a generative evaluation task where a model has to output a ranked list of answers, ideally covering prototypical answers for a question (see the sketch below).
arXiv Detail & Related papers (2020-05-02T09:40:05Z)
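For the ProtoQA-style generative evaluation described above, a model emits a ranked list of answers that should cover the prototypical gold answers. Below is a simplified scoring sketch that credits a ranked list for the share of gold answer weight it covers; exact string matching and this particular credit formula are simplifications, not the official ProtoQA metrics.

```python
# Simplified sketch of ranked-list coverage scoring for prototypical answers.
def ranked_list_credit(predictions, gold_clusters, k=10):
    """
    predictions: ranked list of answer strings (best first).
    gold_clusters: dict mapping a canonical gold answer to its count
                   (e.g. how many survey respondents gave it).
    Returns the fraction of total gold weight covered by the top-k predictions,
    counting each gold cluster at most once.
    """
    total = sum(gold_clusters.values())
    covered, used = 0, set()
    for answer in predictions[:k]:
        key = answer.strip().lower()
        if key in gold_clusters and key not in used:
            covered += gold_clusters[key]
            used.add(key)
    return covered / total if total else 0.0


if __name__ == "__main__":
    gold = {"keys": 40, "phone": 30, "wallet": 20, "glasses": 10}
    preds = ["phone", "keys", "sunglasses", "wallet"]
    print(ranked_list_credit(preds, gold, k=3))  # 0.7: "phone" and "keys" matched
```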