Answering Product-Questions by Utilizing Questions from Other
Contextually Similar Products
- URL: http://arxiv.org/abs/2105.08956v1
- Date: Wed, 19 May 2021 07:05:00 GMT
- Title: Answering Product-Questions by Utilizing Questions from Other
Contextually Similar Products
- Authors: Ohad Rozen, David Carmel, Avihai Mejer, Vitaly Mirkis, and Yftah Ziser
- Abstract summary: We propose a novel and complementary approach for predicting the answer for subjective and opinion-based questions.
We measure the contextual similarity between products based on the answers they provide for the same question.
A mixture-of-expert framework is used to predict the answer by aggregating the answers from contextually similar products.
- Score: 7.220014320991269
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Predicting the answer to a product-related question is an emerging field of
research that recently attracted a lot of attention. Answering subjective and
opinion-based questions is most challenging due to the dependency on
customer-generated content. Previous works mostly focused on review-aware
answer prediction; however, these approaches fail for new or unpopular
products, having no (or only a few) reviews at hand. In this work, we propose a
novel and complementary approach for predicting the answer for such questions,
based on the answers for similar questions asked on similar products. We
measure the contextual similarity between products based on the answers they
provide for the same question. A mixture-of-expert framework is used to predict
the answer by aggregating the answers from contextually similar products.
Empirical results demonstrate that our model outperforms strong baselines on
some segments of questions, namely those that have roughly ten or more similar
resolved questions in the corpus. We additionally publish two large-scale
datasets used in this work, one is of similar product question pairs, and the
second is of product question-answer pairs.
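The abstract's core idea — score product similarity by answer agreement on shared questions, then aggregate neighbors' answers with similarity as the expert weight — can be illustrated with a minimal sketch. All product names, questions, and the agreement-based similarity below are toy assumptions for illustration; the paper's actual similarity measure and mixture-of-experts weighting are learned and may differ substantially.

```python
# Toy sketch: predict a yes/no answer for a new product by weighting
# answers from contextually similar products. Illustrative only.
from collections import defaultdict

# product_id -> {question: resolved answer}
resolved = {
    "laptop_a":   {"is it quiet?": "yes", "is it light?": "yes", "does it run hot?": "no"},
    "laptop_b":   {"is it quiet?": "yes", "is it light?": "no",  "does it run hot?": "yes"},
    "laptop_c":   {"is it quiet?": "no",  "is it light?": "no",  "does it run hot?": "yes"},
    "laptop_new": {"is it quiet?": "yes", "is it light?": "yes"},  # new product, few resolved Qs
}

def contextual_similarity(p1, p2):
    """Fraction of shared questions on which the two products agree --
    the abstract's notion that products are contextually similar when
    they provide the same answers to the same questions."""
    shared = set(resolved[p1]) & set(resolved[p2])
    if not shared:
        return 0.0
    agree = sum(resolved[p1][q] == resolved[p2][q] for q in shared)
    return agree / len(shared)

def predict_answer(target, question, neighbors):
    """Mixture-of-experts style vote: each neighbor that has resolved
    the question contributes its answer, weighted by similarity."""
    votes = defaultdict(float)
    for p in neighbors:
        if question in resolved.get(p, {}):
            votes[resolved[p][question]] += contextual_similarity(target, p)
    return max(votes, key=votes.get) if votes else None

# "laptop_new" has no answer for "does it run hot?"; its closest
# neighbor (laptop_a, full agreement) dominates the weighted vote.
print(predict_answer("laptop_new", "does it run hot?",
                     ["laptop_a", "laptop_b", "laptop_c"]))  # -> no
```

The key design point reflected here is that the neighbor weight comes from answer agreement rather than textual product similarity, which is why the approach still works for products with few or no reviews, provided some questions have been resolved.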
Related papers
- Answering Ambiguous Questions with a Database of Questions, Answers, and
Revisions [95.92276099234344]
We present a new state-of-the-art for answering ambiguous questions that exploits a database of unambiguous questions generated from Wikipedia.
Our method improves performance by 15% on recall measures and 10% on measures which evaluate disambiguating questions from predicted outputs.
arXiv Detail & Related papers (2023-08-16T20:23:16Z)
- CREPE: Open-Domain Question Answering with False Presuppositions [92.20501870319765]
We introduce CREPE, a QA dataset containing a natural distribution of presupposition failures from online information-seeking forums.
We find that 25% of questions contain false presuppositions, and provide annotations for these presuppositions and their corrections.
We show that adaptations of existing open-domain QA models can find presuppositions moderately well, but struggle when predicting whether a presupposition is factually correct.
arXiv Detail & Related papers (2022-11-30T18:54:49Z)
- Conversational QA Dataset Generation with Answer Revision [2.5838973036257458]
We introduce a novel framework that extracts question-worthy phrases from a passage and then generates corresponding questions considering previous conversations.
Our framework revises the extracted answers after generating questions so that answers exactly match paired questions.
arXiv Detail & Related papers (2022-09-23T04:05:38Z)
- AnswerSumm: A Manually-Curated Dataset and Pipeline for Answer Summarization [73.91543616777064]
Community Question Answering (CQA) forums such as Stack Overflow and Yahoo! Answers contain a rich resource of answers to a wide range of community-based questions.
One goal of answer summarization is to produce a summary that reflects the range of answer perspectives.
This work introduces a novel dataset of 4,631 CQA threads for answer summarization, curated by professional linguists.
arXiv Detail & Related papers (2021-11-11T21:48:02Z)
- GooAQ: Open Question Answering with Diverse Answer Types [63.06454855313667]
We present GooAQ, a large-scale dataset with a variety of answer types.
This dataset contains over 5 million questions and 3 million answers collected from Google.
arXiv Detail & Related papers (2021-04-18T05:40:39Z)
- Meaningful Answer Generation of E-Commerce Question-Answering [77.89755281215079]
In e-commerce portals, generating answers for product-related questions has become a crucial task.
In this paper, we propose a novel generative neural model, called the Meaningful Product Answer Generator (MPAG)
MPAG alleviates the safe answer problem by taking product reviews, product attributes, and a prototype answer into consideration.
arXiv Detail & Related papers (2020-11-14T14:05:30Z)
- Opinion-aware Answer Generation for Review-driven Question Answering in E-Commerce [39.08269647808958]
Rich information about personal opinions in product reviews is underutilized in current generation-based review-driven QA studies.
In this paper, we tackle opinion-aware answer generation by jointly learning answer generation and opinion mining tasks with a unified model.
Experimental results show that our method achieves superior performance in real-world E-Commerce QA datasets.
arXiv Detail & Related papers (2020-08-27T07:54:45Z)
- Less is More: Rejecting Unreliable Reviews for Product Question Answering [20.821416803824295]
Recent studies show that product reviews are a good source for real-time, automatic product question answering.
In this paper, we focus on the issue of answerability and answer reliability for PQA using reviews.
We propose a conformal prediction based framework to improve the reliability of PQA systems.
arXiv Detail & Related papers (2020-07-09T03:08:55Z)
- Match$^2$: A Matching over Matching Model for Similar Question Identification [74.7142127303489]
Community Question Answering (CQA) has become a primary means for people to acquire knowledge, where people are free to ask questions or submit answers.
Similar question identification becomes a core task in CQA which aims to find a similar question from the archived repository whenever a new question is asked.
It has long been a challenge to properly measure the similarity between two questions due to the inherent variation of natural language, i.e., there could be different ways to ask the same question, or different questions sharing similar expressions.
Traditional methods typically take a one-side usage, which leverages the answer as some expanded representation of the
arXiv Detail & Related papers (2020-06-21T05:59:34Z)
- Review-guided Helpful Answer Identification in E-commerce [38.276241153439955]
Product-specific community question answering platforms can greatly help address the concerns of potential customers.
The user-provided answers on such platforms often vary a lot in their qualities.
Helpfulness votes from the community can indicate the overall quality of the answer, but they are often missing.
arXiv Detail & Related papers (2020-03-13T11:34:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.