Less is More: Rejecting Unreliable Reviews for Product Question Answering
- URL: http://arxiv.org/abs/2007.04526v1
- Date: Thu, 9 Jul 2020 03:08:55 GMT
- Title: Less is More: Rejecting Unreliable Reviews for Product Question Answering
- Authors: Shiwei Zhang, Xiuzhen Zhang, Jey Han Lau, Jeffrey Chan, and Cecile Paris
- Abstract summary: Recent studies show that product reviews are a good source for real-time, automatic product question answering.
In this paper, we focus on the issue of answerability and answer reliability for PQA using reviews.
We propose a conformal prediction based framework to improve the reliability of PQA systems.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Promptly and accurately answering questions on products is important for e-commerce applications. Manually answering product questions (e.g. on community question answering platforms) results in slow responses and does not scale. Recent studies show that product reviews are a good source for real-time, automatic product question answering (PQA). In the literature, PQA is formulated as a retrieval problem with the goal of searching for the most relevant reviews to answer a given product question. In this paper, we focus on the issue of answerability and answer reliability for PQA using reviews. Our investigation is based on the intuition that many questions may not be answerable with a finite set of reviews. When a question is not answerable, a system should return nil answers rather than a list of irrelevant reviews, which can have a significant negative impact on user experience. Moreover, for answerable questions, only the most relevant reviews that answer the question should be included in the result. We propose a conformal prediction based framework to improve the reliability of PQA systems, where we reject unreliable answers so that the returned results are more concise and more accurate at answering the product question, including returning nil answers for unanswerable questions. Experiments on a widely used Amazon dataset show encouraging results for our proposed framework. More broadly, our results demonstrate a novel and effective application of conformal methods to a retrieval task.
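The abstract does not spell out the rejection mechanism, but the general recipe of split conformal prediction maps naturally onto this setting. The following is a minimal sketch under our own assumptions (nonconformity taken as one minus a retrieval model's relevance score, a 10% error level alpha, and hypothetical function names); it illustrates the technique, not the authors' implementation.

```python
import numpy as np

def calibrate_threshold(cal_nonconformity, alpha=0.1):
    """Split conformal calibration (illustrative, not the paper's code).

    cal_nonconformity: nonconformity scores (assumed here to be
    1 - relevance) of the known-correct review for each held-out
    calibration question. Returns the finite-sample-adjusted
    (1 - alpha) empirical quantile used as the rejection threshold.
    """
    n = len(cal_nonconformity)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(cal_nonconformity, level, method="higher")

def answer(review_relevance, threshold):
    """Return indices of reviews reliable enough to answer with.

    An empty list is the nil answer: no review passes the calibrated
    reliability test, so the question is treated as unanswerable
    from the available reviews.
    """
    nonconformity = 1.0 - np.asarray(review_relevance)
    return [int(i) for i in np.flatnonzero(nonconformity <= threshold)]

# Hypothetical usage: relevance scores come from any upstream
# retrieval model scored on [0, 1].
tau = calibrate_threshold(np.array([0.2, 0.35, 0.1, 0.4, 0.25]), alpha=0.1)
print(answer([0.9, 0.55, 0.05], tau))  # -> [0]; low-relevance reviews rejected
```

Under exchangeability, the calibrated threshold rejects the correct review at most a fraction alpha of the time, while the returned set shrinks, or empties into a nil answer, as the evidence weakens; this matches the paper's stated goal of concise results and nil answers for unanswerable questions.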
Related papers
- I Could've Asked That: Reformulating Unanswerable Questions
We evaluate open-source and proprietary models for reformulating unanswerable questions.
GPT-4 and Llama2-7B successfully reformulate questions only 26% and 12% of the time, respectively.
We publicly release the benchmark and the code to reproduce the experiments.
arXiv Detail & Related papers (2024-07-24T17:59:07Z)
- Controllable Decontextualization of Yes/No Question and Answers into Factual Statements
We address the problem of controllable rewriting of answers to polar questions into decontextualized and succinct factual statements.
We propose a Transformer sequence-to-sequence model that utilizes soft constraints to ensure controllable rewriting.
arXiv Detail & Related papers (2024-01-18T07:52:12Z)
- Answering Ambiguous Questions with a Database of Questions, Answers, and Revisions
We present a new state-of-the-art for answering ambiguous questions that exploits a database of unambiguous questions generated from Wikipedia.
Our method improves performance by 15% on recall measures and 10% on measures which evaluate disambiguating questions from predicted outputs.
arXiv Detail & Related papers (2023-08-16T20:23:16Z)
- Answering Unanswered Questions through Semantic Reformulations in Spoken QA
Spoken Question Answering (QA) is a key feature of voice assistants, usually backed by multiple QA systems.
We analyze failed QA requests to identify core challenges: lexical gaps, proposition types, complex syntactic structure, and high specificity.
We propose a Semantic Question Reformulation (SURF) model offering three linguistically grounded operations (repair, syntactic reshaping, generalization) to rewrite questions and facilitate answering.
arXiv Detail & Related papers (2023-05-27T07:19:27Z)
- RealTime QA: What's the Answer Right Now?
We introduce REALTIME QA, a dynamic question answering (QA) platform that announces questions and evaluates systems on a regular basis.
We build strong baseline models upon large pretrained language models, including GPT-3 and T5.
GPT-3 tends to return outdated answers when retrieved documents do not provide sufficient information to find an answer.
arXiv Detail & Related papers (2022-07-27T07:26:01Z)
- Improving the Question Answering Quality using Answer Candidate Filtering based on Natural-Language Features
We address the problem of how the Question Answering (QA) quality of a given system can be improved.
Our main contribution is an approach capable of identifying wrong answers provided by a QA system.
In particular, our approach has shown its potential by removing, in many cases, the majority of incorrect answers.
arXiv Detail & Related papers (2021-12-10T11:09:44Z)
- Answering Product-Questions by Utilizing Questions from Other Contextually Similar Products
We propose a novel and complementary approach for predicting the answer to subjective and opinion-based questions.
We measure the contextual similarity between products based on the answers they provide for the same question.
A mixture-of-experts framework is used to predict the answer by aggregating the answers from contextually similar products (a minimal aggregation sketch follows this list).
arXiv Detail & Related papers (2021-05-19T07:05:00Z)
- GooAQ: Open Question Answering with Diverse Answer Types
We present GooAQ, a large-scale dataset with a variety of answer types.
This dataset contains over 5 million questions and 3 million answers collected from Google.
arXiv Detail & Related papers (2021-04-18T05:40:39Z)
- Meaningful Answer Generation of E-Commerce Question-Answering
In e-commerce portals, generating answers for product-related questions has become a crucial task.
In this paper, we propose a novel generative neural model called the Meaningful Product Answer Generator (MPAG).
MPAG alleviates the safe answer problem by taking product reviews, product attributes, and a prototype answer into consideration.
arXiv Detail & Related papers (2020-11-14T14:05:30Z)
- Opinion-aware Answer Generation for Review-driven Question Answering in E-Commerce
The rich information about personal opinions in product reviews is underutilized in current generation-based review-driven QA studies.
In this paper, we tackle opinion-aware answer generation by jointly learning answer generation and opinion mining tasks with a unified model.
Experimental results show that our method achieves superior performance on real-world E-Commerce QA datasets.
arXiv Detail & Related papers (2020-08-27T07:54:45Z)
- Review-guided Helpful Answer Identification in E-commerce
Product-specific community question answering platforms can greatly help address the concerns of potential customers.
The user-provided answers on such platforms often vary greatly in quality.
Helpfulness votes from the community can indicate the overall quality of the answer, but they are often missing.
arXiv Detail & Related papers (2020-03-13T11:34:29Z)
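As a footnote to the mixture-of-experts entry above (Answering Product-Questions by Utilizing Questions from Other Contextually Similar Products), a similarity-weighted vote over neighboring products' answers could look like the sketch below. The binary yes/no encoding, the gating by raw similarity, and all names are illustrative assumptions, not that paper's actual model.

```python
def aggregate_answer(neighbor_answers, similarities):
    """Hypothetical mixture-of-experts vote for a yes/no product question.

    neighbor_answers: +1 (yes) / -1 (no) answers that contextually
    similar products gave to the same question.
    similarities: non-negative contextual-similarity weights acting
    as the expert gates.
    """
    total = sum(similarities)
    if total == 0:
        return None  # no reliable neighbors; abstain
    score = sum(w * a for w, a in zip(similarities, neighbor_answers)) / total
    return "yes" if score > 0 else "no"

print(aggregate_answer([+1, -1, +1], [0.7, 0.2, 0.5]))  # -> "yes"
```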