Product Question Answering in E-Commerce: A Survey
- URL: http://arxiv.org/abs/2302.08092v2
- Date: Wed, 3 May 2023 07:29:41 GMT
- Title: Product Question Answering in E-Commerce: A Survey
- Authors: Yang Deng, Wenxuan Zhang, Qian Yu, Wai Lam
- Abstract summary: Product question answering (PQA) aims to automatically provide instant responses to customers' questions on E-Commerce platforms.
PQA exhibits unique challenges such as the subjectivity and reliability of user-generated content.
This paper systematically reviews existing research efforts on PQA.
- Score: 43.12949215659755
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Product question answering (PQA), which aims to automatically provide
instant responses to customers' questions on E-Commerce platforms, has drawn
increasing attention in recent years. Compared with typical QA problems, PQA
exhibits unique challenges, such as the subjectivity and reliability of
user-generated content on E-Commerce platforms. Therefore, various problem
settings and novel methods have been proposed to capture these special
characteristics. In this paper, we systematically review existing research
efforts on PQA. Specifically, we categorize PQA studies into four problem
settings according to the form of the provided answers. For each setting, we
analyze the pros and cons and present the existing datasets and evaluation
protocols. We further summarize the most significant challenges that
distinguish PQA from general QA applications and discuss their corresponding
solutions. Finally, we conclude by outlining several promising future
directions.
Related papers
- KaPQA: Knowledge-Augmented Product Question-Answering [59.096607961704656]
We introduce two product question-answering (QA) datasets focused on Adobe Acrobat and Photoshop products.
We also propose a novel knowledge-driven RAG-QA framework to enhance the performance of the models in the product QA task.
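The paper's knowledge-driven RAG-QA framework is not detailed here; purely as an illustration of the general retrieve-then-read pattern that RAG-QA systems follow, a toy sketch (with a word-overlap retriever and a placeholder reader standing in for a real retriever and LLM) might look like:

```python
def retrieve(query, passages, top_k=1):
    """Rank passages by word overlap with the query (a stand-in for a real retriever)."""
    q = set(query.lower().split())
    ranked = sorted(passages,
                    key=lambda p: len(q & set(p.lower().split())),
                    reverse=True)
    return ranked[:top_k]

def answer(query, passages):
    """Retrieve supporting passages, then 'read' them.

    A real RAG-QA system would pass the retrieved context to an LLM here;
    this placeholder reader simply returns the best passage verbatim.
    """
    context = retrieve(query, passages)
    return context[0]
```

In a production system the overlap scorer would be replaced by a dense or hybrid retriever over product documentation, and the reader by a generative model conditioned on the retrieved context.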
arXiv Detail & Related papers (2024-07-22T22:14:56Z)
- PACIFIC: Towards Proactive Conversational Question Answering over Tabular and Textual Data in Finance [96.06505049126345]
We present a new dataset, named PACIFIC. Compared with existing CQA datasets, PACIFIC exhibits three key features: (i) proactivity, (ii) numerical reasoning, and (iii) hybrid context of tables and text.
A new task is defined accordingly to study Proactive Conversational Question Answering (PCQA), which combines clarification question generation and CQA.
A unified model, UniPCQA, performs multi-task learning over all sub-tasks in PCQA and incorporates a simple ensemble strategy that cross-validates the top-$k$ sampled Seq2Seq outputs to alleviate error propagation in the multi-task learning.
arXiv Detail & Related papers (2022-10-17T08:06:56Z)
- RealTime QA: What's the Answer Right Now? [137.04039209995932]
We introduce REALTIME QA, a dynamic question answering (QA) platform that announces questions and evaluates systems on a regular basis.
We build strong baseline models upon large pretrained language models, including GPT-3 and T5.
GPT-3 tends to return outdated answers when retrieved documents do not provide sufficient information to find an answer.
arXiv Detail & Related papers (2022-07-27T07:26:01Z)
- ProQA: Structural Prompt-based Pre-training for Unified Question Answering [84.59636806421204]
ProQA is a unified QA paradigm that solves various tasks through a single model.
It concurrently models the knowledge generalization for all QA tasks while keeping the knowledge customization for every specific QA task.
ProQA consistently boosts performance in full-data fine-tuning, few-shot learning, and zero-shot testing scenarios.
arXiv Detail & Related papers (2022-05-09T04:59:26Z)
- The University of Texas at Dallas HLTRI's Participation in EPIC-QA: Searching for Entailed Questions Revealing Novel Answer Nuggets [1.0957528713294875]
This paper describes our participation in both tasks of EPIC-QA, targeting: (1) Expert QA and (2) Consumer QA.
Our methods used a multi-phase neural Information Retrieval (IR) system that combines BM25, BERT, and T5, together with the idea of considering entailment relations between the original question and questions automatically generated from candidate answer sentences.
Our system, called SEaRching for Entailed QUestions revealing NOVel nuggets of Answers (SER4EQUNOVA), produced promising results in both EPIC-QA tasks, excelling in the Expert QA task.
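Of the components named above, only BM25 reduces to a self-contained scoring formula; the neural stages are not reproducible from the abstract. A minimal sketch of Okapi BM25 ranking, assuming whitespace tokenization and the common defaults $k_1 = 1.5$, $b = 0.75$:

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each document against the query with Okapi BM25."""
    tokenized = [d.lower().split() for d in docs]
    N = len(tokenized)
    avgdl = sum(len(d) for d in tokenized) / N
    q_terms = query.lower().split()
    # document frequency of each query term
    df = {t: sum(1 for d in tokenized if t in d) for t in q_terms}
    scores = []
    for doc in tokenized:
        tf = Counter(doc)
        s = 0.0
        for t in q_terms:
            if df[t] == 0:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            num = tf[t] * (k1 + 1)
            den = tf[t] + k1 * (1 - b + b * len(doc) / avgdl)
            s += idf * num / den
        scores.append(s)
    return scores
```

In a multi-phase pipeline like the one described, such a lexical scorer typically produces the first-pass candidate set that the BERT- and T5-based stages then rerank.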
arXiv Detail & Related papers (2021-12-28T00:14:46Z)
- Conversational Question Answering: A Survey [18.447856993867788]
This survey presents a comprehensive review of state-of-the-art research trends in Conversational Question Answering (CQA).
Our findings show a trend shift from single-turn to multi-turn QA, which empowers the field of Conversational AI from different perspectives.
arXiv Detail & Related papers (2021-06-02T01:06:34Z)
- NoiseQA: Challenge Set Evaluation for User-Centric Question Answering [68.67783808426292]
We show that components in the pipeline that precede an answering engine can introduce varied and considerable sources of error.
We conclude that there is substantial room for progress before QA systems can be effectively deployed.
arXiv Detail & Related papers (2021-02-16T18:35:29Z)
- Biomedical Question Answering: A Comprehensive Review [19.38459023509541]
Question Answering (QA) is a benchmark Natural Language Processing (NLP) task where models predict the answer for a given question using related documents, images, knowledge bases and question-answer pairs.
For specific domains like biomedicine, QA systems are still rarely used in real-life settings.
Biomedical QA (BQA), as an emerging QA task, enables innovative applications to effectively perceive, access and understand complex biomedical knowledge.
arXiv Detail & Related papers (2021-02-10T06:16:35Z)
- Summary-Oriented Question Generation for Informational Queries [23.72999724312676]
We aim to produce self-explanatory questions that focus on main document topics and are answerable with variable-length passages as appropriate.
Our model shows SOTA performance on SQ generation on the NQ dataset (20.1 BLEU-4).
We further apply our model to out-of-domain news articles, evaluating with a QA system due to the lack of gold questions, and demonstrate that it produces better SQs for news articles, with further confirmation via a human evaluation.
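BLEU-4, the metric reported above, is the brevity-penalized geometric mean of clipped 1- to 4-gram precisions. A minimal sentence-level sketch, unsmoothed and with whitespace tokenization (published scores typically use corpus-level BLEU with standardized tokenization, so exact numbers will differ):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu4(candidate, reference):
    """Sentence-level BLEU-4 against a single reference, without smoothing."""
    c, r = candidate.split(), reference.split()
    log_p = 0.0
    for n in range(1, 5):
        cand, ref = ngrams(c, n), ngrams(r, n)
        # clipped n-gram matches: each candidate n-gram counts at most
        # as often as it appears in the reference
        overlap = sum(min(cnt, ref[g]) for g, cnt in cand.items())
        total = sum(cand.values())
        if total == 0 or overlap == 0:
            return 0.0  # unsmoothed: any zero precision zeroes the score
        log_p += math.log(overlap / total) / 4
    # brevity penalty discourages overly short candidates
    bp = 1.0 if len(c) >= len(r) else math.exp(1 - len(r) / len(c))
    return bp * math.exp(log_p)
```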
arXiv Detail & Related papers (2020-10-19T17:30:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site makes no guarantee as to the quality of the information above and is not responsible for any consequences of its use.