Natural Language Processing in Customer Service: A Systematic Review
- URL: http://arxiv.org/abs/2212.09523v1
- Date: Fri, 16 Dec 2022 18:17:07 GMT
- Title: Natural Language Processing in Customer Service: A Systematic Review
- Authors: Malak Mashaabi, Areej Alotaibi, Hala Qudaih, Raghad Alnashwan and Hend Al-Khalifa
- Abstract summary: The review examines existing research on the use of NLP technology in customer service.
It includes papers from five major scientific databases.
Twitter was the second most commonly used dataset.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Artificial intelligence and natural language processing (NLP) are
increasingly being used in customer service to interact with users and answer
their questions. The goal of this systematic review is to examine existing
research on the use of NLP technology in customer service, including the
research domain, applications, datasets used, and evaluation methods. The
review also looks at the future direction of the field and any significant
limitations. The review covers the time period from 2015 to 2022 and includes
papers from five major scientific databases. Chatbots and question-answering
systems were found to be used in 10 main fields, with the most common use in
general, social networking, and e-commerce areas. Twitter was the second most
commonly used dataset, and most studies also built their own original
datasets. Accuracy, precision, recall, and F1 score were the most common
evaluation metrics. Future work aims to improve performance, deepen the
understanding of user behavior and emotions, and address limitations such as
the volume, diversity, and quality of datasets. The review also covers
research across different spoken languages, models, and techniques.
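As a quick illustration of the evaluation metrics the review reports as most common, here is a minimal Python sketch computing them with scikit-learn; the labels below are hypothetical and stand in for the output of a binary customer-intent classifier, not data from any paper in the review.

    # Minimal sketch: accuracy, precision, recall, and F1, the metrics the
    # review identifies as most common in NLP customer-service evaluation.
    from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

    # Hypothetical gold labels and predictions for a binary intent classifier
    # (1 = complaint, 0 = other); illustrative values only.
    y_true = [1, 0, 1, 1, 0, 1, 0, 0]
    y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

    print("accuracy :", accuracy_score(y_true, y_pred))   # (TP + TN) / total
    print("precision:", precision_score(y_true, y_pred))  # TP / (TP + FP)
    print("recall   :", recall_score(y_true, y_pred))     # TP / (TP + FN)
    print("f1       :", f1_score(y_true, y_pred))         # harmonic mean of P and R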
Related papers
- Towards Robust Evaluation: A Comprehensive Taxonomy of Datasets and Metrics for Open Domain Question Answering in the Era of Large Language Models [0.0]
Open Domain Question Answering (ODQA) within natural language processing involves building systems that answer factual questions using large-scale knowledge corpora.
High-quality datasets are used to train models on realistic scenarios.
Standardized metrics facilitate comparisons between different ODQA systems.
arXiv Detail & Related papers (2024-06-19T05:43:02Z) - UltraFeedback: Boosting Language Models with Scaled AI Feedback [99.4633351133207]
We present UltraFeedback, a large-scale, high-quality, and diversified AI feedback dataset.
Our work validates the effectiveness of scaled AI feedback data in constructing strong open-source chat language models.
arXiv Detail & Related papers (2023-10-02T17:40:01Z) - 5-Star Hotel Customer Satisfaction Analysis Using Hybrid Methodology [0.0]
Our research suggests a new way to identify factors of customer satisfaction from review data.
Unlike many previous studies on customer satisfaction, this research takes a novel hybrid approach.
arXiv Detail & Related papers (2022-09-26T04:53:10Z) - Federated Learning Meets Natural Language Processing: A Survey [12.224792145700562]
Federated Learning aims to train machine learning models across multiple decentralized edge devices (e.g., mobile phones) or servers without sacrificing local data privacy.
Recent Natural Language Processing techniques rely on deep learning and large pre-trained language models.
arXiv Detail & Related papers (2021-07-27T05:07:48Z) - Can I Be of Further Assistance? Using Unstructured Knowledge Access to
Improve Task-oriented Conversational Modeling [39.60614611655266]
This work focuses on responding to user turns that fall beyond API coverage by incorporating external, unstructured knowledge sources.
We introduce novel data augmentation methods for the first two steps and demonstrate that using information extracted from the dialogue context improves knowledge selection and end-to-end performance.
arXiv Detail & Related papers (2021-06-16T23:31:42Z) - Domain Generalization: A Survey [146.68420112164577]
Domain generalization (DG) aims to achieve out-of-distribution (OOD) generalization by using only source-domain data for model learning.
For the first time, a comprehensive literature review is provided to summarize the ten-year development of DG.
arXiv Detail & Related papers (2021-03-03T16:12:22Z) - Advances and Challenges in Conversational Recommender Systems: A Survey [133.93908165922804]
We provide a systematic review of the techniques used in current conversational recommender systems (CRSs).
We summarize the key challenges of developing CRSs into five directions.
These research directions involve multiple research fields like information retrieval (IR), natural language processing (NLP), and human-computer interaction (HCI).
arXiv Detail & Related papers (2021-01-23T08:53:15Z) - Mining Implicit Relevance Feedback from User Behavior for Web Question
Answering [92.45607094299181]
We present the first study to explore the correlation between user behavior and passage relevance.
Our approach significantly improves the accuracy of passage ranking without extra human-labeled data.
In practice, this work has proved effective in substantially reducing the human labeling cost for the QA service in a global commercial search engine.
arXiv Detail & Related papers (2020-06-13T07:02:08Z) - Improving time use measurement with personal big data collection -- the
experience of the European Big Data Hackathon 2019 [62.997667081978825]
This article assesses the experience with i-Log at the European Big Data Hackathon 2019, a satellite event of the New Techniques and Technologies for Statistics (NTTS) conference, organised by Eurostat.
i-Log is a system that captures personal big data from smartphones' internal sensors for use in time-use measurement.
arXiv Detail & Related papers (2020-04-24T18:40:08Z) - Teddy: A System for Interactive Review Analysis [17.53582677866512]
Data scientists analyze reviews by developing rules and models to extract, aggregate, and understand information embedded in the review text.
Teddy is an interactive system that enables data scientists to quickly obtain insights from reviews and improve their extraction and modeling pipelines.
arXiv Detail & Related papers (2020-01-15T08:19:01Z) - ORB: An Open Reading Benchmark for Comprehensive Evaluation of Machine
Reading Comprehension [53.037401638264235]
We present an evaluation server, ORB, that reports performance on seven diverse reading comprehension datasets.
The evaluation server places no restrictions on how models are trained, so it is a suitable test bed for exploring training paradigms and representation learning.
arXiv Detail & Related papers (2019-12-29T07:27:23Z)