Teddy: A System for Interactive Review Analysis
- URL: http://arxiv.org/abs/2001.05171v1
- Date: Wed, 15 Jan 2020 08:19:01 GMT
- Title: Teddy: A System for Interactive Review Analysis
- Authors: Xiong Zhang and Jonathan Engel and Sara Evensen and Yuliang Li and
Çağatay Demiralp and Wang-Chiew Tan
- Abstract summary: Data scientists analyze reviews by developing rules and models to extract, aggregate, and understand information embedded in the review text.
Teddy is an interactive system that enables data scientists to quickly obtain insights from reviews and improve their extraction and modeling pipelines.
- Score: 17.53582677866512
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reviews are integral to e-commerce services and products. They contain a
wealth of information about the opinions and experiences of users, which can
help better understand consumer decisions and improve user experience with
products and services. Today, data scientists analyze reviews by developing
rules and models to extract, aggregate, and understand information embedded in
the review text. However, working with thousands of reviews, which typically
consist of noisy, incomplete text, can be daunting without proper tools. Here we
first contribute results from an interview study that we conducted with fifteen
data scientists who work with review text, providing insights into their
practices and challenges. Results suggest data scientists need interactive
systems for many review analysis tasks. In response we introduce Teddy, an
interactive system that enables data scientists to quickly obtain insights from
reviews and improve their extraction and modeling pipelines.
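To illustrate the kind of rule-based extraction and aggregation pipeline the abstract describes, below is a minimal sketch in Python. The lexicons, function names, and scoring scheme are hypothetical illustrations, not part of Teddy itself.

```python
import re
from collections import Counter

# Hypothetical aspect and polarity lexicons; a real pipeline would use
# learned models or much larger curated dictionaries.
ASPECTS = {"battery", "screen", "price", "shipping", "service"}
POSITIVE = {"great", "good", "excellent", "fast", "love"}
NEGATIVE = {"bad", "poor", "slow", "broken", "hate"}

def extract_aspect_sentiment(review: str) -> Counter:
    """Count (aspect, polarity) pairs that co-occur in the same sentence."""
    counts = Counter()
    for sentence in re.split(r"[.!?]+", review.lower()):
        tokens = set(re.findall(r"[a-z']+", sentence))
        for aspect in ASPECTS & tokens:
            if tokens & POSITIVE:
                counts[(aspect, "positive")] += 1
            if tokens & NEGATIVE:
                counts[(aspect, "negative")] += 1
    return counts

def aggregate(reviews: list[str]) -> Counter:
    """Aggregate aspect-level sentiment counts over a review collection."""
    total = Counter()
    for review in reviews:
        total += extract_aspect_sentiment(review)
    return total

if __name__ == "__main__":
    demo = [
        "Great screen and the battery lasts all day!",
        "Shipping was slow, and the screen arrived broken.",
    ]
    print(aggregate(demo))
```

In practice, data scientists iterate on rules and models like these; the abstract's point is that interactive tooling makes that iteration far less daunting at the scale of thousands of reviews.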
Related papers
- A Literature Review of Literature Reviews in Pattern Analysis and Machine Intelligence [58.6354685593418]
This paper proposes several article-level, field-normalized, and large language model-empowered bibliometric indicators to evaluate reviews.
The newly emerging AI-generated literature reviews are also appraised.
This work offers insights into the current challenges of literature reviews and envisions future directions for their development.
arXiv Detail & Related papers (2024-02-20T11:28:50Z) - UltraFeedback: Boosting Language Models with Scaled AI Feedback [99.4633351133207]
We present UltraFeedback, a large-scale, high-quality, and diversified AI feedback dataset.
Our work validates the effectiveness of scaled AI feedback data in constructing strong open-source chat language models.
arXiv Detail & Related papers (2023-10-02T17:40:01Z) - Natural Language Processing in Customer Service: A Systematic Review [0.0]
This review examines existing research on the use of NLP technology in customer service.
It includes papers from five major scientific databases.
Twitter was the second most commonly used dataset.
arXiv Detail & Related papers (2022-12-16T18:17:07Z) - 5-Star Hotel Customer Satisfaction Analysis Using Hybrid Methodology [0.0]
Our research suggests a new way to find factors for customer satisfaction through review data.
Unlike many past studies on customer satisfaction, our research takes a novel approach.
arXiv Detail & Related papers (2022-09-26T04:53:10Z) - Beyond Opinion Mining: Summarizing Opinions of Customer Reviews [20.534293365703427]
This three-hour tutorial will provide a comprehensive overview of major advances in opinion summarization.
Attendees will be equipped with knowledge useful for both research and practical applications.
arXiv Detail & Related papers (2022-06-03T12:43:40Z) - SIFN: A Sentiment-aware Interactive Fusion Network for Review-based Item
Recommendation [48.1799451277808]
We propose a Sentiment-aware Interactive Fusion Network (SIFN) for review-based item recommendation.
We first encode user/item reviews via BERT and propose a lightweight sentiment learner to extract semantic features of each review.
Then, we propose a sentiment prediction task that guides the sentiment learner to extract sentiment-aware features via explicit sentiment labels (a minimal sketch of this setup appears after the list below).
arXiv Detail & Related papers (2021-08-18T08:04:38Z) - Can We Automate Scientific Reviewing? [89.50052670307434]
We discuss the possibility of using state-of-the-art natural language processing (NLP) models to generate first-pass peer reviews for scientific papers.
We collect a dataset of papers in the machine learning domain, annotate them with different aspects of content covered in each review, and train targeted summarization models that take in papers to generate reviews.
Comprehensive experimental results show that system-generated reviews tend to touch upon more aspects of the paper than human-written reviews.
arXiv Detail & Related papers (2021-01-30T07:16:53Z) - Are Top School Students More Critical of Their Professors? Mining
Comments on RateMyProfessor.com [83.2634062100579]
Student reviews and comments on RateMyProfessor.com reflect realistic learning experiences of students.
Our study shows that student reviews and comments contain crucial information and can serve as essential references for enrollment in courses and universities.
arXiv Detail & Related papers (2021-01-23T20:01:36Z) - Sentiment Analysis on Customer Responses [0.0]
We present an analysis of customer feedback reviews on products, utilizing opinion mining, text mining, and sentiment analysis.
This paper provides a sentiment analysis of opinions on various smartphones, classifying them as positive, negative, or neutral.
arXiv Detail & Related papers (2020-07-05T04:50:40Z) - Topic Detection and Summarization of User Reviews [6.779855791259679]
We propose an effective new summarization method by analyzing both reviews and summaries.
A new dataset comprising product reviews and summaries for 1028 products is collected from Amazon and CNET.
arXiv Detail & Related papers (2020-05-30T02:19:08Z) - ORB: An Open Reading Benchmark for Comprehensive Evaluation of Machine
Reading Comprehension [53.037401638264235]
We present an evaluation server, ORB, that reports performance on seven diverse reading comprehension datasets.
The evaluation server places no restrictions on how models are trained, so it is a suitable test bed for exploring training paradigms and representation learning.
arXiv Detail & Related papers (2019-12-29T07:27:23Z)
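The SIFN entry above describes a concrete architecture: reviews encoded with BERT, a lightweight sentiment learner over the encoded features, and an auxiliary sentiment prediction task driven by explicit labels. Below is a minimal PyTorch sketch of that general setup; the module names, dimensions, loss weighting, and the rating head are assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class SentimentAwareEncoder(nn.Module):
    """Sketch of a BERT review encoder with a lightweight sentiment head.

    The auxiliary head is trained on explicit sentiment labels so that the
    pooled review features become sentiment-aware; the same features then
    feed a downstream rating objective standing in for recommendation.
    """

    def __init__(self, model_name: str = "bert-base-uncased",
                 hidden: int = 64, num_classes: int = 2):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        dim = self.bert.config.hidden_size
        # Lightweight sentiment learner: a small MLP over the [CLS] vector.
        self.sentiment_head = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, num_classes)
        )
        # Rating head standing in for the recommendation objective.
        self.rating_head = nn.Linear(dim, 1)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # pooled review representation
        return self.sentiment_head(cls), self.rating_head(cls).squeeze(-1)

def joint_loss(sent_logits, ratings, sent_labels, rating_targets, alpha=0.5):
    """Auxiliary sentiment loss guides the encoder; alpha is an assumed weight."""
    sent = nn.functional.cross_entropy(sent_logits, sent_labels)
    rate = nn.functional.mse_loss(ratings, rating_targets)
    return rate + alpha * sent
```

The key design idea reflected here is multi-task supervision: the explicit sentiment labels shape the shared review representation so the recommendation objective sees sentiment-aware features rather than generic text embeddings.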