Automating Explanation Need Management in App Reviews: A Case Study from the Navigation App Industry
- URL: http://arxiv.org/abs/2501.08087v1
- Date: Tue, 14 Jan 2025 12:57:16 GMT
- Title: Automating Explanation Need Management in App Reviews: A Case Study from the Navigation App Industry
- Authors: Martin Obaidi, Nicolas Voß, Jakob Droste, Hannah Deters, Marc Herrmann, Jannik Fischbach, Kurt Schneider
- Abstract summary: This paper proposes a semi-automated approach to managing explanation needs in user reviews. The approach leverages taxonomy categories to classify reviews and assign them to relevant internal teams or sources for responses. 2,366 app reviews from the Google Play Store and Apple App Store were scraped and analyzed using a word and phrase filtering system to detect explanation needs.
- Score: 1.6431822728701062
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Providing explanations in response to user reviews is a time-consuming and repetitive task for companies, as many reviews present similar issues requiring nearly identical responses. To improve efficiency, this paper proposes a semi-automated approach to managing explanation needs in user reviews. The approach leverages taxonomy categories to classify reviews and assign them to relevant internal teams or sources for responses. 2,366 app reviews from the Google Play Store and Apple App Store were scraped and analyzed using a word and phrase filtering system to detect explanation needs. The detected needs were categorized and assigned to specific internal teams at the company Graphmasters GmbH, using a hierarchical assignment strategy that prioritizes the most relevant teams. Additionally, external sources, such as existing support articles and past review responses, were integrated to provide comprehensive explanations. The system was evaluated through interviews and surveys with the Graphmasters support team, which consists of four employees. The results showed that the hierarchical assignment method improved the accuracy of team assignments, with correct teams being identified in 79.2% of cases. However, challenges in interrater agreement and the need for new responses in certain cases, particularly for Apple App Store reviews, were noted. Future work will focus on refining the taxonomy and enhancing the automation process to reduce manual intervention further.
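The core of the approach, phrase-based detection of explanation needs followed by a priority-ordered team assignment, lends itself to a short sketch. In the minimal Python illustration below, the phrase lists and team names are invented placeholders rather than the filter terms or taxonomy used in the paper:

```python
# Illustrative sketch of the two pipeline stages described above:
# (1) keyword/phrase filtering to flag reviews with explanation needs,
# (2) hierarchical assignment to the first matching internal team.
# All phrases and team names are invented placeholders, not the
# filter terms or taxonomy from the paper.

NEED_PHRASES = ["why", "how do i", "what does", "doesn't explain", "confusing"]

# Ordered by priority: the first team whose keywords match wins,
# mirroring the strategy of preferring the most relevant team.
TEAM_RULES = [
    ("routing", ["route", "detour", "navigation"]),
    ("billing", ["subscription", "payment", "refund"]),
    ("support", []),  # fallback team if nothing more specific matches
]

def has_explanation_need(review: str) -> bool:
    """Flag a review if any filter phrase occurs in it."""
    text = review.lower()
    return any(phrase in text for phrase in NEED_PHRASES)

def assign_team(review: str) -> str:
    """Walk the priority-ordered rules and return the first match."""
    text = review.lower()
    for team, keywords in TEAM_RULES:
        if not keywords or any(kw in text for kw in keywords):
            return team
    return "support"  # defensive default

reviews = [
    "Why does the app reroute me without any explanation?",
    "How do I cancel my subscription?",
]
for r in reviews:
    if has_explanation_need(r):
        print(assign_team(r), "<-", r)
```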
Related papers
- What About Emotions? Guiding Fine-Grained Emotion Extraction from Mobile App Reviews [3.24647377768909]
Fine-grained emotion classification in app reviews remains underexplored. Our study adapts Plutchik's emotion taxonomy to app reviews by developing a structured annotation framework and dataset. We evaluate the feasibility of automating emotion annotation using large language models.
arXiv Detail & Related papers (2025-05-29T13:58:38Z)
- AutoRev: Automatic Peer Review System for Academic Research Papers [9.269282930029856]
AutoRev is an automatic peer review system for academic research papers. Our framework represents an academic document as a graph, enabling the extraction of the most critical passages. When applied to review generation, our method outperforms SOTA baselines by an average of 58.72%.
arXiv Detail & Related papers (2025-05-20T13:59:58Z)
- SEAGraph: Unveiling the Whole Story of Paper Review Comments [26.39115060771725]
In the traditional peer review process, authors often receive vague or insufficiently detailed feedback. This raises the critical question of how to enhance authors' comprehension of review comments. We present SEAGraph, a novel framework developed to clarify review comments by uncovering the underlying intentions.
arXiv Detail & Related papers (2024-12-16T16:24:36Z)
- Incremental Extractive Opinion Summarization Using Cover Trees [81.59625423421355]
In online marketplaces, user reviews accumulate over time, and opinion summaries need to be updated periodically.
In this work, we study the task of extractive opinion summarization in an incremental setting.
We present an efficient algorithm for accurately computing the CentroidRank summaries in an incremental setting.
arXiv Detail & Related papers (2024-01-16T02:00:17Z)
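As a rough illustration of the centroid idea in the entry above: a running sum lets the centroid be updated in constant time per new review, after which reviews are ranked by distance to it. The cover-tree index that makes the paper's nearest-to-centroid queries efficient is omitted here, and the embeddings are toy vectors:

```python
# Minimal sketch of centroid-based extractive summarization in an
# incremental setting: maintain a running sum of review embeddings so
# the centroid updates in O(d) per review, then rank reviews by
# distance to the centroid. The cover-tree machinery from the paper
# is not shown.
import numpy as np

class IncrementalCentroidSummarizer:
    def __init__(self, dim: int):
        self.sum = np.zeros(dim)
        self.count = 0
        self.items = []  # (text, embedding) pairs seen so far

    def add(self, text: str, emb: np.ndarray) -> None:
        self.sum += emb
        self.count += 1
        self.items.append((text, emb))

    def summary(self, k: int) -> list[str]:
        centroid = self.sum / self.count
        ranked = sorted(self.items,
                        key=lambda it: np.linalg.norm(it[1] - centroid))
        return [text for text, _ in ranked[:k]]

s = IncrementalCentroidSummarizer(dim=3)
s.add("great routing", np.array([1.0, 0.0, 0.0]))
s.add("awful ads",     np.array([0.0, 1.0, 0.0]))
s.add("okay overall",  np.array([0.5, 0.5, 0.0]))
print(s.summary(k=1))  # the review closest to the running centroid
```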
- Code Review Automation: Strengths and Weaknesses of the State of the Art [14.313783664862923]
We examine how three code review automation techniques tend to succeed or fail in the two tasks studied in this paper.
The study has a strong qualitative focus, with 105 man-hours of manual inspection invested in analyzing correct and wrong predictions.
arXiv Detail & Related papers (2024-01-10T13:00:18Z)
- Explanation Needs in App Reviews: Taxonomy and Automated Detection [2.545133021829296]
We explore the need for explanation expressed by users in app reviews.
We manually coded a set of 1,730 app reviews from 8 apps and derived a taxonomy of Explanation Needs.
Our best classifier identifies Explanation Needs in 486 unseen reviews of 4 different apps with a weighted F-score of 86%.
arXiv Detail & Related papers (2023-07-10T06:48:01Z)
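For a sense of how such a detector is trained and scored, here is a generic sketch using an off-the-shelf TF-IDF classifier evaluated with a weighted F-score, the metric reported in the entry above; the paper's actual model, features, and data are not reproduced:

```python
# Illustrative baseline for detecting Explanation Needs in reviews,
# scored with a weighted F-score. Toy data; not the paper's classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

train_texts = ["why does this happen?", "great app",
               "how does rerouting work?", "love the design"]
train_labels = [1, 0, 1, 0]   # 1 = review expresses an explanation need

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

test_texts = ["why was my route changed?", "nice update"]
test_labels = [1, 0]
pred = clf.predict(test_texts)
print("weighted F1:", f1_score(test_labels, pred, average="weighted"))
```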
- No more Reviewer #2: Subverting Automatic Paper-Reviewer Assignment using Adversarial Learning [25.70062566419791]
We show that this automation can be manipulated using adversarial learning.
We propose an attack that adapts a given paper so that it misleads the assignment and selects its own reviewers.
arXiv Detail & Related papers (2023-03-25T11:34:27Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
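The prototypical few-shot idea in the entry above can be sketched directly: average a few instructor-labeled examples per feedback class into prototypes, then label new submissions by the nearest prototype. The paper's transformer encoder is replaced here with toy embedding vectors:

```python
# Sketch of prototypical few-shot classification: one prototype per
# feedback class (the mean of its support embeddings), with new
# examples labeled by the nearest prototype. Embeddings are toy
# vectors standing in for the paper's learned encoder.
import numpy as np

def prototypes(support: dict[str, list[np.ndarray]]) -> dict[str, np.ndarray]:
    """One prototype per feedback class: the mean of its support embeddings."""
    return {label: np.mean(vecs, axis=0) for label, vecs in support.items()}

def classify(query: np.ndarray, protos: dict[str, np.ndarray]) -> str:
    """Assign the label of the nearest prototype (Euclidean distance)."""
    return min(protos, key=lambda label: np.linalg.norm(query - protos[label]))

support = {
    "off_by_one":   [np.array([1.0, 0.1]), np.array([0.9, 0.0])],
    "wrong_output": [np.array([0.0, 1.0]), np.array([0.1, 0.9])],
}
protos = prototypes(support)
print(classify(np.array([0.8, 0.2]), protos))  # -> "off_by_one"
```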
- Emerging App Issue Identification via Online Joint Sentiment-Topic Tracing [66.57888248681303]
We propose a novel emerging issue detection approach named MERIT.
Based on the AOBST model, we infer the topics negatively reflected in user reviews for one app version.
Experiments on popular apps from Google Play and Apple's App Store demonstrate the effectiveness of MERIT.
arXiv Detail & Related papers (2020-08-23T06:34:05Z)
- On the Social and Technical Challenges of Web Search Autosuggestion Moderation [118.47867428272878]
Autosuggestions are typically generated by machine learning (ML) systems trained on a corpus of search logs and document representations.
While current search engines have become increasingly proficient at suppressing such problematic suggestions, persistent issues remain.
We discuss several dimensions of problematic suggestions, difficult issues along the pipeline, and why our discussion applies to the increasing number of applications beyond web search.
arXiv Detail & Related papers (2020-07-09T19:22:00Z)
- Asking and Answering Questions to Evaluate the Factual Consistency of Summaries [80.65186293015135]
We propose an automatic evaluation protocol called QAGS (pronounced "kags") to identify factual inconsistencies in a generated summary.
QAGS is based on the intuition that if we ask questions about a summary and its source, we will receive similar answers if the summary is factually consistent with the source.
We believe QAGS is a promising tool in automatically generating usable and factually consistent text.
arXiv Detail & Related papers (2020-04-08T20:01:09Z)
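The scoring step behind that intuition can be sketched as follows: given answers to the same generated questions obtained from the summary and from the source, average their token-level F1 overlap; higher agreement suggests the summary is factually consistent. The question-generation and QA models QAGS relies on are assumed to run upstream and are not shown:

```python
# Sketch of a QAGS-style scoring step: compare answers drawn from the
# summary against answers drawn from the source using token-level F1,
# then average across questions. Answer pairs here are invented.
from collections import Counter

def token_f1(a: str, b: str) -> float:
    """F1 over shared tokens, as commonly used to compare QA answers."""
    ta, tb = a.lower().split(), b.lower().split()
    common = sum((Counter(ta) & Counter(tb)).values())
    if common == 0:
        return 0.0
    precision, recall = common / len(ta), common / len(tb)
    return 2 * precision * recall / (precision + recall)

def consistency_score(answer_pairs: list[tuple[str, str]]) -> float:
    """Average answer agreement across all generated questions."""
    return sum(token_f1(a, b) for a, b in answer_pairs) / len(answer_pairs)

# (answer from summary, answer from source) per generated question
pairs = [("14 January 2025", "14 January 2025"),
         ("the support team", "four support employees")]
print(consistency_score(pairs))
```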
- Automating App Review Response Generation [67.58267006314415]
We propose RRGen, a novel approach that automatically generates review responses by learning knowledge relations between reviews and their responses.
Experiments on 58 apps and 309,246 review-response pairs highlight that RRGen outperforms the baselines by at least 67.4% in terms of BLEU-4.
arXiv Detail & Related papers (2020-02-10T05:23:38Z)
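For reference, BLEU-4, the metric quoted in the entry above, can be computed with NLTK. The reference and candidate responses below are invented examples, and smoothing is applied because short sentences often lack higher-order n-gram matches:

```python
# Illustrative BLEU-4 computation with NLTK on invented example
# responses; this is the metric, not the paper's RRGen model.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = ["thanks", "for", "the", "feedback", "we", "are", "looking", "into", "it"]
candidate = ["thanks", "for", "the", "feedback", "we", "will", "look", "into", "it"]

score = sentence_bleu([reference], candidate,
                      weights=(0.25, 0.25, 0.25, 0.25),
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU-4: {score:.3f}")
```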
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.