Explanation Needs in App Reviews: Taxonomy and Automated Detection
- URL: http://arxiv.org/abs/2307.04367v1
- Date: Mon, 10 Jul 2023 06:48:01 GMT
- Title: Explanation Needs in App Reviews: Taxonomy and Automated Detection
- Authors: Max Unterbusch, Mersedeh Sadeghi, Jannik Fischbach, Martin Obaidi,
Andreas Vogelsang
- Abstract summary: We explore the need for explanation expressed by users in app reviews.
We manually coded a set of 1,730 app reviews from 8 apps and derived a taxonomy of Explanation Needs.
Our best classifier identifies Explanation Needs in 486 unseen reviews of 4 different apps with a weighted F-score of 86%.
- Score: 2.545133021829296
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explainability, i.e. the ability of a system to explain its behavior to
users, has become an important quality of software-intensive systems. Recent
work has focused on methods for generating explanations for various algorithmic
paradigms (e.g., machine learning, self-adaptive systems). There is relatively
little work on what situations and types of behavior should be explained. There
is also a lack of support for eliciting explainability requirements. In this
work, we explore the need for explanation expressed by users in app reviews. We
manually coded a set of 1,730 app reviews from 8 apps and derived a taxonomy of
Explanation Needs. We also explore several approaches to automatically identify
Explanation Needs in app reviews. Our best classifier identifies Explanation
Needs in 486 unseen reviews of 4 different apps with a weighted F-score of 86%.
Our work contributes to a better understanding of users' Explanation Needs.
Automated tools can help engineers focus on these needs and ultimately elicit
valid Explanation Needs.
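The abstract does not say which of the explored classifiers performed best, so the following is only a minimal sketch of the detection task itself: a TF-IDF baseline over a hypothetical labeled CSV (`reviews.csv` and its `text` / `explanation_need` columns are assumed names, not artifacts of the paper).

```python
# Minimal sketch of an Explanation Need detector (a plausible baseline,
# not necessarily the paper's best classifier). Assumes a hypothetical
# reviews.csv with columns "text" and "explanation_need" (0/1).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("reviews.csv")  # hypothetical path
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["explanation_need"], test_size=0.2, random_state=42
)

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(X_train, y_train)

# The paper reports a weighted F-score, so evaluate with average="weighted".
print(f1_score(y_test, clf.predict(X_test), average="weighted"))
```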
Related papers
- From App Features to Explanation Needs: Analyzing Correlations and Predictive Potential [2.2139415366377375]
This study investigates whether explanation needs, classified from user reviews, can be predicted from app properties.
We analyzed a gold-standard dataset of 4,495 app reviews enriched with metadata.
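As a sketch of that prediction setup, assuming purely tabular app metadata as input (the file and column names below are hypothetical, not the study's actual app properties):

```python
# Sketch: predicting whether a review expresses an explanation need from
# app-level metadata alone (feature names are illustrative assumptions).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("reviews_with_metadata.csv")  # hypothetical gold-standard export
features = df[["app_category_id", "avg_rating", "num_downloads"]]  # assumed columns
labels = df["explanation_need"]

model = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(model, features, labels, cv=5, scoring="f1_weighted").mean())
```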
arXiv Detail & Related papers (2025-08-05T19:46:13Z)
- Automatic Generation of Explainability Requirements and Software Explanations From User Reviews [2.2392379251177696]
This work contributes to the advancement of explainability requirements in software systems by introducing an automated approach to derive requirements from user reviews and generate corresponding explanations.
We created a dataset of 58 user reviews, each annotated with manually crafted explainability requirements and explanations.
Our evaluation shows that while AI-generated requirements often lack relevance and correctness compared to human-created ones, the AI-generated explanations are frequently preferred for their clarity and style.
arXiv Detail & Related papers (2025-07-10T00:03:36Z)
- Do Users' Explainability Needs in Software Change with Mood? [2.42509778995617]
We investigate the influence of a user's subjective mood and objective demographic aspects on explanation needs, measured by the frequency and type of explanations.
We conclude that the need for explanations is highly subjective and only partially depends on objective factors.
arXiv Detail & Related papers (2025-02-10T15:12:41Z)
- Automating Explanation Need Management in App Reviews: A Case Study from the Navigation App Industry [1.6431822728701062]
This paper proposes a semi-automated approach to managing explanation needs in user reviews.
The approach leverages taxonomy categories to classify reviews and assign them to relevant internal teams or sources for responses.
A total of 2,366 app reviews from the Google Play Store and Apple App Store were scraped and analyzed using a word- and phrase-filtering system to detect explanation needs.
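A minimal sketch of such a filter (the patterns below are illustrative; the case study's actual word and phrase lists are not given in the summary):

```python
# Sketch of a word- and phrase-filtering pass over scraped reviews.
# The keyword list is an assumption, not the one used in the case study.
import re

EXPLANATION_NEED_PATTERNS = [
    r"\bwhy\b", r"\bhow come\b", r"\bdoesn'?t make sense\b",
    r"\bno idea (why|how)\b", r"\bconfus(ed|ing)\b",
]

def needs_explanation(review: str) -> bool:
    """Flag a review if any pattern suggests an explanation need."""
    return any(re.search(p, review, flags=re.IGNORECASE)
               for p in EXPLANATION_NEED_PATTERNS)

print(needs_explanation("Why does the app reroute me for no reason?"))  # True
```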
arXiv Detail & Related papers (2025-01-14T12:57:16Z)
- Explanations in Everyday Software Systems: Towards a Taxonomy for Explainability Needs [1.4503034354870523]
We present the results of an online survey with 84 participants.
We identified and classified 315 explainability needs from the survey answers.
We present two major contributions of this work.
arXiv Detail & Related papers (2024-04-25T14:34:10Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution about the usefulness of saliency-based explanations and their potential to be misunderstood.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Explaining Explanation: An Empirical Study on Explanation in Code Reviews [17.005837826213416]
We study the types of explanations used in code reviews and explore the potential of Large Language Models (LLMs) in generating these explanations.
We extracted 793 code review comments from Gerrit and manually labeled them based on whether they contained a suggestion, an explanation, or both.
Our analysis shows that 42% of comments only include suggestions without explanations.
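The summary does not say how the LLMs were applied; one plausible setup is prompting a chat model to label each comment (the model name and prompt below are assumptions, not the paper's protocol):

```python
# Sketch: labeling a code review comment as a suggestion, an explanation,
# or both, via an LLM. Model choice and prompt are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def label_comment(comment: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Classify the code review comment as exactly one of: "
                        "'suggestion', 'explanation', or 'both'."},
            {"role": "user", "content": comment},
        ],
    )
    return response.choices[0].message.content.strip()

print(label_comment("Use a set here; membership tests are O(1) instead of O(n)."))
```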
arXiv Detail & Related papers (2023-11-15T15:08:38Z)
- Explaining $\mathcal{ELH}$ Concept Descriptions through Counterfactual Reasoning [3.5323691899538128]
An intrinsically transparent way to do classification is by using concepts in description logics.
One solution is to employ counterfactuals to answer the question, "How must feature values be changed to obtain a different classification?"
arXiv Detail & Related papers (2023-01-12T16:06:06Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback on student code for a new programming question from just a few instructor-provided examples.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier-1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
- Search Methods for Sufficient, Socially-Aligned Feature Importance Explanations with In-Distribution Counterfactuals [72.00815192668193]
Feature importance (FI) estimates are a popular form of explanation, and they are commonly created and evaluated by computing the change in model confidence caused by removing certain input features at test time.
We study several under-explored dimensions of FI-based explanations, providing conceptual and empirical improvements for this form of explanation.
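As a concrete instance of the removal-based scheme the summary describes, here is a small sketch (zeroing a feature as the "removal" operation is one common convention, an assumption rather than the paper's exact definition):

```python
# Sketch of removal-based feature importance: FI(i) = confidence on the
# full input minus confidence with feature i removed (zeroed out here).
import numpy as np

def feature_importance(model_confidence, x: np.ndarray) -> np.ndarray:
    """model_confidence: callable mapping a feature vector to P(predicted class)."""
    base = model_confidence(x)
    scores = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        x_removed = x.copy()
        x_removed[i] = 0.0  # ablate feature i
        scores[i] = base - model_confidence(x_removed)  # drop in confidence
    return scores
```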
arXiv Detail & Related papers (2021-06-01T20:36:48Z)
- Brain-inspired Search Engine Assistant based on Knowledge Graph [53.89429854626489]
DeveloperBot is a brain-inspired search engine assistant based on a knowledge graph.
It constructs a multi-layer query graph by splitting a complex multi-constraint query into several ordered constraints.
It then models the constraint reasoning process as a subgraph search process inspired by the spreading activation model from cognitive science.
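As a toy illustration of spreading activation (the graph, decay factor, and threshold below are invented for the example, not DeveloperBot's actual components):

```python
# Spreading activation over a toy knowledge graph: activation flows from a
# source node to neighbors, decaying at each hop until below a threshold.
from collections import defaultdict

graph = {  # toy knowledge graph: node -> neighbors
    "query": ["api", "library"],
    "api": ["rest", "sdk"],
    "library": ["sdk"],
    "rest": [], "sdk": [],
}

def spread_activation(source: str, decay: float = 0.5, threshold: float = 0.1):
    activation = defaultdict(float)
    activation[source] = 1.0
    frontier = [source]
    while frontier:
        node = frontier.pop(0)
        out = activation[node] * decay
        if out < threshold:
            continue  # activation too weak to propagate further
        for neighbor in graph[node]:
            activation[neighbor] += out
            frontier.append(neighbor)
    return dict(activation)

print(spread_activation("query"))
```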
arXiv Detail & Related papers (2020-12-25T06:36:11Z)
- This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning [59.17685450892182]
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach significantly outperforms two state-of-the-art systems in terms of mental models, explanation satisfaction, trust, emotions, and self-efficacy.
arXiv Detail & Related papers (2020-12-22T10:08:05Z)
- A Knowledge Driven Approach to Adaptive Assistance Using Preference Reasoning and Explanation [3.8673630752805432]
We propose that the robot use Analogical Theory of Mind to infer what the user is trying to do.
If the user is unsure or confused, the robot provides the user with an explanation.
arXiv Detail & Related papers (2020-12-05T00:18:43Z)
- Comparative Sentiment Analysis of App Reviews [0.0]
This study aims to perform sentiment classification of app reviews and to identify university students' behavior towards the app market.
We applied machine learning algorithms using the TF-IDF text representation scheme and evaluated performance with an ensemble learning method.
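A minimal sketch of that pipeline, assuming soft-voting over two standard base learners (the summary does not name the ensemble's members):

```python
# Sketch: TF-IDF features fed into a soft-voting ensemble for review
# sentiment. Base learners and toy data are assumptions for illustration.
from sklearn.ensemble import VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

reviews = ["Love this app!", "Crashes constantly, useless."]  # toy data
sentiments = ["positive", "negative"]

ensemble = VotingClassifier([
    ("lr", LogisticRegression(max_iter=1000)),
    ("nb", MultinomialNB()),
], voting="soft")

model = make_pipeline(TfidfVectorizer(), ensemble)
model.fit(reviews, sentiments)
print(model.predict(["Great but why does it log me out?"]))
```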
arXiv Detail & Related papers (2020-06-17T09:28:07Z)
- An Information-Theoretic Approach to Personalized Explainable Machine Learning [92.53970625312665]
We propose a simple probabilistic model for the predictions and user knowledge.
We quantify the effect of an explanation by the conditional mutual information between the explanation and prediction.
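In this framing, the value of an explanation E for a prediction Ŷ, given user background knowledge U, is a conditional mutual information (the symbols here are one plausible reading of the summary, not necessarily the paper's exact notation):

$$
I(E;\hat{Y}\mid U)=\mathbb{E}\left[\log\frac{p(E,\hat{Y}\mid U)}{p(E\mid U)\,p(\hat{Y}\mid U)}\right]
$$

An explanation is effective to the extent that it carries information about the prediction beyond what the user already knows.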
arXiv Detail & Related papers (2020-03-01T13:06:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.