On the Automated Processing of User Feedback
- URL: http://arxiv.org/abs/2407.15519v1
- Date: Mon, 22 Jul 2024 10:13:13 GMT
- Title: On the Automated Processing of User Feedback
- Authors: Walid Maalej, Volodymyr Biryuk, Jialiang Wei, Fabian Panse
- Abstract summary: User feedback is an increasingly important source of information for requirements engineering, user interface design, and software engineering.
To tap the full potential of feedback, there are two main challenges that need to be solved.
First, vendors must cope with a large quantity of feedback data, which is hard to manage manually.
Second, vendors must also cope with a varying quality of feedback as some items might be uninformative, repetitive, or simply wrong.
- Score: 7.229732269884235
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: User feedback is becoming an increasingly important source of information for requirements engineering, user interface design, and software engineering in general. Nowadays, user feedback is largely available and easily accessible in social media, product forums, or app stores. Over the last decade, research has shown that user feedback can help software teams: a) better understand how users are actually using specific product features and components, b) identify, reproduce, and fix defects faster, and c) get inspiration for improvements or new features. However, to tap the full potential of feedback, there are two main challenges that need to be solved. First, software vendors must cope with a large quantity of feedback data, which is hard to manage manually. Second, vendors must also cope with a varying quality of feedback as some items might be uninformative, repetitive, or simply wrong. This chapter summarises and pipelines various data mining, machine learning, and natural language processing techniques, including recent Large Language Models, to cope with the quantity and quality challenges. We guide researchers and practitioners through implementing effective, actionable analysis of user feedback for software and requirements engineering.
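A minimal sketch of such a pipeline, assuming scikit-learn: a quality step (near-duplicate filtering) followed by a quantity step (automatic classification). The toy data, labels, and 0.8 similarity threshold are illustrative assumptions, not the chapter's actual method.
```python
# Quality + quantity sketch: filter near-duplicates, then classify.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity

feedback = [
    "App crashes when I open the settings page",
    "Please add a dark mode option",
    "great app!!!",
    "App crashes when opening the settings page",
]
labels = ["bug report", "feature request", "praise", "bug report"]

vec = TfidfVectorizer()
X = vec.fit_transform(feedback)

# Quality: drop items too similar to an earlier one (near-duplicates).
sim = cosine_similarity(X)
keep = [i for i in range(len(feedback))
        if not any(sim[i, j] > 0.8 for j in range(i))]

# Quantity: route the remaining items with a supervised classifier
# (trained here on the toy data itself; real pipelines need labelled sets).
clf = LogisticRegression().fit(X, labels)
for i in keep:
    print(clf.predict(X[i])[0], "->", feedback[i])
```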
Related papers
- GUI Agents: A Survey [129.94551809688377]
Graphical User Interface (GUI) agents, powered by Large Foundation Models, have emerged as a transformative approach to automating human-computer interaction.
Motivated by the growing interest and fundamental importance of GUI agents, we provide a comprehensive survey that categorizes their benchmarks, evaluation metrics, architectures, and training methods.
arXiv Detail & Related papers (2024-12-18T04:48:28Z)
- You're (Not) My Type -- Can LLMs Generate Feedback of Specific Types for Introductory Programming Tasks? [0.4779196219827508]
This paper aims to generate specific types of feedback for programming tasks using Large Language Models (LLMs).
We revisit existing feedback to capture the specifics of the generated feedback, such as randomness, uncertainty, and degrees of variation.
Results have implications for future feedback research with regard to, for example, feedback effects and learners' informational needs.
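A hedged sketch of the kind of type-constrained feedback generation the paper above investigates; the model name, prompt wording, and the feedback type shown here are assumptions for illustration, not the paper's exact setup.
```python
# Ask an LLM for one specific feedback type only, withholding the solution.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

student_code = "def mean(xs):\n    return sum(xs) / len(xs)"
feedback_type = "knowledge about mistakes"  # hypothetical type label

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": f"Give only feedback of the type '{feedback_type}'. "
                    "Point out mistakes; do not reveal a corrected solution."},
        {"role": "user",
         "content": "Task: compute the mean of a list, returning 0 for an "
                    f"empty list.\n\nStudent code:\n{student_code}"},
    ],
)
print(response.choices[0].message.content)
```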
arXiv Detail & Related papers (2024-12-04T17:57:39Z)
- AllHands: Ask Me Anything on Large-scale Verbatim Feedback via Large Language Models [34.82568259708465]
AllHands is an innovative analytic framework designed for large-scale feedback analysis through a natural language interface.
It incorporates large language models (LLMs) to enhance accuracy, robustness, generalization, and user-friendliness.
AllHands delivers comprehensive multi-modal responses, including text, code, tables, and images.
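A rough sketch of the ask-anything pattern this summary describes, assuming an LLM translates a natural-language question into pandas code over a feedback table; the schema, prompt, and direct eval() are illustrative only, and generated code would need sandboxing in practice.
```python
# NL question -> generated pandas expression -> executed answer.
import pandas as pd
from openai import OpenAI

df = pd.DataFrame({
    "text": ["crashes on login", "love the new UI", "add export to CSV"],
    "category": ["bug", "praise", "feature request"],
})

client = OpenAI()
question = "How many feedback items are bug reports?"
prompt = (f"A pandas DataFrame `df` has columns {list(df.columns)}. "
          f"Write a single pandas expression answering: {question} "
          "Return only the expression, no backticks or explanation.")
code = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

print(eval(code))  # e.g. (df['category'] == 'bug').sum() -> 1
```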
arXiv Detail & Related papers (2024-03-22T12:13:16Z)
- Extracting Self-Consistent Causal Insights from Users Feedback with LLMs and In-context Learning
Microsoft Windows Feedback Hub is designed to receive customer feedback on a wide variety of subjects including critical topics such as power and battery.
To better understand and triage issues, we leverage Double Machine Learning (DML) to associate users' feedback with telemetry signals.
Our approach is able to extract previously known issues, uncover new bugs, and identify sequences of events that lead to a bug.
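A minimal partially-linear Double Machine Learning sketch in the spirit of this entry: residualize outcome and treatment on covariates with cross-fitted ML models, then estimate the effect from the residuals. The synthetic data and variable names are invented for illustration.
```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))                           # device/usage covariates
T = (X[:, 0] + rng.normal(size=n) > 0).astype(float)  # feedback-derived flag
Y = 2.0 * T + X[:, 1] + rng.normal(size=n)            # telemetry outcome

# Cross-fitting keeps the nuisance models' overfitting out of the estimate.
t_hat = cross_val_predict(RandomForestRegressor(), X, T, cv=2)
y_hat = cross_val_predict(RandomForestRegressor(), X, Y, cv=2)
res_t, res_y = T - t_hat, Y - y_hat

theta = (res_t @ res_y) / (res_t @ res_t)  # should land near the true 2.0
print(f"estimated effect of the feedback flag: {theta:.2f}")
```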
arXiv Detail & Related papers (2023-12-11T20:12:46Z)
- Unveiling Inclusiveness-Related User Feedback in Mobile Applications [7.212232917917022]
We leverage user feedback from Reddit, Google Play Store, and X for 50 of the most popular apps in the world.
Using a Socio-Technical Grounded Theory approach, we analyzed 22,000 posts across the three sources.
We organize our results in a taxonomy for inclusiveness comprising 5 major categories: Algorithmic Bias, Technology, Demography, Accessibility, and Other Human Values.
arXiv Detail & Related papers (2023-11-02T04:05:46Z)
- UltraFeedback: Boosting Language Models with Scaled AI Feedback [99.4633351133207]
We present UltraFeedback, a large-scale, high-quality, and diversified AI feedback dataset.
Our work validates the effectiveness of scaled AI feedback data in constructing strong open-source chat language models.
arXiv Detail & Related papers (2023-10-02T17:40:01Z)
- Mining Reddit Data to Elicit Students' Requirements During COVID-19 Pandemic [2.5475486924467075]
We propose a shift in requirements elicitation, focusing on gathering feedback related to the problem itself.
We conducted a case study on student requirements during the COVID-19 pandemic in a higher education institution.
We employed multiple machine-learning and natural language processing techniques to identify requirement sentences.
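One illustrative way to flag requirement-like sentences in forum posts; the study combined several ML/NLP techniques, and the zero-shot model and candidate labels below are assumptions, not the paper's setup.
```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

posts = [
    "The university should record all lectures and post them online.",
    "I really miss seeing my classmates on campus.",
]
for post in posts:
    result = classifier(post, candidate_labels=["requirement", "not a requirement"])
    print(result["labels"][0], "->", post)  # top-scoring label first
```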
arXiv Detail & Related papers (2023-07-26T14:26:16Z)
- Continually Improving Extractive QA via Human Feedback [59.49549491725224]
We study continually improving an extractive question answering (QA) system via human user feedback.
We conduct experiments involving thousands of user interactions under diverse setups to broaden the understanding of learning from feedback over time.
arXiv Detail & Related papers (2023-05-21T14:35:32Z)
- Learnware: Small Models Do Big [69.88234743773113]
The prevailing big model paradigm has achieved impressive results in natural language processing and computer vision applications, but it has become a serious source of carbon emissions and leaves many practical issues unaddressed.
This article offers an overview of the learnware paradigm, which aims to spare users from building machine learning models from scratch, in the hope of reusing small models to do things even beyond their original purposes.
arXiv Detail & Related papers (2022-10-07T15:55:52Z)
- Simulating Bandit Learning from User Feedback for Extractive Question Answering [51.97943858898579]
We study learning from user feedback for extractive question answering by simulating feedback using supervised data.
We show that systems initially trained on a small number of examples can dramatically improve given feedback from users on model-predicted answers.
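A toy version of the simulation idea: the policy observes only a reward on its own sampled prediction, with rewards computed from supervised labels. Binary classification stands in for extractive QA here to keep the sketch short; all data is synthetic.
```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 5000, 10
X = rng.normal(size=(n, d))
gold = (X @ rng.normal(size=d) > 0).astype(int)  # supervised labels

w = np.zeros(d)   # logistic policy weights
lr = 0.5
correct = 0.0
for t, (x, y) in enumerate(zip(X, gold), start=1):
    p = 1 / (1 + np.exp(-x @ w))
    a = int(rng.random() < p)     # policy samples an answer
    r = 1.0 if a == y else 0.0    # simulated user feedback on that answer
    w += lr * r * (a - p) * x     # REINFORCE-style update from reward only
    correct += r
    if t % 1000 == 0:
        print(f"after {t} interactions: reward rate {correct / t:.2f}")
```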
arXiv Detail & Related papers (2022-03-18T17:47:58Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples provided by instructors.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier 1 university.
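A sketch of the prototype idea behind this entry: average a few embedded, instructor-labelled examples per feedback class into class prototypes and label new submissions by the nearest prototype. The TF-IDF embedding is a stand-in for the paper's learned transformer encoder, and all examples are invented.
```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

support = {  # a few labelled submissions per feedback class (assumed)
    "off-by-one": ["for i in range(len(xs) - 1): total += xs[i]"],
    "missing return": ["def add(a, b): c = a + b"],
}
texts = [t for examples in support.values() for t in examples]
vec = TfidfVectorizer(token_pattern=r"\S+").fit(texts)

# Class prototype = mean embedding of that class's support examples.
prototypes = {label: np.asarray(vec.transform(ex).mean(axis=0)).ravel()
              for label, ex in support.items()}

query = vec.transform(["def mul(a, b): c = a * b"]).toarray().ravel()
pred = min(prototypes, key=lambda lbl: np.linalg.norm(prototypes[lbl] - query))
print(pred)  # -> "missing return" on this toy data
```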
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.