On the Automated Processing of User Feedback
- URL: http://arxiv.org/abs/2407.15519v1
- Date: Mon, 22 Jul 2024 10:13:13 GMT
- Title: On the Automated Processing of User Feedback
- Authors: Walid Maalej, Volodymyr Biryuk, Jialiang Wei, Fabian Panse
- Abstract summary: User feedback is an increasingly important source of information for requirements engineering, user interface design, and software engineering.
To tap the full potential of feedback, two main challenges need to be solved.
First, vendors must cope with a large quantity of feedback data, which is hard to manage manually.
Second, vendors must also cope with the varying quality of feedback, as some items might be uninformative, repetitive, or simply wrong.
- Score: 7.229732269884235
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: User feedback is becoming an increasingly important source of information for requirements engineering, user interface design, and software engineering in general. Nowadays, user feedback is largely available and easily accessible in social media, product forums, or app stores. Over the last decade, research has shown that user feedback can help software teams: a) better understand how users actually use specific product features and components, b) identify, reproduce, and fix defects faster, and c) get inspiration for improvements or new features. However, to tap the full potential of feedback, two main challenges need to be solved. First, software vendors must cope with a large quantity of feedback data, which is hard to manage manually. Second, vendors must also cope with the varying quality of feedback, as some items might be uninformative, repetitive, or simply wrong. This chapter summarises various data mining, machine learning, and natural language processing techniques, including recent Large Language Models, and organises them into a pipeline for coping with the quantity and quality challenges. We guide researchers and practitioners through implementing effective, actionable analysis of user feedback for software and requirements engineering.
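To make the quantity and quality challenges concrete, here is a minimal sketch of an automated feedback-triage step in the spirit of the pipeline the chapter describes; the category labels, toy training data, and classic TF-IDF-plus-logistic-regression model are illustrative assumptions, not the chapter's prescribed setup.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled feedback items; a real corpus would come from
# app stores, product forums, or social media and be far larger.
feedback = [
    "App crashes every time I open the camera",    # bug report
    "Please add a dark mode",                      # feature request
    "Love it!!!",                                  # uninformative
    "Login fails after the latest update",         # bug report
    "Would be great to export my data as CSV",     # feature request
    "best app ever",                               # uninformative
]
labels = ["bug", "feature", "noise", "bug", "feature", "noise"]

# Quantity challenge: a classifier triages feedback automatically at scale.
# Quality challenge: the "noise" class filters out uninformative items.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(feedback, labels)

print(clf.predict(["The app freezes when I upload a photo"]))
# likely ['bug'] given the toy training data
```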
Related papers
- AllHands: Ask Me Anything on Large-scale Verbatim Feedback via Large Language Models [34.82568259708465]
Allhands is an innovative analytic framework designed for large-scale feedback analysis through a natural language interface.
AllHands leverages large language models (LLMs) to enhance accuracy, robustness, generalization, and user-friendliness.
Allhands delivers comprehensive multi-modal responses, including text, code, tables, and images.
arXiv Detail & Related papers (2024-03-22T12:13:16Z)
- Improving the Validity of Automatically Generated Feedback via Reinforcement Learning [50.067342343957876]
We propose a framework for feedback generation that optimizes both correctness and alignment using reinforcement learning (RL).
Specifically, we use GPT-4's annotations to create preferences over feedback pairs in an augmented dataset for training via direct preference optimization (DPO).
arXiv Detail & Related papers (2024-03-02T20:25:50Z)
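For context, here is a minimal sketch of the standard DPO objective referenced in the entry above, assuming per-sequence log-probabilities from the trained policy and a frozen reference model; this is the textbook loss, not necessarily the paper's exact implementation.

```python
import torch.nn.functional as F

def dpo_loss(pi_chosen_logps, pi_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO over preference pairs (e.g., feedback pairs ranked by GPT-4
    annotations). Inputs are per-example sequence log-probabilities;
    beta controls how far the policy may drift from the reference."""
    chosen_margin = pi_chosen_logps - ref_chosen_logps
    rejected_margin = pi_rejected_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()
```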
- Extracting Self-Consistent Causal Insights from Users Feedback with LLMs and In-context Learning [11.609805521822878]
Microsoft Windows Feedback Hub is designed to receive customer feedback on a wide variety of subjects including critical topics such as power and battery.
To better understand and triage issues, we leverage Double Machine Learning (DML) to associate users' feedback with telemetry signals.
Our approach is able to extract previously known issues, uncover new bugs, and identify sequences of events that lead to a bug.
arXiv Detail & Related papers (2023-12-11T20:12:46Z)
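A rough sketch of the Double Machine Learning step mentioned above, using the econml library on synthetic data; the variable meanings (telemetry signals as controls, a binary feedback condition as treatment, a numeric outcome metric) are assumptions for illustration, not the paper's actual setup.

```python
import numpy as np
from econml.dml import LinearDML
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                 # telemetry signals (e.g., battery stats)
T = rng.integers(0, 2, size=500)              # feedback item mentions the issue (0/1)
Y = 2.0 * T + X[:, 0] + rng.normal(size=500)  # synthetic outcome metric

# DML partials out the telemetry confounders with flexible nuisance models
# before estimating how the feedback-reported condition relates to the outcome.
est = LinearDML(model_y=RandomForestRegressor(),
                model_t=RandomForestClassifier(),
                discrete_treatment=True, random_state=0)
est.fit(Y, T, X=X)
print(est.effect(X).mean())                   # average estimated effect
```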
- Inclusiveness Matters: A Large-Scale Analysis of User Feedback [7.8788463395442045]
We leverage user feedback from three popular online sources: Reddit, the Google Play Store, and Twitter, for 50 of the most popular apps in the world.
Using a Socio-Technical Grounded Theory approach, we analyzed 23,107 posts across the three sources and identified 1,211 inclusiveness-related posts.
Our study provides an in-depth view of inclusiveness-related user feedback from the most popular apps and online sources.
arXiv Detail & Related papers (2023-11-02T04:05:46Z)
- UltraFeedback: Boosting Language Models with Scaled AI Feedback [99.4633351133207]
We present UltraFeedback, a large-scale, high-quality, and diversified AI feedback dataset.
Our work validates the effectiveness of scaled AI feedback data in constructing strong open-source chat language models.
arXiv Detail & Related papers (2023-10-02T17:40:01Z)
- Mining Reddit Data to Elicit Students' Requirements During COVID-19 Pandemic [2.5475486924467075]
We propose a shift in requirements elicitation, focusing on gathering feedback related to the problem itself.
We conducted a case study on student requirements during the COVID-19 pandemic in a higher education institution.
We employed multiple machine-learning and natural language processing techniques to identify requirement sentences.
arXiv Detail & Related papers (2023-07-26T14:26:16Z)
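One plausible way to identify requirement sentences, as the entry above describes, is zero-shot classification with an off-the-shelf NLI model; the model choice, candidate labels, and example sentences are assumptions, not the paper's reported method.

```python
from transformers import pipeline

# Zero-shot tagging of requirement sentences mined from Reddit posts.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

sentences = [
    "The university should record all lectures and post them online.",
    "I had pizza for lunch today.",
]
for s in sentences:
    result = classifier(s, candidate_labels=["requirement", "not a requirement"])
    print(result["labels"][0], "-", s)  # top-scoring label first
```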
- A large language model-assisted education tool to provide feedback on open-ended responses [2.624902795082451]
We present a tool that uses large language models (LLMs), guided by instructor-defined criteria, to automate feedback on students' open-ended responses.
Our tool delivers rapid personalized feedback, enabling students to quickly test their knowledge and identify areas for improvement.
arXiv Detail & Related papers (2023-07-25T19:49:55Z)
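Here is a minimal sketch of criterion-guided LLM feedback in the spirit of the tool above, assuming the OpenAI chat completions API; the criteria, prompt wording, and model name are illustrative placeholders.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical instructor-defined criteria that steer the feedback.
CRITERIA = """\
1. Mentions conservation of energy.
2. Distinguishes kinetic from potential energy.
"""

def give_feedback(question: str, answer: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You are a grader. Give short, constructive feedback "
                        f"against these criteria:\n{CRITERIA}"},
            {"role": "user",
             "content": f"Question: {question}\nStudent answer: {answer}"},
        ],
    )
    return response.choices[0].message.content
```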
- Continually Improving Extractive QA via Human Feedback [59.49549491725224]
We study continually improving an extractive question answering (QA) system via human user feedback.
We conduct experiments involving thousands of user interactions under diverse setups to broaden the understanding of learning from feedback over time.
arXiv Detail & Related papers (2023-05-21T14:35:32Z)
- Learnware: Small Models Do Big [69.88234743773113]
The prevailing big-model paradigm, which has achieved impressive results in natural language processing and computer vision applications, has not yet addressed those issues, while itself becoming a serious source of carbon emissions.
This article offers an overview of the learnware paradigm, which aims to free users from building machine learning models from scratch, with the hope of reusing small models to do things even beyond their original purposes.
arXiv Detail & Related papers (2022-10-07T15:55:52Z)
- Simulating Bandit Learning from User Feedback for Extractive Question Answering [51.97943858898579]
We study learning from user feedback for extractive question answering by simulating feedback using supervised data.
We show that systems initially trained on a small number of examples can dramatically improve given feedback from users on model-predicted answers.
arXiv Detail & Related papers (2022-03-18T17:47:58Z)
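A compact sketch of one simulated bandit update in the spirit of the entry above: the model predicts an answer span, supervised data stands in for the user's binary feedback on that prediction, and a REINFORCE-style step follows; the policy interface and reward values are assumptions.

```python
import torch

def simulated_bandit_step(policy, optimizer, encoded_question, gold_span_idx):
    """One online update from simulated feedback. `policy` is assumed to map
    an encoded question to logits over candidate answer spans."""
    logits = policy(encoded_question)
    dist = torch.distributions.Categorical(logits=logits)
    action = dist.sample()                                    # model-predicted span
    reward = 1.0 if action.item() == gold_span_idx else -0.1  # simulated feedback
    loss = -reward * dist.log_prob(action)                    # policy-gradient objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward
```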
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback on student code for a new programming question from just a few instructor-annotated examples.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
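ProtoTransformer builds on prototypical networks; as a generic sketch of that few-shot classification step (not the paper's exact architecture), each feedback class is represented by the mean embedding of its few instructor-labeled support examples, and a query solution is assigned to the nearest prototype.

```python
import torch

def prototype_classify(support_emb, support_labels, query_emb, num_classes):
    """support_emb: (N, d) embeddings of instructor-annotated examples,
    support_labels: (N,) class ids, query_emb: (Q, d) student solutions.
    Embeddings are assumed to come from a shared encoder, e.g. a code
    transformer."""
    prototypes = torch.stack([
        support_emb[support_labels == c].mean(dim=0) for c in range(num_classes)
    ])
    distances = torch.cdist(query_emb, prototypes)  # (Q, num_classes)
    return distances.argmin(dim=1)                  # nearest-prototype class
```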
This list is automatically generated from the titles and abstracts of the papers on this site.