User Feedback in Continuous Software Engineering: Revealing the State-of-Practice
- URL: http://arxiv.org/abs/2410.07459v1
- Date: Wed, 9 Oct 2024 21:59:16 GMT
- Title: User Feedback in Continuous Software Engineering: Revealing the State-of-Practice
- Authors: Anastasiia Tkalich, Eriks Klotins, Tor Sporsem, Viktoria Stray, Nils Brede Moe, Astri Barbala
- Abstract summary: Continuous software engineering (CSE) practices require a continuous feedback loop with input from customers and end-users.
The literature describing how practitioners work with user feedback in CSE is limited.
We conduct a qualitative survey and report analysis from 21 interviews in 13 product development companies.
- Score: 3.151810331262745
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Context: Organizations opt for continuous delivery of incremental updates to deal with uncertainty and minimize waste. However, applying continuous software engineering (CSE) practices requires a continuous feedback loop with input from customers and end-users. Challenges: It becomes increasingly challenging to apply traditional requirements elicitation and validation techniques with ever-shrinking software delivery cycles. At the same time, frequent deliveries generate an abundance of usage data and telemetry informing engineering teams of end-user behavior. The literature describing how practitioners work with user feedback in CSE is limited. Objectives: We aim to explore the state of practice related to the utilization of user feedback in CSE: specifically, what practices are used, how, and the shortcomings of these practices. Method: We conduct a qualitative survey and report analysis from 21 interviews in 13 product development companies. We apply thematic and cross-case analysis to interpret the data. Results: Based on our earlier work, we suggest a conceptual model of how user feedback is utilized in CSE. We further report the identified challenges with the continuous collection and analysis of user feedback and identify implications for practice. Conclusions: Companies use a combination of qualitative and quantitative methods to infer end-user preferences. At the same time, continuous collection, analysis, interpretation, and use of data in decisions are problematic. The challenges pertain to selecting the right metrics and analysis techniques, resource allocation, and difficulties in accessing vaguely defined user groups. Our advice to practitioners in CSE is to ensure sufficient resources and effort for interpreting the feedback, which can be facilitated by telemetry dashboards.
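The conclusion's pointer to telemetry dashboards can be made concrete with a small sketch. The event schema, feature names, and metrics below are illustrative assumptions, not taken from the paper:

```python
# Minimal sketch: rolling raw usage telemetry up into dashboard-ready metrics.
# Event schema, feature names, and metrics are illustrative assumptions.
from collections import Counter
from dataclasses import dataclass

@dataclass
class UsageEvent:
    user_id: str
    feature: str   # which product feature was exercised
    outcome: str   # e.g. "completed" or "abandoned"

def summarize(events: list[UsageEvent]) -> dict:
    """Aggregate events into per-feature usage counts and completion rates."""
    usage = Counter(e.feature for e in events)
    completed = Counter(e.feature for e in events if e.outcome == "completed")
    return {
        feature: {
            "events": n,
            "unique_users": len({e.user_id for e in events if e.feature == feature}),
            "completion_rate": completed[feature] / n,
        }
        for feature, n in usage.items()
    }

events = [
    UsageEvent("u1", "export", "completed"),
    UsageEvent("u2", "export", "abandoned"),
    UsageEvent("u1", "search", "completed"),
]
for feature, stats in summarize(events).items():
    print(feature, stats)
```

Aggregates like these are the kind of signal a dashboard could surface so that teams spend their interpretation effort on trends rather than raw events.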
Related papers
- Transit Pulse: Utilizing Social Media as a Source for Customer Feedback and Information Extraction with Large Language Model [12.6020349733674]
We propose a novel approach to extracting and analyzing transit-related information.
Our method employs Large Language Models (LLMs), specifically Llama 3, for a streamlined analysis.
Our results demonstrate the potential of LLMs to transform social media data analysis in the public transit domain.
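As a rough illustration of such a pipeline, the sketch below classifies transit posts by prompting an LLM and parsing structured output; `call_llm` is a hypothetical stand-in for an actual Llama 3 inference call, and the prompt and label set are assumptions:

```python
# Rough sketch of LLM-based extraction of transit feedback from social posts.
# `call_llm` is a hypothetical stand-in for a real Llama 3 inference call.
import json

PROMPT = (
    "Classify the transit-related post below. Respond with JSON containing "
    '"topic" (delay, crowding, safety, fare, other) and "sentiment" '
    "(positive, negative, neutral).\n\nPost: {post}"
)

def call_llm(prompt: str) -> str:
    # Placeholder: swap in a real model call (e.g. a local inference server).
    return '{"topic": "delay", "sentiment": "negative"}'

def extract_feedback(posts: list[str]) -> list[dict]:
    results = []
    for post in posts:
        raw = call_llm(PROMPT.format(post=post))
        try:
            results.append(json.loads(raw))
        except json.JSONDecodeError:
            # Fail safe: malformed model output falls back to a neutral label.
            results.append({"topic": "other", "sentiment": "neutral"})
    return results

print(extract_feedback(["Train 4 stuck at Central for 20 minutes again."]))
```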
arXiv Detail & Related papers (2024-10-19T07:08:40Z)
- Preliminary Insights on Industry Practices for Addressing Fairness Debt [4.546982900370235]
This study explores how software professionals identify and address biases in AI systems within the software industry.
Our paper presents initial evidence on addressing fairness debt and provides a foundation for developing structured guidelines to manage fairness-related issues in AI systems.
arXiv Detail & Related papers (2024-09-04T04:18:42Z)
- Rethinking the Evaluation of Dialogue Systems: Effects of User Feedback on Crowdworkers and LLMs [57.16442740983528]
In ad-hoc retrieval, evaluation relies heavily on user actions, including implicit feedback.
The role of user feedback in annotators' assessment of turns in a conversational setting has been little studied.
We focus on how the evaluation of task-oriented dialogue systems (TDSs) is affected by considering user feedback, explicit or implicit, as provided through the follow-up utterance of a turn being evaluated.
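A toy sketch of the idea of treating a follow-up utterance as implicit feedback on the preceding turn; the keyword cues and score adjustment are illustrative assumptions, not the paper's protocol:

```python
# Toy sketch: a user's follow-up utterance as implicit feedback on the
# previous system turn. Cue lists and weights are illustrative assumptions.
NEGATIVE_CUES = ("that's not what", "no,", "wrong", "i already said")
POSITIVE_CUES = ("thanks", "perfect", "great", "that works")

def feedback_signal(follow_up: str) -> int:
    text = follow_up.lower()
    if any(cue in text for cue in NEGATIVE_CUES):
        return -1
    if any(cue in text for cue in POSITIVE_CUES):
        return 1
    return 0

def adjusted_rating(base_rating: float, follow_up: str, weight: float = 0.5) -> float:
    """Blend an annotator's turn rating with implicit user feedback."""
    return base_rating + weight * feedback_signal(follow_up)

print(adjusted_rating(3.0, "No, that's not what I asked for."))  # -> 2.5
```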
arXiv Detail & Related papers (2024-04-19T16:45:50Z)
- Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully-autonomous machine behavior is often beyond ethical affordances.
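A minimal sketch of the mediation idea, assuming a simple confidence-threshold rule for when the assistant recommends its own action versus deferring to the human; the threshold and actions are hypothetical:

```python
# Minimal sketch of a decision mediator between an (oracle-like) model and an
# imperfect human: act only when confident, otherwise defer to the human.
# The threshold and action names are illustrative assumptions.
def mediate(model_confidence: float, model_action: str, human_action: str,
            defer_threshold: float = 0.9) -> str:
    if model_confidence >= defer_threshold:
        return model_action   # confident: surface the model's recommendation
    return human_action       # uncertain: fall back to the human's choice

print(mediate(0.95, "order_mri", "order_xray"))  # -> order_mri
print(mediate(0.60, "order_mri", "order_xray"))  # -> order_xray
```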
arXiv Detail & Related papers (2023-10-28T05:59:43Z)
- Continually Improving Extractive QA via Human Feedback [59.49549491725224]
We study continually improving an extractive question answering (QA) system via human user feedback.
We conduct experiments involving thousands of user interactions under diverse setups to broaden the understanding of learning from feedback over time.
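One simple way to picture learning from such feedback over time is a bandit-style reweighting of answer candidates; the scoring scheme and update rule below are illustrative assumptions, not the paper's method:

```python
# Toy sketch of improving an extractive QA component from user feedback:
# candidates get a learned per-source bonus updated by thumbs-up/down signals.
from collections import defaultdict

bonus = defaultdict(float)  # learned per-source adjustment (assumption)

def pick_answer(candidates):
    """candidates: list of (answer_text, source_id, base_score)."""
    return max(candidates, key=lambda c: c[2] + bonus[c[1]])

def record_feedback(source_id: str, helpful: bool, lr: float = 0.1) -> None:
    bonus[source_id] += lr if helpful else -lr

candidates = [("Paris", "wiki", 0.80), ("Lyon", "forum", 0.82)]
print(pick_answer(candidates)[0])        # Lyon initially wins on base score
record_feedback("forum", helpful=False)
record_feedback("wiki", helpful=True)
print(pick_answer(candidates)[0])        # feedback shifts the choice to Paris
```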
arXiv Detail & Related papers (2023-05-21T14:35:32Z)
- Efficient Real-world Testing of Causal Decision Making via Bayesian Experimental Design for Contextual Optimisation [12.37745209793872]
We introduce a model-agnostic framework for gathering data to evaluate and improve contextual decision making.
Our method is used for the data-efficient evaluation of the regret of past treatment assignments.
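The regret being evaluated can be sketched as the gap between the outcome of each chosen treatment and the best available treatment in that context; the data and outcome estimates below are illustrative assumptions:

```python
# Sketch of the regret of past treatment assignments: reward lost relative to
# the best treatment per context. Data and estimates are illustrative.
def regret(assignments, outcome_estimates):
    """
    assignments: list of (context, chosen_treatment)
    outcome_estimates: dict (context, treatment) -> estimated outcome
    """
    total = 0.0
    for context, chosen in assignments:
        best = max(v for (c, _), v in outcome_estimates.items() if c == context)
        total += best - outcome_estimates[(context, chosen)]
    return total

estimates = {("young", "A"): 0.6, ("young", "B"): 0.7,
             ("old", "A"): 0.5, ("old", "B"): 0.3}
print(regret([("young", "A"), ("old", "B")], estimates))  # -> ~0.3 (0.1 + 0.2)
```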
arXiv Detail & Related papers (2022-07-12T01:20:11Z)
- Explainable Predictive Process Monitoring: A User Evaluation [62.41400549499849]
Explainability is motivated by the lack of transparency of black-box Machine Learning approaches.
We carry out a user evaluation of explanation approaches for Predictive Process Monitoring.
arXiv Detail & Related papers (2022-02-15T22:24:21Z)
- Using Voice and Biofeedback to Predict User Engagement during Requirements Interviews [11.277063517143565]
We propose to utilize biometric data, in terms of physiological and voice features, to complement interviews with information about user engagement.
We evaluate our approach by interviewing users while gathering their physiological data using an Empatica E4 wristband.
Our results show that we can predict users' engagement by training supervised machine learning algorithms on biometric data.
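A minimal sketch of that supervised-learning step, assuming scikit-learn and a hypothetical three-feature biometric representation (the paper's actual physiological and voice feature set is richer):

```python
# Sketch of predicting engagement from biometric features with a supervised
# classifier. Feature names and the tiny synthetic dataset are illustrative
# assumptions, not the paper's data.
from sklearn.ensemble import RandomForestClassifier

# rows: [mean_heart_rate, skin_conductance, voice_pitch_variance]
X = [[72, 0.31, 12.0], [88, 0.55, 25.0], [65, 0.22, 8.5], [91, 0.60, 30.0]]
y = [0, 1, 0, 1]  # 0 = low engagement, 1 = high engagement

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[85, 0.50, 22.0]]))  # likely [1] (high) on this toy data
```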
arXiv Detail & Related papers (2021-04-06T10:34:36Z)
- Advances and Challenges in Conversational Recommender Systems: A Survey [133.93908165922804]
We provide a systematic review of the techniques used in current conversational recommender systems (CRSs).
We summarize the key challenges of developing CRSs into five directions.
These research directions involve multiple research fields like information retrieval (IR), natural language processing (NLP), and human-computer interaction (HCI).
arXiv Detail & Related papers (2021-01-23T08:53:15Z)
- Online Learning Demands in Max-min Fairness [91.37280766977923]
We describe mechanisms for the allocation of a scarce resource among multiple users in a way that is efficient, fair, and strategy-proof.
The mechanism is repeated for multiple rounds and a user's requirements can change on each round.
At the end of each round, users provide feedback about the allocation they received, enabling the mechanism to learn user preferences over time.
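A classic way to realize max-min fairness within a round is progressive filling; the sketch below allocates capacity that way, with fixed demands standing in for the preferences the mechanism would learn from feedback over rounds:

```python
# Sketch of a max-min fair allocation via progressive filling: repeatedly give
# every unsatisfied user an equal share of what remains, capped by demand.
# Demands stand in for (possibly learned) user requirements; the feedback
# learning step across rounds is omitted.
def max_min_allocate(demands: dict, capacity: float) -> dict:
    alloc = {u: 0.0 for u in demands}
    active = {u for u, d in demands.items() if d > 0}
    remaining = capacity
    while active and remaining > 1e-12:
        share = remaining / len(active)
        for u in list(active):
            give = min(share, demands[u] - alloc[u])
            alloc[u] += give
            remaining -= give
            if alloc[u] >= demands[u] - 1e-12:
                active.remove(u)  # demand met; stop allocating to this user
    return alloc

print(max_min_allocate({"a": 2, "b": 8, "c": 8}, capacity=12))
# -> {'a': 2.0, 'b': 5.0, 'c': 5.0}
```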
arXiv Detail & Related papers (2020-12-15T22:15:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.