Identifying Characteristics of the Agile Development Process That Impact
User Satisfaction
- URL: http://arxiv.org/abs/2306.03483v1
- Date: Tue, 6 Jun 2023 08:09:14 GMT
- Title: Identifying Characteristics of the Agile Development Process That Impact
User Satisfaction
- Authors: Minshun Yang, Seiji Sato, Hironori Washizaki, Yoshiaki Fukazawa,
Juichi Takahashi
- Abstract summary: The purpose of this study is to identify the characteristics of Agile development processes that impact user satisfaction.
Although no metrics conclusively indicate improved user satisfaction, the motivation of the development team, the ability to set appropriate work units, the appropriateness of work rules, and the improvement of code maintainability should be considered, as they are correlated with improved user satisfaction.
- Score: 3.3748063434734843
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The purpose of this study is to identify the characteristics of Agile
development processes that impact user satisfaction. We used user reviews of
OSS smartphone apps and various data from version control systems to examine
the relationships, especially time-series correlations, between user
satisfaction and development metrics that are expected to be related to user
satisfaction. Although no metrics conclusively indicate improved user
satisfaction, the motivation of the development team, the ability to set
appropriate work units, the appropriateness of work rules, and the improvement
of code maintainability should be considered, as they are correlated with
improved user satisfaction. In contrast, changes in the release frequency and
workload are not correlated with user satisfaction.
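The study's core analysis, a time-series correlation between user satisfaction and development metrics, can be sketched as follows. The data, the choice of mean review rating and commit count as the two series, and the function name are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def lagged_correlation(satisfaction, metric, max_lag=3):
    """Pearson correlation between a user-satisfaction series and a
    development-metric series at several time lags (metric leads)."""
    results = {}
    for lag in range(max_lag + 1):
        if lag == 0:
            a, b = satisfaction, metric
        else:
            a, b = satisfaction[lag:], metric[:-lag]
        results[lag] = float(np.corrcoef(a, b)[0, 1])
    return results

# Synthetic monthly series: mean review rating vs. commit count
rating = np.array([3.1, 3.0, 3.4, 3.6, 3.5, 3.9, 4.0, 3.8])
commits = np.array([20, 25, 40, 45, 38, 60, 62, 55])
print(lagged_correlation(rating, commits))
```

A positive correlation at lag k > 0 would suggest the metric's changes precede changes in satisfaction, which is the kind of lead-lag relationship the study examines.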
Related papers
- Are We Solving a Well-Defined Problem? A Task-Centric Perspective on Recommendation Tasks [46.705107776194616]
We analyze RecSys task formulations, emphasizing key components such as input-output structures, temporal dynamics, and candidate item selection.
We explore the balance between task specificity and model generalizability, highlighting how well-defined task formulations serve as the foundation for robust evaluation and effective solution development.
arXiv Detail & Related papers (2025-03-27T06:10:22Z)
- Human-AI Interaction and User Satisfaction: Empirical Evidence from Online Reviews of AI Products [0.0]
This study analyzes over 100,000 user reviews of AI-related products from G2, a leading review platform for business software and services.
We identify seven core HAI dimensions and examine their coverage and sentiment within the reviews.
We find that the sentiment on four HAI dimensions-adaptability, customization, error recovery, and security-is positively associated with overall user satisfaction.
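The reported positive association can be illustrated with a minimal regression sketch. The data are synthetic and the effect sizes are assumptions for illustration only; the four dimension names come from the abstract:

```python
import numpy as np

# Synthetic illustration: per-review sentiment scores on four HAI
# dimensions (adaptability, customization, error recovery, security)
# versus an overall satisfaction rating. The data-generating weights
# are assumptions, not estimates from the paper.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 4))           # dimension sentiment in [-1, 1]
true_w = np.array([0.6, 0.4, 0.5, 0.3])         # assumed positive effects
y = 3.0 + X @ true_w + rng.normal(0, 0.1, 200)  # overall rating

# Ordinary least squares with an intercept column: does each
# dimension's sentiment associate positively with satisfaction?
X1 = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
print(dict(zip(["intercept", "adaptability", "customization",
                "error_recovery", "security"], coef.round(2))))
```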
arXiv Detail & Related papers (2025-03-23T06:16:49Z)
- Interactive Agents to Overcome Ambiguity in Software Engineering [61.40183840499932]
AI agents are increasingly being deployed to automate tasks, often based on ambiguous and underspecified user instructions.
Making unwarranted assumptions and failing to ask clarifying questions can lead to suboptimal outcomes.
We study the ability of LLM agents to handle ambiguous instructions in interactive code-generation settings by evaluating proprietary and open-weight models.
arXiv Detail & Related papers (2025-02-18T17:12:26Z)
- Quantifying User Coherence: A Unified Framework for Cross-Domain Recommendation Analysis [69.37718774071793]
This paper introduces novel information-theoretic measures for understanding recommender systems.
We evaluate 7 recommendation algorithms across 9 datasets, revealing the relationships between our measures and standard performance metrics.
arXiv Detail & Related papers (2024-10-03T13:02:07Z)
- Rethinking the Evaluation of Dialogue Systems: Effects of User Feedback on Crowdworkers and LLMs [57.16442740983528]
In ad-hoc retrieval, evaluation relies heavily on user actions, including implicit feedback.
The role of user feedback in annotators' assessments of conversational turns has been little studied.
We focus on how the evaluation of task-oriented dialogue systems (TDSs) is affected by considering user feedback, explicit or implicit, as provided through the follow-up utterance of the turn being evaluated.
arXiv Detail & Related papers (2024-04-19T16:45:50Z)
- CAUSE: Counterfactual Assessment of User Satisfaction Estimation in Task-Oriented Dialogue Systems [60.27663010453209]
We leverage large language models (LLMs) to generate satisfaction-aware counterfactual dialogues.
We gather human annotations to ensure the reliability of the generated samples.
Our results shed light on the need for data augmentation approaches for user satisfaction estimation in TOD systems.
arXiv Detail & Related papers (2024-03-27T23:45:31Z)
- Tell Me More! Towards Implicit User Intention Understanding of Language Model Driven Agents [110.25679611755962]
Current language model-driven agents often lack mechanisms for effective user participation, which is crucial given the vagueness commonly found in user instructions.
We introduce Intention-in-Interaction (IN3), a novel benchmark designed to inspect users' implicit intentions through explicit queries.
We empirically train Mistral-Interact, a powerful model that proactively assesses task vagueness, inquires user intentions, and refines them into actionable goals.
arXiv Detail & Related papers (2024-02-14T14:36:30Z)
- Large Language Model Meets Graph Neural Network in Knowledge Distillation [7.686812700685084]
We propose a temporal-aware framework for predicting Quality of Service (QoS) in service-oriented architectures.
Our proposed TOGCL framework significantly outperforms state-of-the-art methods across multiple metrics, achieving improvements of up to 38.80%.
arXiv Detail & Related papers (2024-02-08T18:33:21Z)
- The Impact of Performance Expectancy, Workload, Risk, and Satisfaction on Trust in ChatGPT: Cross-sectional Survey Analysis [1.9580473532948401]
This study investigated how perceived workload, satisfaction, performance expectancy, and risk-benefit perception influenced users' trust in Chat Generative Pre-Trained Transformer (ChatGPT).
A semi-structured, web-based survey was conducted among adults in the United States who actively use ChatGPT at least once a month.
arXiv Detail & Related papers (2023-10-20T16:06:11Z)
- Diagnosis, Feedback, Adaptation: A Human-in-the-Loop Framework for Test-Time Policy Adaptation [20.266695694005943]
Policies often fail due to distribution shift -- changes in the state and reward that occur when a policy is deployed in new environments.
Data augmentation can increase robustness by making the model invariant to task-irrelevant changes in the agent's observation.
We propose an interactive framework to leverage feedback directly from the user to identify personalized task-irrelevant concepts.
arXiv Detail & Related papers (2023-07-12T17:55:08Z)
- Schema-Guided User Satisfaction Modeling for Task-Oriented Dialogues [38.251046341024455]
We propose SG-USM, a novel schema-guided user satisfaction modeling framework.
It explicitly models the degree to which the user's preferences regarding the task attributes are fulfilled by the system for predicting the user's satisfaction level.
arXiv Detail & Related papers (2023-05-26T10:19:30Z)
- Assisting Human Decisions in Document Matching [52.79491990823573]
We devise a proxy matching task that allows us to evaluate which kinds of assistive information improve decision makers' performance.
We find that providing black-box model explanations reduces users' accuracy on the matching task.
On the other hand, custom methods that are designed to closely attend to some task-specific desiderata are found to be effective in improving user performance.
arXiv Detail & Related papers (2023-02-16T17:45:20Z)
- Personalizing Intervened Network for Long-tailed Sequential User Behavior Modeling [66.02953670238647]
Tail users receive significantly lower-quality recommendations than head users after joint training.
A model trained separately on tail users still achieves inferior results due to limited data.
We propose a novel approach that significantly improves the recommendation performance of the tail users.
arXiv Detail & Related papers (2022-08-19T02:50:19Z)
- Personalization in Human-AI Teams: Improving the Compatibility-Accuracy Tradeoff [0.0]
We study the trade-off between improving the system's accuracy following an update and the compatibility of the updated system with prior user experience.
We show that by personalizing the loss function to specific users, in some cases it is possible to improve the compatibility-accuracy trade-off with respect to these users.
arXiv Detail & Related papers (2020-04-05T19:35:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.