Agile Retrospectives: What went well? What didn't go well? What should we do?
- URL: http://arxiv.org/abs/2504.11780v1
- Date: Wed, 16 Apr 2025 05:33:35 GMT
- Title: Agile Retrospectives: What went well? What didn't go well? What should we do?
- Authors: Maria Spichkova, Hina Lee, Kevin Iwan, Madeleine Zwart, Yuwon Yoon, Xiaohan Qin
- Abstract summary: In Agile/Scrum software development, the idea of retrospective meetings (retros) is one of the core elements of the project process. We present our work in progress focusing on two aspects: analysis of potential usage of generative AI for information interaction within retrospective meetings, and visualisation of retros' information to software development teams.
- Score: 1.363420481690495
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In Agile/Scrum software development, the idea of retrospective meetings (retros) is one of the core elements of the project process. In this paper, we present our work in progress focusing on two aspects: analysis of potential usage of generative AI for information interaction within retrospective meetings, and visualisation of retros' information to software development teams. We also present our prototype tool RetroAI++, focusing on retros-related functionalities.
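As a minimal, hypothetical illustration of the kind of retro information such a tool could organise and later visualise (this is not the RetroAI++ implementation, whose design is not detailed in the abstract), the sketch below groups free-text retro notes under the three classic questions and surfaces recurring keywords:

```python
from collections import Counter

# Hypothetical illustration only: group free-text retro notes under the three
# classic retrospective questions and count recurring keywords, the kind of
# summary a retro-support tool might later visualise.

RETRO_QUESTIONS = ("went_well", "did_not_go_well", "should_do")

def summarise_retro(notes):
    """notes: list of (question, text) tuples collected during a retrospective."""
    themes = {q: Counter() for q in RETRO_QUESTIONS}
    for question, text in notes:
        for word in text.lower().split():
            if len(word) > 3:                      # crude noise filter
                themes[question][word] += 1
    return {q: counter.most_common(3) for q, counter in themes.items()}

if __name__ == "__main__":
    sample = [
        ("went_well", "pair programming sessions improved code quality"),
        ("did_not_go_well", "sprint scope changed midway through the sprint"),
        ("should_do", "freeze sprint scope after planning"),
    ]
    print(summarise_retro(sample))
```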
Related papers
- GenAI-Enabled Backlog Grooming in Agile Software Projects: An Empirical Study [2.9073118555228232]
This study investigates whether a generative-AI (GenAI) assistant can automate backlog grooming in Agile software projects without sacrificing accuracy or transparency. We developed a Jira plug-in that embeds backlog issues in a vector database, detects duplicates via cosine similarity, and leverages the GPT-4o model to propose merges, deletions, or new issues.
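A minimal sketch of the duplicate-detection step described above, assuming issue texts have already been embedded; the embedding model, vector database, and GPT-4o integration of the plug-in are not reproduced here, and the toy vectors are stand-ins:

```python
import numpy as np

# Sketch of cosine-similarity duplicate detection over embedded backlog issues.
# The vectors below are random stand-ins for real issue embeddings.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_duplicates(issue_vectors: dict, threshold: float = 0.9):
    """Return pairs of issue keys whose embeddings exceed the similarity threshold."""
    keys = list(issue_vectors)
    pairs = []
    for i, k1 in enumerate(keys):
        for k2 in keys[i + 1:]:
            if cosine_similarity(issue_vectors[k1], issue_vectors[k2]) >= threshold:
                pairs.append((k1, k2))
    return pairs

rng = np.random.default_rng(0)
vecs = {"PROJ-1": rng.normal(size=8), "PROJ-2": rng.normal(size=8)}
vecs["PROJ-3"] = vecs["PROJ-1"] + 0.01 * rng.normal(size=8)   # near-duplicate of PROJ-1
print(find_duplicates(vecs))
```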
arXiv Detail & Related papers (2025-07-14T19:22:57Z)
- Advanced approach for Agile/Scrum Process: RetroAI++ [1.363420481690495]
We present our prototype tool RetroAI++, based on emerging intelligent technologies. We aim to automate and refine the practical application of Agile/Scrum processes within Sprint Planning and Retrospectives.
arXiv Detail & Related papers (2025-06-18T06:38:43Z)
- Playpen: An Environment for Exploring Learning Through Conversational Interaction [81.67330926729015]
We examine to what extent synthetic interaction in what we call Dialogue Games can provide a learning signal.
We investigate the effects of supervised fine-tuning on this data.
We release the framework and the baseline training setups in the hope that this can foster research in this promising new direction.
arXiv Detail & Related papers (2025-04-11T14:49:33Z)
- Memento No More: Coaching AI Agents to Master Multiple Tasks via Hints Internalization [56.674356045200696]
We propose a novel method to train AI agents to incorporate knowledge and skills for multiple tasks without the need for cumbersome note systems or prior high-quality demonstration data. Our approach employs an iterative process where the agent collects new experiences, receives corrective feedback from humans in the form of hints, and integrates this feedback into its weights. We demonstrate the efficacy of our approach by implementing it in a Llama-3-based agent which, after only a few rounds of feedback, outperforms the advanced models GPT-4o and DeepSeek-V3 on a set of tasks.
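A schematic sketch of the iterative loop described above, with stub helpers standing in for the agent, the human-provided hints, and the fine-tuning step; this is an assumed outline, not the paper's training code:

```python
# Schematic outline of an iterative hint-internalisation loop. All helpers are
# hypothetical stubs: the real method fine-tunes a Llama-3-based agent.

def collect_experiences(agent, tasks):
    # The agent attempts each task; `agent` is any callable here.
    return [(task, agent(task)) for task in tasks]

def gather_hints(experiences):
    # A human reviewer would attach a short corrective hint to weak attempts.
    return [(task, attempt, "hypothetical corrective hint") for task, attempt in experiences]

def finetune_on_hints(agent, hinted_experiences):
    # Stand-in for a fine-tuning step that folds the hints into the weights.
    return agent

def hint_internalisation_loop(agent, tasks, rounds=3):
    for _ in range(rounds):
        experiences = collect_experiences(agent, tasks)
        hinted = gather_hints(experiences)
        agent = finetune_on_hints(agent, hinted)
    return agent
```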
arXiv Detail & Related papers (2025-02-03T17:45:46Z)
- You're (Not) My Type -- Can LLMs Generate Feedback of Specific Types for Introductory Programming Tasks? [0.4779196219827508]
This paper aims to generate specific types of feedback for programming tasks using Large Language Models (LLMs). We revisit existing feedback to capture the specifics of the generated feedback, such as randomness, uncertainty, and degrees of variation. Results have implications for future feedback research with regard to, for example, feedback effects and learners' informational needs.
arXiv Detail & Related papers (2024-12-04T17:57:39Z)
- RepoGraph: Enhancing AI Software Engineering with Repository-level Code Graph [63.87660059104077]
We present RepoGraph, a plug-in module that manages a repository-level structure for modern AI software engineering solutions. RepoGraph substantially boosts the performance of all systems, leading to a new state-of-the-art among open-source frameworks.
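As a rough, generic illustration of what a repository-level code graph can capture (not RepoGraph's actual implementation), the sketch below maps each Python function defined in a repository to the names it calls, using only the standard library:

```python
import ast
from pathlib import Path
from collections import defaultdict

# Build a coarse call graph: function name -> set of names it calls.
# This is an illustrative approximation, not the RepoGraph module.

def build_call_graph(repo_root: str) -> dict:
    graph = defaultdict(set)
    for path in Path(repo_root).rglob("*.py"):
        tree = ast.parse(path.read_text(encoding="utf-8"), filename=str(path))
        for node in ast.walk(tree):
            if isinstance(node, ast.FunctionDef):
                for call in ast.walk(node):
                    if isinstance(call, ast.Call) and isinstance(call.func, ast.Name):
                        graph[node.name].add(call.func.id)
    return dict(graph)

# Usage: build_call_graph("path/to/repository")
```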
arXiv Detail & Related papers (2024-10-03T05:45:26Z)
- Using AI-Based Coding Assistants in Practice: State of Affairs, Perceptions, and Ways Forward [9.177785129949]
We aim to better understand how exactly developers are using AI assistants.
To this end, we carried out a large-scale survey of how AI assistants are used in practice.
arXiv Detail & Related papers (2024-06-11T23:10:43Z)
- Characterising Developer Sentiment in Software Components: An Exploratory Study of Gentoo [6.253919624802852]
Collaborative software development happens in teams that cooperate on shared artefacts and discuss development on online platforms.
Previous research has shown how communication between team members, especially in an open-source environment, can become extremely toxic.
Our study shows that, in recent years, negative emotions have generally decreased in the communication between Gentoo developers.
arXiv Detail & Related papers (2024-05-27T09:22:47Z)
- How is Software Reuse Discussed in Stack Overflow? [12.586676749644342]
We present an empirical study of 1,409 posts to better understand the challenges developers face when reusing code.
Our findings show that 'visual studio' is the most frequently occurring bigram in question posts, and that developers frequently utilize design patterns for the purpose of reuse.
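As a small illustration of the kind of bigram analysis mentioned above (the helper and toy post titles are hypothetical, not the study's pipeline):

```python
from collections import Counter

# Count word bigrams across a set of post titles; the study itself analysed
# 1,409 Stack Overflow posts, whereas these two titles are toy examples.

def top_bigrams(posts, n=5):
    counts = Counter()
    for post in posts:
        tokens = post.lower().split()
        counts.update(f"{a} {b}" for a, b in zip(tokens, tokens[1:]))
    return counts.most_common(n)

posts = [
    "How to reuse a class library in Visual Studio",
    "Visual Studio project reference for code reuse",
]
print(top_bigrams(posts))   # 'visual studio' ranks first in this toy sample
```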
arXiv Detail & Related papers (2023-11-01T03:13:36Z)
- UltraFeedback: Boosting Language Models with Scaled AI Feedback [99.4633351133207]
We present UltraFeedback, a large-scale, high-quality, and diversified AI feedback dataset.
Our work validates the effectiveness of scaled AI feedback data in constructing strong open-source chat language models.
arXiv Detail & Related papers (2023-10-02T17:40:01Z)
- Visual Programming for Text-to-Image Generation and Evaluation [73.12069620086311]
We propose two novel interpretable/explainable visual programming frameworks for text-to-image (T2I) generation and evaluation.
First, we introduce VPGen, an interpretable step-by-step T2I generation framework that decomposes T2I generation into three steps: object/count generation, layout generation, and image generation.
Second, we introduce VPEval, an interpretable and explainable evaluation framework for T2I generation based on visual programming.
arXiv Detail & Related papers (2023-05-24T16:42:17Z)
- Transformer-Based Visual Segmentation: A Survey [118.01564082499948]
Visual segmentation seeks to partition images, video frames, or point clouds into multiple segments or groups.
Transformers are a type of neural network based on self-attention originally designed for natural language processing.
Transformers offer robust, unified, and even simpler solutions for various segmentation tasks.
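For context on the mechanism these surveyed models share, a minimal single-head self-attention sketch in NumPy; the random matrices stand in for trained projection weights and nothing here is taken from any specific surveyed model:

```python
import numpy as np

# Scaled dot-product self-attention over a short sequence of embeddings
# (e.g. image patches); weights are random stand-ins, not trained parameters.

def self_attention(x: np.ndarray, w_q, w_k, w_v) -> np.ndarray:
    """x: (sequence_length, embed_dim) token or patch embeddings."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])              # scaled dot-product
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over keys
    return weights @ v

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                              # 4 patch embeddings
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)            # (4, 8)
```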
arXiv Detail & Related papers (2023-04-19T17:59:02Z)
- Reading Between the Lines: Modeling User Behavior and Costs in AI-Assisted Programming [28.254978977288868]
We studied GitHub Copilot, a code-recommendation system used by millions of programmers daily.
We developed CUPS, a taxonomy of common programmer activities when interacting with Copilot.
Our insights reveal how programmers interact with Copilot and motivate new interface designs and metrics.
arXiv Detail & Related papers (2022-10-25T20:01:15Z)
- Back to the Future: Unsupervised Backprop-based Decoding for Counterfactual and Abductive Commonsense Reasoning [79.48769764508006]
Generative language models (LMs) can be trained to condition only on the past context or to perform narrowly scoped text-infilling.
We propose DeLorean, a new unsupervised decoding algorithm that can flexibly incorporate both the past and future contexts.
We demonstrate that our approach is general and applicable to two nonmonotonic reasoning tasks: abductive text generation and counterfactual story revision.
arXiv Detail & Related papers (2020-10-12T17:58:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.