Agile Requirement Change Management Model for Global Software
Development
- URL: http://arxiv.org/abs/2402.14595v1
- Date: Thu, 22 Feb 2024 14:46:37 GMT
- Title: Agile Requirement Change Management Model for Global Software
Development
- Authors: Neha Koulecar and Bachan Ghimire
- Abstract summary: We propose a novel, comprehensive and robust agile requirements change management (ARCM) model that addresses the limitations of existing models.
Our study evaluated the effectiveness of the proposed RCM model in a real-world setting and identified limitations and areas for improvement.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel, comprehensive and robust agile requirements change
management (ARCM) model that addresses the limitations of existing models and
is tailored for agile software development in the global software development
paradigm. To achieve this goal, we conducted an exhaustive literature review
and an empirical study with RCM industry experts. Our study evaluated the
effectiveness of the proposed RCM model in a real-world setting and identified
any limitations or areas for improvement. The results of our study provide
valuable insights into how the proposed RCM model can be applied in agile
global software development environments to improve software development
practices and optimize project success rates.
Related papers
- Large Language Models for Code Generation: The Practitioners Perspective [4.946128083535776]
Large Language Models (LLMs) have emerged as coding assistants, capable of generating source code from natural language prompts.
We propose and develop a multi-model unified platform to generate and execute code based on natural language prompts.
We conducted a survey with 60 software practitioners from 11 countries across four continents to evaluate the usability, performance, strengths, and limitations of each model.
arXiv Detail & Related papers (2025-01-28T14:52:16Z)
- The Role of DevOps in Enhancing Enterprise Software Delivery Success through R&D Efficiency and Source Code Management [0.4532517021515834]
This study focuses on enhancing R&D efficiency and source code management (SCM) for software delivery success.
Using a qualitative methodology, data were collected from case studies of large-scale enterprises implementing DevOps.
arXiv Detail & Related papers (2024-11-04T16:01:43Z)
- On the Utility of Domain Modeling Assistance with Large Language Models [2.874893537471256]
This paper presents a study to evaluate the usefulness of a novel approach utilizing large language models (LLMs) and few-shot prompt learning to assist in domain modeling.
The aim of this approach is to overcome the need for extensive training of AI-based completion models on scarce domain-specific datasets.
arXiv Detail & Related papers (2024-10-16T13:55:34Z)
- Agent-Driven Automatic Software Improvement [55.2480439325792]
This research proposal aims to explore innovative solutions by focusing on the deployment of agents powered by Large Language Models (LLMs).
The iterative nature of agents, which allows for continuous learning and adaptation, can help surpass common challenges in code generation.
We aim to use the iterative feedback in these systems to further fine-tune the LLMs underlying the agents, becoming better aligned to the task of automated software improvement.
arXiv Detail & Related papers (2024-06-24T15:45:22Z)
- Prompting Large Language Models to Tackle the Full Software Development Lifecycle: A Case Study [72.24266814625685]
We explore the performance of large language models (LLMs) across the entire software development lifecycle with DevEval.
DevEval features four programming languages, multiple domains, high-quality data collection, and carefully designed and verified metrics for each task.
Empirical studies show that current LLMs, including GPT-4, fail to solve the challenges presented within DevEval.
arXiv Detail & Related papers (2024-03-13T15:13:44Z)
- Let's reward step by step: Step-Level reward model as the Navigators for Reasoning [64.27898739929734]
Process-Supervised Reward Model (PRM) furnishes LLMs with step-by-step feedback during the training phase.
We propose a greedy search algorithm that employs the step-level feedback from PRM to optimize the reasoning pathways explored by LLMs.
To explore the versatility of our approach, we develop a novel method to automatically generate a step-level reward dataset for coding tasks, and we observe similar performance improvements in code generation.
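The greedy step-level search described above can be sketched as follows. The candidate generator and the PRM scorer here are hypothetical toy stand-ins (the paper's actual versions are an LLM and a trained reward model); only the greedy selection loop reflects the described algorithm.

```python
# Greedy step-level search guided by a process-supervised reward model (PRM).
# `propose_steps` and `prm_score` are toy placeholders for the paper's
# LLM-based step generator and trained PRM.

def propose_steps(partial_solution):
    # Toy generator: extend the partial solution with one of three candidate steps.
    return [partial_solution + [s] for s in ("a", "b", "c")]

def prm_score(partial_solution):
    # Toy scorer: prefer solutions containing more "a" steps.
    return partial_solution.count("a")

def greedy_prm_search(num_steps):
    """At each step, keep the candidate continuation the PRM rates highest."""
    solution = []
    for _ in range(num_steps):
        candidates = propose_steps(solution)
        solution = max(candidates, key=prm_score)
    return solution

print(greedy_prm_search(3))  # → ['a', 'a', 'a']
```

Because selection is greedy, the reasoning path is built one locally best step at a time rather than by scoring complete solutions at the end.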
arXiv Detail & Related papers (2023-10-16T05:21:50Z)
- Xcrum: A Synergistic Approach Integrating Extreme Programming with Scrum [0.0]
This article aims to provide an overview of two prominent Agile methodologies: Scrum and Extreme Programming (XP).
The integration of XP practices into Scrum has given rise to a novel hybrid methodology known as "Xcrum".
Because this new approach incorporates the strengths of both methods, it holds the potential to outperform the original frameworks.
arXiv Detail & Related papers (2023-10-05T01:39:10Z)
- When Demonstrations Meet Generative World Models: A Maximum Likelihood Framework for Offline Inverse Reinforcement Learning [62.00672284480755]
This paper aims to recover the structure of rewards and environment dynamics that underlie observed actions in a fixed, finite set of demonstrations from an expert agent.
Accurate models of expertise in executing a task have applications in safety-sensitive domains such as clinical decision making and autonomous driving.
arXiv Detail & Related papers (2023-02-15T04:14:20Z)
- Model Reprogramming: Resource-Efficient Cross-Domain Machine Learning [65.268245109828]
In data-rich domains such as vision, language, and speech, deep learning prevails to deliver high-performance task-specific models.
Deep learning in resource-limited domains still faces multiple challenges including (i) limited data, (ii) constrained model development cost, and (iii) lack of adequate pre-trained models for effective finetuning.
Model reprogramming enables resource-efficient cross-domain machine learning by repurposing a well-developed pre-trained model from a source domain to solve tasks in a target domain without model finetuning.
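A minimal sketch of the reprogramming idea: a frozen source-domain model is reused on a target task by learning an additive input perturbation and mapping source labels to target labels. The "pretrained model" below is a random toy classifier, and the perturbation is left untrained; this illustrates only the mechanism, not a working training recipe.

```python
import numpy as np

# Model reprogramming sketch: the source model's weights W stay frozen;
# only the input perturbation `delta` would be trained on target data.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))  # frozen source-domain classifier (3 source classes)

def pretrained_model(x):
    # Frozen source model: returns logits over the 3 source classes.
    return x @ W

delta = np.zeros(4)              # trainable input perturbation (the "program")
label_map = {0: 0, 1: 0, 2: 1}   # many-to-one mapping: source class -> target class

def reprogrammed_predict(x):
    logits = pretrained_model(x + delta)  # perturb the input; model stays frozen
    return label_map[int(np.argmax(logits))]

x_target = rng.normal(size=4)
print(reprogrammed_predict(x_target))  # a target-domain prediction in {0, 1}
```

The key resource saving is that no weight inside `pretrained_model` is updated; only the small perturbation and the output label mapping adapt the model to the target domain.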
arXiv Detail & Related papers (2022-02-22T02:33:54Z)
- Ensemble Regression Models for Software Development Effort Estimation: A Comparative Study [0.0]
This study determines which technique has better effort prediction accuracy and proposes combined techniques that could provide better estimates.
The results indicate that the proposed ensemble models deliver higher efficiency than their counterparts and produce the best estimates for software project effort.
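The ensemble idea can be sketched as averaging the predictions of several base regressors. The base models and coefficients below are illustrative placeholders (loosely COCOMO-style), not the study's actual models or weights.

```python
# Ensemble effort estimation sketch: average several base-model predictions.
# Both base models and their coefficients are hypothetical placeholders.

def model_linear(size_kloc):
    # Toy model: effort proportional to size (person-months per KLOC).
    return 2.9 * size_kloc

def model_power(size_kloc):
    # Toy model: effort grows slightly super-linearly with size.
    return 2.4 * size_kloc ** 1.05

def ensemble_estimate(size_kloc, models):
    """Unweighted average of base-model effort estimates (person-months)."""
    preds = [m(size_kloc) for m in models]
    return sum(preds) / len(preds)

models = [model_linear, model_power]
print(round(ensemble_estimate(10.0, models), 2))
```

A weighted average (or stacking with a meta-regressor) is a common refinement when the base models' individual accuracies differ.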
arXiv Detail & Related papers (2020-07-03T14:40:41Z)
- Quantitatively Assessing the Benefits of Model-driven Development in Agent-based Modeling and Simulation [80.49040344355431]
This paper compares the use of MDD and ABMS platforms in terms of effort and developer mistakes.
The obtained results show that MDD4ABMS requires less effort to develop simulations with similar (sometimes better) design quality than NetLogo.
arXiv Detail & Related papers (2020-06-15T23:29:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.