Enhancing Software Effort Estimation through Reinforcement Learning-based Project Management-Oriented Feature Selection
- URL: http://arxiv.org/abs/2403.16749v1
- Date: Mon, 25 Mar 2024 13:20:59 GMT
- Title: Enhancing Software Effort Estimation through Reinforcement Learning-based Project Management-Oriented Feature Selection
- Authors: Haoyang Chen, Botong Xu, Kaiyang Zhong
- Abstract summary: This study investigates the application of the data element market in software project management.
It proposes a solution based on feature selection, utilizing the data element market and reinforcement learning-based algorithms.
- Score: 1.382553192164386
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract:
  Purpose: The study aims to investigate the application of the data element market in software project management, focusing on improving effort estimation by addressing challenges faced by traditional methods.
  Design/methodology/approach: This study proposes a solution based on feature selection, utilizing the data element market and reinforcement learning-based algorithms to enhance the accuracy of software effort estimation. It explores the application of the MARLFS algorithm, customizing improvements to the algorithm and reward function.
  Findings: This study demonstrates that the proposed approach achieves more precise estimation compared to traditional methods, leveraging feature selection to guide project management in software development.
  Originality/value: This study contributes to the field by offering a novel approach that combines the data element market, machine learning, and feature selection to improve software effort estimation, addressing limitations of traditional methods and providing insights for future research in project management.
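The reinforcement learning-based feature selection described in the abstract can be sketched in miniature. The snippet below is an illustrative toy rather than the paper's MARLFS implementation: it uses a single greedy search agent and a synthetic reward, and all feature names and constants are hypothetical assumptions.

```python
import random

# Toy sketch of feature selection framed as a sequential decision problem.
# An agent proposes toggling one feature per step and keeps the change only
# when the reward improves. This stands in for the paper's multi-agent
# MARLFS with a customized reward; everything below is illustrative.

FEATURES = ["loc", "team_size", "complexity", "duration", "language"]
USEFUL = {"loc", "team_size", "complexity"}  # hypothetical informative features

def reward(subset):
    # Stand-in for estimation accuracy: reward keeping informative features,
    # penalize keeping noise. A real reward would score a trained estimator
    # on the candidate feature subset.
    return sum(1.0 if f in USEFUL else -0.5 for f in subset)

def select_features(episodes=500, seed=0):
    rng = random.Random(seed)
    subset = set(FEATURES)          # start with every feature selected
    best_r = reward(subset)
    for _ in range(episodes):
        candidate = set(subset)
        candidate ^= {rng.choice(FEATURES)}  # action: toggle one feature
        r = reward(candidate)
        if r > best_r:              # keep only reward-improving subsets
            subset, best_r = candidate, r
    return subset

print(sorted(select_features()))    # converges to the informative subset
```

In the paper's setting, the reward would instead measure the accuracy of an effort estimator trained on the candidate subset, and MARLFS assigns one agent per feature rather than a single search agent.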
Related papers
- EVOLvE: Evaluating and Optimizing LLMs For Exploration [76.66831821738927]
Large language models (LLMs) remain under-studied in scenarios requiring optimal decision-making under uncertainty.
We measure LLMs' (in)ability to make optimal decisions in bandits, a state-less reinforcement learning setting relevant to many applications.
Motivated by the existence of optimal exploration algorithms, we propose efficient ways to integrate this algorithmic knowledge into LLMs.
arXiv Detail & Related papers (2024-10-08T17:54:03Z)
- Leveraging Large Language Models for Predicting Cost and Duration in Software Engineering Projects [0.0]
This study introduces an innovative approach using Large Language Models (LLMs) to enhance the accuracy and usability of project cost predictions.
We explore the efficacy of LLMs against traditional methods and contemporary machine learning techniques.
This study aims to demonstrate that LLMs not only yield more accurate estimates but also offer a user-friendly alternative to complex predictive models.
arXiv Detail & Related papers (2024-09-15T05:35:52Z)
- Collaborative Knowledge Infusion for Low-resource Stance Detection [83.88515573352795]
Target-related knowledge is often needed to assist stance detection models.
We propose a collaborative knowledge infusion approach for low-resource stance detection tasks.
arXiv Detail & Related papers (2024-03-28T08:32:14Z)
- Leveraging AI for Enhanced Software Effort Estimation: A Comprehensive Study and Framework Proposal [2.8643479919807433]
The study aims to improve accuracy and reliability by overcoming the limitations of traditional methods.
The proposed AI-based framework holds the potential to enhance project planning and resource allocation.
arXiv Detail & Related papers (2024-02-08T08:25:41Z)
- The Efficiency Spectrum of Large Language Models: An Algorithmic Survey [54.19942426544731]
The rapid growth of Large Language Models (LLMs) has been a driving force in transforming various domains.
This paper examines the multi-faceted dimensions of efficiency essential for the end-to-end algorithmic development of LLMs.
arXiv Detail & Related papers (2023-12-01T16:00:25Z)
- Let's reward step by step: Step-Level reward model as the Navigators for Reasoning [64.27898739929734]
Process-Supervised Reward Model (PRM) furnishes LLMs with step-by-step feedback during the training phase.
We propose a greedy search algorithm that employs the step-level feedback from PRM to optimize the reasoning pathways explored by LLMs.
To explore the versatility of our approach, we develop a novel method to automatically generate a step-level reward dataset for coding tasks and observe similar performance improvements on code generation.
arXiv Detail & Related papers (2023-10-16T05:21:50Z)
- A Survey of Contextual Optimization Methods for Decision Making under Uncertainty [47.73071218563257]
This review article identifies three main frameworks for learning policies from data and discusses their strengths and limitations.
We present the existing models and methods under a uniform notation and terminology and classify them according to the three main frameworks.
arXiv Detail & Related papers (2023-06-17T15:21:02Z)
- Recent Advances in Software Effort Estimation using Machine Learning [0.0]
We review the most recent machine learning approaches used to estimate software development effort for both non-agile and agile methodologies.
We analyze the benefits of adopting an agile methodology in terms of effort estimation possibilities.
We conclude with an analysis of current and future trends in software effort estimation through data-driven predictive models.
arXiv Detail & Related papers (2023-03-06T20:25:16Z)
- Efficient Real-world Testing of Causal Decision Making via Bayesian Experimental Design for Contextual Optimisation [12.37745209793872]
We introduce a model-agnostic framework for gathering data to evaluate and improve contextual decision making.
Our method is used for the data-efficient evaluation of the regret of past treatment assignments.
arXiv Detail & Related papers (2022-07-12T01:20:11Z)
- A Field Guide to Federated Optimization [161.3779046812383]
Federated learning and analytics are a distributed approach for collaboratively learning models (or statistics) from decentralized data.
This paper provides recommendations and guidelines on formulating, designing, evaluating and analyzing federated optimization algorithms.
arXiv Detail & Related papers (2021-07-14T18:09:08Z)
- A Novel RL-assisted Deep Learning Framework for Task-informative Signals Selection and Classification for Spontaneous BCIs [2.299749220980997]
We formulate the problem of estimating and selecting task-relevant temporal signal segments from a single EEG trial.
We propose a novel reinforcement-learning mechanism that can be combined with the existing deep-learning based BCI methods.
arXiv Detail & Related papers (2020-07-01T00:35:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.