Ensemble Regression Models for Software Development Effort Estimation: A
Comparative Study
- URL: http://arxiv.org/abs/2007.01719v1
- Date: Fri, 3 Jul 2020 14:40:41 GMT
- Title: Ensemble Regression Models for Software Development Effort Estimation: A
Comparative Study
- Authors: Halcyon D. P. Carvalho, Marília N. C. A. Lima, Wylliams B. Santos
and Roberta A. de A. Fagundes
- Abstract summary: This study determines which technique has better effort prediction accuracy and proposes combined techniques that could provide better estimates.
The results indicate that the proposed ensemble models not only deliver high efficiency compared to their counterparts but also produce the best estimates for software project effort.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As demand for computer software continually increases, software scope and
complexity become higher than ever. The software industry is in real need of
accurate estimates for projects under development. Software development
effort estimation is one of the main processes in software project management.
However, overestimation and underestimation may cause losses for the software
industry. This study determines which technique has better effort prediction
accuracy and proposes combined techniques that could provide better estimates.
Eight different ensemble models for effort estimation were compared with each
other based on predictive accuracy under the Mean Absolute Residual (MAR)
criterion and statistical tests. The results indicate that the proposed
ensemble models not only deliver high efficiency compared to their
counterparts but also produce the best estimates for software project effort.
Therefore, the proposed ensemble models in this study will help project
managers develop quality software.
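For intuition, here is a minimal sketch of the kind of comparison the abstract describes: fit several ensemble regressors, rank them by MAR (the mean of |actual - predicted| over held-out projects), and compare two of them with a paired statistical test. The scikit-learn models, synthetic data, and Wilcoxon test below are illustrative assumptions; the paper's eight ensembles, datasets, and exact tests are not reproduced in this summary.

```python
import numpy as np
from scipy.stats import wilcoxon
from sklearn.datasets import make_regression
from sklearn.ensemble import (AdaBoostRegressor, BaggingRegressor,
                              GradientBoostingRegressor, RandomForestRegressor)
from sklearn.model_selection import train_test_split

# Hypothetical stand-in data: the paper's effort datasets are not listed here.
X, y = make_regression(n_samples=300, n_features=8, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A representative subset of ensemble regressors (the paper compares eight).
models = {
    "Bagging": BaggingRegressor(random_state=0),
    "RandomForest": RandomForestRegressor(random_state=0),
    "GradBoost": GradientBoostingRegressor(random_state=0),
    "AdaBoost": AdaBoostRegressor(random_state=0),
}

abs_residuals = {}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    abs_residuals[name] = np.abs(y_te - pred)
    # MAR = (1/n) * sum(|actual effort - predicted effort|)
    print(f"{name}: MAR = {abs_residuals[name].mean():.2f}")

# One common choice of test (the abstract does not name the paper's tests):
# Wilcoxon signed-rank on the paired absolute residuals of two models.
stat, p = wilcoxon(abs_residuals["RandomForest"], abs_residuals["GradBoost"])
print(f"Wilcoxon RandomForest vs GradBoost: p = {p:.3f}")
```

A lower MAR means more accurate effort predictions on average; the paired test then indicates whether the observed difference between two models is statistically significant rather than noise.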
Related papers
- A Collaborative Ensemble Framework for CTR Prediction [73.59868761656317]
We propose a novel framework, Collaborative Ensemble Training Network (CETNet), to leverage multiple distinct models.
Unlike naive model scaling, our approach emphasizes diversity and collaboration through collaborative learning.
We validate our framework on three public datasets and a large-scale industrial dataset from Meta.
arXiv Detail & Related papers (2024-11-20T20:38:56Z) - Agent-Driven Automatic Software Improvement [55.2480439325792]
This research proposal aims to explore innovative solutions by focusing on the deployment of agents powered by Large Language Models (LLMs).
The iterative nature of agents, which allows for continuous learning and adaptation, can help surpass common challenges in code generation.
We aim to use the iterative feedback in these systems to further fine-tune the LLMs underlying the agents, becoming better aligned to the task of automated software improvement.
arXiv Detail & Related papers (2024-06-24T15:45:22Z) - SOEN-101: Code Generation by Emulating Software Process Models Using Large Language Model Agents [50.82665351100067]
FlowGen is a code generation framework that emulates software process models based on multiple Large Language Model (LLM) agents.
We evaluate FlowGenScrum on four benchmarks: HumanEval, HumanEval-ET, MBPP, and MBPP-ET.
arXiv Detail & Related papers (2024-03-23T14:04:48Z) - Leveraging AI for Enhanced Software Effort Estimation: A Comprehensive
Study and Framework Proposal [2.8643479919807433]
The study aims to improve accuracy and reliability by overcoming the limitations of traditional methods.
The proposed AI-based framework holds the potential to enhance project planning and resource allocation.
arXiv Detail & Related papers (2024-02-08T08:25:41Z) - QualEval: Qualitative Evaluation for Model Improvement [82.73561470966658]
We propose QualEval, which augments quantitative scalar metrics with automated qualitative evaluation as a vehicle for model improvement.
QualEval uses a powerful LLM reasoner and our novel flexible linear programming solver to generate human-readable insights.
We demonstrate that leveraging its insights improves, for example, the absolute performance of the Llama 2 model by up to 15 percentage points.
arXiv Detail & Related papers (2023-11-06T00:21:44Z) - Recent Advances in Software Effort Estimation using Machine Learning [0.0]
We review the most recent machine learning approaches used to estimate software development effort for both non-agile and agile methodologies.
We analyze the benefits of adopting an agile methodology in terms of effort estimation possibilities.
We conclude with an analysis of current and future trends, regarding software effort estimation through data-driven predictive models.
arXiv Detail & Related papers (2023-03-06T20:25:16Z) - Design Amortization for Bayesian Optimal Experimental Design [70.13948372218849]
We build off of successful variational approaches, which optimize a parameterized variational model with respect to bounds on the expected information gain (EIG).
We present a novel neural architecture that allows experimenters to optimize a single variational model that can estimate the EIG for potentially infinitely many designs.
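For reference, the EIG that such bounds target is standardly the mutual information between model parameters $\theta$ and outcome $y$ under a design $d$ (a textbook definition, not quoted from this paper):

$$\mathrm{EIG}(d) = \mathbb{E}_{p(\theta)\,p(y \mid \theta, d)}\!\left[\log \frac{p(y \mid \theta, d)}{p(y \mid d)}\right]$$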
arXiv Detail & Related papers (2022-10-07T02:12:34Z) - Software Effort Estimation using parameter tuned Models [1.9336815376402716]
Imprecise estimation is a major reason for project failure.
The greatest pitfall of the software industry is the fast-changing nature of software development.
We need the development of useful models that accurately predict the cost of developing a software product.
arXiv Detail & Related papers (2020-08-25T15:18:59Z) - Quantitatively Assessing the Benefits of Model-driven Development in
Agent-based Modeling and Simulation [80.49040344355431]
This paper compares the use of MDD and ABMS platforms in terms of effort and developer mistakes.
The obtained results show that MDD4ABMS requires less effort to develop simulations with similar (sometimes better) design quality than NetLogo.
arXiv Detail & Related papers (2020-06-15T23:29:04Z) - Software Defect Prediction Based On Deep Learning Models: Performance
Study [0.5735035463793008]
Two deep learning models, Stacked Sparse Auto-Encoder (SSAE) and Deep Belief Network (DBN), are deployed to classify NASA datasets.
According to the conducted experiment, the accuracy for the datasets with sufficient samples is enhanced.
arXiv Detail & Related papers (2020-04-02T06:02:14Z) - Software Effort Estimation using Neuro Fuzzy Inference System: Past and
Present [1.7767466724342065]
Inaccurate software estimation may lead to delay in project, over-budget or cancellation of the project.
In this paper, we analyze a newer approach to estimation, i.e., the Neuro Fuzzy Inference System (NFIS).
arXiv Detail & Related papers (2019-12-26T12:55:38Z)