Deep Learning for Agile Effort Estimation: Have We Solved the Problem Yet?
- URL: http://arxiv.org/abs/2201.05401v1
- Date: Fri, 14 Jan 2022 11:38:51 GMT
- Title: Deep Learning for Agile Effort Estimation: Have We Solved the Problem Yet?
- Authors: Vali Tawosi, Rebecca Moussa, Federica Sarro
- Abstract summary: We perform a close replication and extension of a seminal work proposing the use of Deep Learning for agile effort estimation.
We benchmark Deep-SE against three baseline techniques and a previously proposed method to estimate agile software project development effort.
Using more data allows us to strengthen our confidence in the results and further mitigate the threat to the external validity of the study.
- Score: 7.808390209137859
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the last decade, several studies have proposed the use of automated
techniques to estimate the effort of agile software development. In this paper
we perform a close replication and extension of a seminal work proposing the
use of Deep Learning for agile effort estimation (namely Deep-SE), which has
since set the state of the art. Specifically, we replicate three of the
original research questions aiming at investigating the effectiveness of
Deep-SE for both within-project and cross-project effort estimation. We
benchmark Deep-SE against three baseline techniques (i.e., Random, Mean and
Median effort prediction) and a previously proposed method to estimate agile
software project development effort (dubbed TF/IDF-SE), as done in the original
study. To this end, we use both the data from the original study and a new
larger dataset of 31,960 issues, which we mined from 29 open-source projects.
Using more data allows us to strengthen our confidence in the results and
further mitigate the threat to the external validity of the study. We also
extend the original study by investigating two additional research questions.
One evaluates the accuracy of Deep-SE when the training set is augmented with
issues from all other projects available in the repository at the time of
estimation, and the other examines whether an expensive pre-training step used
by the original Deep-SE has any beneficial effect on its accuracy and
convergence speed. The results of our replication show that Deep-SE outperforms
the Median baseline estimator and TF/IDF-SE in only very few cases with
statistical significance (8/42 and 9/32 cases, respectively), thus confounding
previous findings on the efficacy of Deep-SE. The two additional RQs revealed
that neither augmenting the training set nor pre-training Deep-SE plays a role
in improving its accuracy and convergence speed. ...
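To make the benchmark concrete, the sketch below shows how the three baseline estimators and a per-case significance check can be computed. This is our illustration only, not the paper's replication package: the story-point values, function names, and the use of a Wilcoxon signed-rank test are assumptions for demonstration.

```python
# Illustrative sketch only; values below are made up for demonstration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def random_baseline(train_sp, n_test):
    # Random guessing: sample observed training story points per test issue.
    return rng.choice(train_sp, size=n_test)

def mean_baseline(train_sp, n_test):
    # Predict the mean training story points for every test issue.
    return np.full(n_test, train_sp.mean())

def median_baseline(train_sp, n_test):
    # Predict the median training story points for every test issue.
    return np.full(n_test, np.median(train_sp))

# Toy within-project split (assumed Fibonacci-like story points).
train_sp = np.array([1, 2, 3, 3, 5, 8, 8, 13], dtype=float)
test_sp = np.array([2, 3, 5, 8, 5, 3], dtype=float)
model_pred = np.array([3, 3, 4, 6, 5, 2], dtype=float)  # stand-in for Deep-SE

ae_model = np.abs(test_sp - model_pred)
ae_median = np.abs(test_sp - median_baseline(train_sp, len(test_sp)))

print("MAE model :", ae_model.mean())
print("MAE median:", ae_median.mean())

# Paired non-parametric test over per-issue absolute errors; a "win" only
# counts as a case when such a test reports statistical significance.
print(stats.wilcoxon(ae_model, ae_median))
```

A "case" here is one project/validation comparison; the 8/42 and 9/32 counts above refer to how often Deep-SE wins such comparisons with statistical significance.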
Related papers
- Test-time Offline Reinforcement Learning on Goal-related Experience [50.94457794664909] (2025-07-24)
Research in foundation models has shown that performance can be substantially improved through test-time training. We propose a novel self-supervised data selection criterion, which selects transitions from an offline dataset according to their relevance to the current state. Our goal-conditioned test-time training (GC-TTT) algorithm applies this routine in a receding-horizon fashion during evaluation, adapting the policy to the current trajectory as it is being rolled out.
- Probing Deep into Temporal Profile Makes the Infrared Small Target Detector Much Better [63.567886330598945] (2025-06-15)
Infrared small target (IRST) detection is challenging in simultaneously achieving precise, universal, robust and efficient performance. Current learning-based methods attempt to leverage "more" information from both the spatial and the short-term temporal domains. We propose an efficient deep temporal probe network (DeepPro) that only performs calculations in the time dimension for IRST detection.
- A Comprehensive Survey on Evidential Deep Learning and Its Applications [64.83473301188138] (2024-09-07)
Evidential Deep Learning (EDL) provides reliable uncertainty estimation with minimal additional computation in a single forward pass.
We first delve into the theoretical foundation of EDL, the subjective logic theory, and discuss its distinctions from other uncertainty estimation frameworks.
We elaborate on its extensive applications across various machine learning paradigms and downstream tasks.
- Uncertainty of Thoughts: Uncertainty-Aware Planning Enhances Information Seeking in Large Language Models [73.79091519226026] (2024-02-05)
Uncertainty of Thoughts (UoT) is an algorithm to augment large language models with the ability to actively seek information by asking effective questions.
In experiments on medical diagnosis, troubleshooting, and the 20 Questions game, UoT achieves an average performance improvement of 38.1% in the rate of successful task completion.
- Zero-shot Retrieval: Augmenting Pre-trained Models with Search Engines [83.65380507372483] (2023-11-29)
Large pre-trained models can dramatically reduce the amount of task-specific data required to solve a problem, but they often fail to capture domain-specific nuances out of the box.
This paper shows how to leverage recent advances in NLP and multi-modal learning to augment a pre-trained model with search engine retrieval.
- Measuring Improvement of F$_1$-Scores in Detection of Self-Admitted Technical Debt [5.750379648650073] (2023-03-16)
We improve SATD detection with a novel approach that leverages the Bidirectional Encoder Representations from Transformers (BERT) architecture.
We find that our trained BERT model improves over the best performance of all previous methods in 19 of the 20 projects in cross-project scenarios.
Future research will look into ways to diversify SATD datasets in order to maximize the latent power in large BERT models.
- Uncertainty Estimation by Fisher Information-based Evidential Deep Learning [61.94125052118442] (2023-03-03)
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce Fisher Information Matrix (FIM) to measure the informativeness of evidence carried by each sample, according to which we can dynamically reweight the objective loss terms to make the network more focused on the representation learning of uncertain classes.
- Toward Edge-Efficient Dense Predictions with Synergistic Multi-Task Neural Architecture Search [22.62389136288258] (2022-10-04)
We propose a novel and scalable solution to address the challenges of developing efficient dense predictions on edge platforms.
Our first key insight is that Multi-Task Learning (MTL) and hardware-aware Neural Architecture Search (NAS) can work in synergy to greatly benefit on-device Dense Predictions (DP).
We propose JAReD, an improved, easy-to-adopt Joint Absolute-Relative Depth loss, that reduces up to 88% of the undesired noise while simultaneously boosting accuracy.
- Simple Techniques Work Surprisingly Well for Neural Network Test Prioritization and Active Learning (Replicability Study) [4.987581730476023] (2022-05-02)
Test Input Prioritizers (TIP) for Deep Neural Networks (DNN) are an important technique to handle the typically very large test datasets efficiently.
Feng et al. propose DeepGini, a very fast and simple TIP, and show that it outperforms more elaborate techniques such as neuron- and surprise coverage (a minimal sketch of DeepGini's score follows this list).
- Geometry Uncertainty Projection Network for Monocular 3D Object Detection [138.24798140338095] (2021-07-29)
We propose a Geometry Uncertainty Projection Network (GUP Net) to tackle the error amplification problem at both inference and training stages.
Specifically, a GUP module is proposed to obtain the geometry-guided uncertainty of the inferred depth.
At the training stage, we propose a Hierarchical Task Learning strategy to reduce the instability caused by error amplification.
- Fast Uncertainty Quantification for Deep Object Pose Estimation [91.09217713805337] (2020-11-16)
Deep learning-based object pose estimators are often unreliable and overconfident.
In this work, we propose a simple, efficient, and plug-and-play UQ method for 6-DoF object pose estimation.
- Confidence-Aware Learning for Deep Neural Networks [4.9812879456945] (2020-07-03)
We propose a method of training deep neural networks with a novel loss function, named Correctness Ranking Loss.
It regularizes class probabilities explicitly to be better confidence estimates in terms of ordinal ranking according to confidence.
It has almost the same computational costs for training as conventional deep classifiers and outputs reliable predictions by a single inference.
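As an aside on the replicability study above: DeepGini is called fast and simple because its priority score needs only the model's softmax output, with no extra instrumentation. A minimal sketch of that score, based on our reading of Feng et al. (variable names are ours):

```python
import numpy as np

def deepgini_scores(softmax_probs):
    # DeepGini impurity: score(x) = 1 - sum_i p_i(x)^2.
    # Near-uniform outputs score high and are tested first.
    return 1.0 - np.sum(softmax_probs ** 2, axis=1)

probs = np.array([
    [0.98, 0.01, 0.01],  # confident prediction -> low test priority
    [0.40, 0.35, 0.25],  # near-uniform output  -> high test priority
])
priority_order = np.argsort(-deepgini_scores(probs))
print(priority_order)  # -> [1 0]
```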
This list is automatically generated from the titles and abstracts of the papers on this site.