Effect of Requirements Analyst Experience on Elicitation Effectiveness: A Family of Empirical Studies
- URL: http://arxiv.org/abs/2408.12538v1
- Date: Thu, 22 Aug 2024 16:48:04 GMT
- Title: Effect of Requirements Analyst Experience on Elicitation Effectiveness: A Family of Empirical Studies
- Authors: Alejandrina M. Aranda, Oscar Dieste, Jose I. Panach, Natalia Juristo
- Abstract summary: The purpose of this study was to determine whether experience influences requirements analyst performance.
In unfamiliar domains, neither interview, requirements, development, nor professional experience influences analyst effectiveness.
In familiar domains, interview experience has a strong positive effect, whereas professional experience has a moderate negative effect.
- Score: 40.186975773919706
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Context. Nowadays there is a great deal of uncertainty surrounding the effects of experience on Requirements Engineering (RE). There is a widespread idea that experience improves analyst performance. However, there are empirical studies that demonstrate the exact opposite. Aim. Determine whether experience influences requirements analyst performance. Method. Quasi-experiments run with students and professionals. The experimental task was to elicit requirements using the open interview technique, immediately followed by the consolidation of the elicited information, in domains with which the analysts were and were not familiar. Results. In unfamiliar domains, interview, requirements, development, and professional experience do not influence analyst effectiveness. In familiar domains, effectiveness varies depending on the type of experience. Interview experience has a strong positive effect, whereas professional experience has a moderate negative effect. Requirements experience appears to have a moderately positive effect; however, the statistical power of the analysis is insufficient to confirm this point. Development experience has no effect either way. Conclusion. Experience affects analyst effectiveness differently depending on the problem domain type (familiar, unfamiliar). Generally, experience does not account for all the observed variability, which means there are other influential factors.
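The paper reports quasi-experimental analyses rather than code, but the question it asks (does each type of experience predict elicitation effectiveness?) maps naturally onto a multiple regression. Below is a minimal sketch on entirely synthetic data; the sample size, variable names, effect sizes, and model form are illustrative assumptions, not the authors' actual analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60  # hypothetical number of analysts

# Synthetic predictors: years of each type of experience (all invented).
interview_exp = rng.uniform(0, 10, n)
requirements_exp = rng.uniform(0, 10, n)
development_exp = rng.uniform(0, 15, n)
professional_exp = rng.uniform(0, 20, n)

# Synthetic effectiveness scores, with signs loosely mirroring the paper's
# familiar-domain findings: strong positive for interview experience,
# moderate negative for professional experience, none for development.
effectiveness = (
    50
    + 3.0 * interview_exp
    + 1.0 * requirements_exp
    + 0.0 * development_exp
    - 0.8 * professional_exp
    + rng.normal(0, 8, n)
)

# Ordinary least squares fit (design matrix with an intercept column).
X = np.column_stack([
    np.ones(n), interview_exp, requirements_exp, development_exp, professional_exp
])
coef, *_ = np.linalg.lstsq(X, effectiveness, rcond=None)

names = ["intercept", "interview", "requirements", "development", "professional"]
for name, b in zip(names, coef):
    print(f"{name:>13}: {b:+.2f}")
```

With real data, one would also check statistical power, which the abstract flags as the limiting factor for confirming the requirements-experience effect.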
Related papers
- Impact of Usability Mechanisms: A Family of Experiments on Efficiency, Effectiveness and User Satisfaction [0.5419296578793327]
We use a family of three experiments to increase the precision and generalization of the results in the baseline experiment.
We find that the Abort Operation and Preferences usability mechanisms appear to substantially improve system usability with respect to efficiency, effectiveness, and user satisfaction.
arXiv Detail & Related papers (2024-08-22T21:23:18Z) - Contexts Matter: An Empirical Study on Contextual Influence in Fairness Testing for Deep Learning Systems [3.077531983369872]
We aim to understand how varying contexts affect fairness testing outcomes.
Our results show that different context types and settings generally have a significant impact on testing outcomes.
arXiv Detail & Related papers (2024-08-12T12:36:06Z) - Defining Expertise: Applications to Treatment Effect Estimation [58.7977683502207]
We argue that expertise - particularly the type of expertise the decision-makers of a domain are likely to have - can be informative in designing and selecting methods for treatment effect estimation.
We define two types of expertise, predictive and prognostic, and demonstrate empirically that: (i) the prominent type of expertise in a domain significantly influences the performance of different methods in treatment effect estimation, and (ii) it is possible to predict the type of expertise present in a dataset.
arXiv Detail & Related papers (2024-03-01T17:30:49Z) - Hazards in Deep Learning Testing: Prevalence, Impact and Recommendations [17.824339932321788]
We identify 10 commonly adopted empirical evaluation hazards that may significantly impact experimental results.
Our findings indicate that all 10 hazards have the potential to invalidate experimental findings.
We propose a set of 10 good empirical practices that can mitigate the impact of these hazards.
arXiv Detail & Related papers (2023-09-11T11:05:34Z) - A Pretrainer's Guide to Training Data: Measuring the Effects of Data Age, Domain Coverage, Quality, & Toxicity [84.6421260559093]
This study is the largest set of experiments to validate, quantify, and expose undocumented intuitions about text pretraining.
Our findings indicate there does not exist a one-size-fits-all solution to filtering training data.
arXiv Detail & Related papers (2023-05-22T15:57:53Z) - Which Experiences Are Influential for Your Agent? Policy Iteration with Turn-over Dropout [15.856188608650228]
We present PI+ToD, a policy iteration method that efficiently estimates the influence of experiences by utilizing turn-over dropout.
We demonstrate the efficiency of PI+ToD with experiments in MuJoCo environments.
arXiv Detail & Related papers (2023-01-26T15:13:04Z) - Fair Effect Attribution in Parallel Online Experiments [57.13281584606437]
A/B tests are used to reliably identify the effect of changes introduced in online services.
It is common for online platforms to run a large number of simultaneous experiments by splitting incoming user traffic randomly.
Despite a perfect randomization between different groups, simultaneous experiments can interact with each other and create a negative impact on average population outcomes.
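The summary stops short of showing how such interactions distort per-experiment estimates, but the mechanism is easy to demonstrate. The sketch below simulates two simultaneous experiments under an invented outcome model with a negative interaction term; the effect sizes and the outcome model are assumptions for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000  # simulated users

# Independent random assignment to two simultaneous experiments.
a = rng.integers(0, 2, n)  # treatment indicator, experiment A
b = rng.integers(0, 2, n)  # treatment indicator, experiment B

# Hypothetical outcome model: each change helps alone (+0.10),
# but the two changes clash when combined (-0.30 interaction).
outcome = 1.0 + 0.10 * a + 0.10 * b - 0.30 * (a * b) + rng.normal(0, 0.5, n)

# Naive per-experiment estimates ignore the other running experiment.
ate_a = outcome[a == 1].mean() - outcome[a == 0].mean()
ate_b = outcome[b == 1].mean() - outcome[b == 0].mean()
print(f"naive ATE(A) ~ {ate_a:+.3f}")  # about -0.05: A looks harmful
print(f"naive ATE(B) ~ {ate_b:+.3f}")  # about -0.05: B looks harmful

# Shipping both changes moves the population outcome by
# 0.10 + 0.10 - 0.30 = -0.10, even though each helps in isolation.
```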
arXiv Detail & Related papers (2022-10-15T17:15:51Z) - What's the Harm? Sharp Bounds on the Fraction Negatively Affected by Treatment [58.442274475425144]
We develop a robust inference algorithm that is efficient almost regardless of how and how fast these functions are learned.
We demonstrate our method in simulation studies and in a case study of career counseling for the unemployed.
arXiv Detail & Related papers (2022-05-20T17:36:33Z) - Black Magic in Deep Learning: How Human Skill Impacts Network Training [24.802914836352738]
We present an initial study based on 31 participants with different levels of experience.
The results show a strong positive correlation between participants' experience and final performance.
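As a small illustration of the analysis such a study implies, the sketch below computes the correlation between experience and final performance on invented data; the 31-participant count matches the summary above, but the values and the linear data-generating model are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data for 31 participants: years of deep-learning experience
# and final test accuracy after hand-tuning the same network (both invented).
experience = rng.uniform(0, 8, 31)
accuracy = 0.60 + 0.03 * experience + rng.normal(0, 0.05, 31)

# Pearson correlation between experience and final performance.
r = np.corrcoef(experience, accuracy)[0, 1]
print(f"Pearson r ~ {r:.2f}")  # positive r would mirror the reported finding
```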
arXiv Detail & Related papers (2020-08-13T15:56:14Z) - Learning "What-if" Explanations for Sequential Decision-Making [92.8311073739295]
Building interpretable parameterizations of real-world decision-making on the basis of demonstrated behavior is essential.
We propose learning explanations of expert decisions by modeling their reward function in terms of preferences with respect to "what if" outcomes.
We highlight the effectiveness of our batch, counterfactual inverse reinforcement learning approach in recovering accurate and interpretable descriptions of behavior.
arXiv Detail & Related papers (2020-07-02T14:24:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.