A Laboratory Experiment on Using Different Financial-Incentivization Schemes in Software-Engineering Experimentation
- URL: http://arxiv.org/abs/2202.10985v9
- Date: Mon, 16 Sep 2024 08:03:37 GMT
- Title: A Laboratory Experiment on Using Different Financial-Incentivization Schemes in Software-Engineering Experimentation
- Authors: Dmitri Bershadskyy, Jacob Krüger, Gül Çalıklı, Siegmar Otto, Sarah Zabel, Jannik Greif, Robert Heyer,
- Abstract summary: We study how different financial incentivization schemes impact developers.
Our findings indicate that the different schemes can impact participants' performance in software-engineering experiments.
- Score: 1.7291678002736095
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In software-engineering research, many empirical studies are conducted with open-source or industry developers. However, in contrast to other research communities like economics or psychology, only a few experiments use financial incentives (i.e., paying money) as a strategy to motivate participants' behavior and reward their performance. The most recent version of the SIGSOFT Empirical Standards mentions payouts only for increasing participation in surveys, but not for mimicking real-world motivations and behavior in experiments. Within this article, we report a controlled experiment in which we tackled this gap by studying how different financial incentivization schemes impact developers. For this purpose, we first conducted a survey on financial incentives used in the real world, based on which we designed three incentivization schemes: (1) a performance-dependent scheme that employees prefer, (2) a scheme that is performance-independent, and (3) a scheme that mimics open-source development. Then, using a between-subject experimental design, we explored how these three schemes impact participants' performance. Our findings indicate that the different schemes can impact participants' performance in software-engineering experiments. Due to the small sample sizes, our results are not statistically significant, but we can still observe clear tendencies. Our contributions help to understand the impact of financial incentives on participants in experiments as well as in real-world scenarios, guiding researchers in designing experiments and organizations in compensating developers.
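To make the three schemes concrete, below is a minimal, purely illustrative Python sketch of how such payout rules could be computed in a between-subject setup. The function names, rates, fees, and in particular the donation-based reading of the open-source scheme are assumptions for illustration, not the payout rules used in the experiment.

```python
# Illustrative sketch of three payout schemes analogous to those named in the
# abstract. All rates, fees, and the donation-based interpretation of the
# open-source scheme are assumptions, NOT the rules of the actual experiment.

def payout_performance_dependent(tasks_solved: int,
                                 base_fee: float = 5.0,
                                 rate_per_task: float = 2.0) -> float:
    """Scheme 1 (assumed): base fee plus a piece rate per correctly solved task."""
    return base_fee + rate_per_task * tasks_solved


def payout_performance_independent(show_up_fee: float = 15.0) -> float:
    """Scheme 2 (assumed): a flat fee that does not depend on performance."""
    return show_up_fee


def payout_open_source_like(tasks_solved: int,
                            show_up_fee: float = 5.0,
                            donation_per_task: float = 2.0) -> tuple[float, float]:
    """Scheme 3 (assumed): the participant keeps only the show-up fee, while a
    performance-dependent amount is donated on their behalf, as one possible way
    to mimic open-source motivation."""
    donation = donation_per_task * tasks_solved
    return show_up_fee, donation


if __name__ == "__main__":
    solved = 4
    print(payout_performance_dependent(solved))   # 13.0
    print(payout_performance_independent())       # 15.0
    print(payout_open_source_like(solved))        # (5.0, 8.0)
```

In a between-subject design, each participant would be assigned to exactly one of these payout rules for the whole session, so differences in performance across groups can be attributed to the scheme rather than to within-person carry-over effects.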
Related papers
- Impact of Usability Mechanisms: A Family of Experiments on Efficiency, Effectiveness and User Satisfaction [0.5419296578793327]
We use a family of three experiments to increase the precision and generalization of the results in the baseline experiment.
We find that the Abort Operation and Preferences usability mechanisms appear to improve system usability a great deal with respect to efficiency, effectiveness and user satisfaction.
arXiv Detail & Related papers (2024-08-22T21:23:18Z) - On (Mis)perceptions of testing effectiveness: an empirical study [1.8026347864255505]
This research aims to discover how well the perceptions of the defect detection effectiveness of different techniques match their real effectiveness in the absence of prior experience.
In the original study, we conducted a controlled experiment with students applying two testing techniques and a code review technique.
At the end of the experiment, they take a survey to find out which technique they perceive to be most effective.
The results of the replicated study confirm the findings of the original study and suggest that participants' perceptions might be based not on their opinions about complexity or preferences for techniques but on how well they think that they have applied the techniques.
arXiv Detail & Related papers (2024-02-11T14:50:01Z) - Conducting A/B Experiments with a Scalable Architecture [0.6990493129893112]
A/B experiments are commonly used in research to compare the effects of changing one or more variables in two different experimental groups.
We propose a four-principle approach for developing a domain-agnostic software architecture to support A/B experiments.
arXiv Detail & Related papers (2023-09-23T18:38:28Z) - PyExperimenter: Easily distribute experiments and track results [63.871474825689134]
PyExperimenter is a tool to facilitate the setup, documentation, execution, and subsequent evaluation of results from an empirical study of algorithms.
It is intended to be used by researchers in the field of artificial intelligence, but is not limited to them.
arXiv Detail & Related papers (2023-01-16T10:43:02Z) - Benchopt: Reproducible, efficient and collaborative optimization benchmarks [67.29240500171532]
Benchopt is a framework to automate, reproduce and publish optimization benchmarks in machine learning.
Benchopt simplifies benchmarking for the community by providing an off-the-shelf tool for running, sharing and extending experiments.
arXiv Detail & Related papers (2022-06-27T16:19:24Z) - Towards Continuous Compounding Effects and Agile Practices in Educational Experimentation [2.7094829962573304]
This paper defines a framework for categorising different experimental processes.
The next generation of education technology successes will be heralded by embracing the full set of experimental processes.
arXiv Detail & Related papers (2021-11-17T13:10:51Z) - The Efficiency Misnomer [50.69516433266469]
We discuss common cost indicators, their advantages and disadvantages, and how they can contradict each other.
We demonstrate how incomplete reporting of cost indicators can lead to partial conclusions and a blurred or incomplete picture of the practical considerations of different models.
arXiv Detail & Related papers (2021-10-25T12:48:07Z) - StudyMe: A New Mobile App for User-Centric N-of-1 Trials [68.8204255655161]
N-of-1 trials are multi-crossover self-experiments that allow individuals to systematically evaluate the effect of interventions on their personal health goals.
We present StudyMe, an open-source mobile application that is freely available from https://play.google.com/store/apps/details?id=health.studyu.me.
arXiv Detail & Related papers (2021-07-31T20:43:36Z) - Scaling up Search Engine Audits: Practical Insights for Algorithm Auditing [68.8204255655161]
We set up experiments for eight search engines with hundreds of virtual agents placed in different regions.
We demonstrate the successful performance of our research infrastructure across multiple data collections.
We conclude that virtual agents are a promising avenue for monitoring the performance of algorithms over long periods of time.
arXiv Detail & Related papers (2021-06-10T15:49:58Z) - On Course, But Not There Yet: Enterprise Architecture Conformance and Benefits in Systems Development [1.5071503188049546]
Various claims have been made regarding the benefits that Enterprise Architecture (EA) delivers for both individual systems development projects and the organization as a whole.
This paper presents the statistical findings of a survey study (n=293) carried out to empirically test these claims.
arXiv Detail & Related papers (2020-08-23T14:00:55Z) - Cost-Sensitive Portfolio Selection via Deep Reinforcement Learning [100.73223416589596]
We propose a cost-sensitive portfolio selection method with deep reinforcement learning.
Specifically, a novel two-stream portfolio policy network is devised to extract both price series patterns and asset correlations.
A new cost-sensitive reward function is developed to maximize the accumulated return and constrain both costs via reinforcement learning.
arXiv Detail & Related papers (2020-03-06T06:28:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.