Exploring the Garden of Forking Paths in Empirical Software Engineering Research: A Multiverse Analysis
- URL: http://arxiv.org/abs/2512.08910v1
- Date: Tue, 09 Dec 2025 18:47:00 GMT
- Title: Exploring the Garden of Forking Paths in Empirical Software Engineering Research: A Multiverse Analysis
- Authors: Nathan Cassee, Robert Feldt
- Abstract summary: We conduct a so-called multiverse analysis on a published empirical SE paper. We identify nine pivotal analytical decisions with at least one equally defensible alternative. The overwhelming majority produced qualitatively different, and sometimes even opposite, findings.
- Score: 3.6324565773746147
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In empirical software engineering (SE) research, researchers have considerable freedom to decide how to process data, what operationalizations to use, and which statistical model to fit. Gelman and Loken refer to this freedom as leading to a "garden of forking paths". Although this freedom is often seen as an advantage, it also poses a threat to robustness and replicability: variations in analytical decisions, even when justifiable, can lead to divergent conclusions. To better understand this risk, we conducted a so-called multiverse analysis on a published empirical SE paper. The paper we picked is a Mining Software Repositories study, as MSR studies commonly use non-trivial statistical models to analyze post-hoc, observational data. In the study, we identified nine pivotal analytical decisions, each with at least one equally defensible alternative, and systematically reran all 3,072 resulting analysis pipelines on the original dataset. Interestingly, only 6 of these universes (<0.2%) reproduced the published results; the overwhelming majority produced qualitatively different, and sometimes even opposite, findings. This case study of a data analytical method commonly applied to empirical software engineering data reveals how methodological choices can exert a more profound influence on outcomes than is often acknowledged. We therefore advocate that SE researchers complement standard reporting with robustness checks across plausible analysis variants or, at least, explicitly justify each analytical decision. First, we propose a structured classification model to help categorize and improve the justification of methodological choices. Second, we show how the multiverse analysis is a practical tool in the methodological arsenal of SE researchers, one that can help produce more reliable, reproducible science.
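The core mechanic of a multiverse analysis is enumerating the Cartesian product of all defensible options for each analytical decision and rerunning the pipeline for each combination. A minimal sketch of that enumeration step follows; the decision names and per-decision option counts are hypothetical, since the abstract reports only that nine decisions combine into 3,072 pipelines.

```python
from itertools import product

# Hypothetical decisions and option counts; the paper reports only that
# nine pivotal decisions yield 3,072 analysis pipelines in total.
decisions = {
    "outlier_handling": ["keep", "winsorize"],
    "time_window": ["full_history", "recent_only"],
    "bot_filtering": ["include_bots", "exclude_bots"],
    "dependent_variable": ["raw", "log_transformed"],
    "aggregation_level": ["per_commit", "per_project"],
    "missing_data": ["drop", "impute"],
    "link_function": ["identity", "log"],
    "model_family": ["ols", "poisson", "negbin", "mixed"],
    "covariate_set": ["none", "size", "age", "activity", "size_age", "full"],
}

# Each element of `universes` fixes one option per decision, i.e. one
# complete analysis pipeline ("universe") to rerun on the original data.
universes = list(product(*decisions.values()))
print(len(universes))  # 2**7 * 4 * 6 = 3072
```

In a full analysis, each tuple in `universes` would parameterize one end-to-end rerun of data processing, operationalization, and model fitting, and the distribution of results across all runs is then reported alongside the original finding.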
Related papers
- Many AI Analysts, One Dataset: Navigating the Agentic Data Science Multiverse [22.927943525772857]
We show that fully autonomous AI analysts built on large language models (LLMs) can reproduce a similar structured analytic diversity cheaply and at scale. We show that the effects are steerable: reassigning the analyst persona or LLM shifts the distribution of outcomes even after excluding methodologically deficient runs.
arXiv Detail & Related papers (2026-02-21T04:10:21Z)
- Oops!... I did it again. Conclusion (In-)Stability in Quantitative Empirical Software Engineering: A Large-Scale Analysis [5.94721915761333]
Mining software repositories is a popular means to gain insights into a software project's evolution. This study investigates some threats to validity in complex tool pipelines for evolutionary software analyses.
arXiv Detail & Related papers (2025-10-08T10:11:39Z)
- Data Fusion for Partial Identification of Causal Effects [62.56890808004615]
We propose a novel partial identification framework that enables researchers to answer key questions: Is the causal effect positive or negative? And how severe must assumption violations be to overturn this conclusion? We apply our framework to the Project STAR study, which investigates the effect of classroom size on students' third-grade standardized test performance.
arXiv Detail & Related papers (2025-05-30T07:13:01Z)
- Model-free Methods for Event History Analysis and Efficient Adjustment (PhD Thesis) [55.2480439325792]
This thesis is a series of independent contributions to statistics unified by a model-free perspective. The first chapter elaborates on how a model-free perspective can be used to formulate flexible methods that leverage prediction techniques from machine learning. The second chapter studies the concept of local independence, which describes whether the evolution of one process is directly influenced by another.
arXiv Detail & Related papers (2025-02-11T19:24:09Z)
- Mitigating Omitted Variable Bias in Empirical Software Engineering [2.9506547907696006]
Omitted variable bias occurs when a statistical model leaves out variables that are relevant determinants of the effects under study. Omitted variable bias presents a significant threat to the validity of empirical research. This paper demonstrates a sequence of analysis steps that inform the design and execution of any empirical study in software engineering.
arXiv Detail & Related papers (2025-01-28T15:43:46Z)
- Detecting and Identifying Selection Structure in Sequential Data [53.24493902162797]
We argue that the selective inclusion of data points based on latent objectives is common in practical situations, such as music sequences.
We show that selection structure is identifiable without any parametric assumptions or interventional experiments.
We also propose a provably correct algorithm to detect and identify selection structures as well as other types of dependencies.
arXiv Detail & Related papers (2024-06-29T20:56:34Z)
- A Multiple kernel testing procedure for non-proportional hazards in factorial designs [4.358626952482687]
We propose a Multiple kernel testing procedure to infer survival data when several factors are of interest simultaneously.
Our method is able to deal with complex data and can be seen as an alternative to the omnipresent Cox model when assumptions such as proportionality cannot be justified.
arXiv Detail & Related papers (2022-06-15T01:53:49Z)
- Equivariance Allows Handling Multiple Nuisance Variables When Analyzing Pooled Neuroimaging Datasets [53.34152466646884]
In this paper, we show how bringing recent results on equivariant representation learning instantiated on structured spaces together with simple use of classical results on causal inference provides an effective practical solution.
We demonstrate how our model allows dealing with more than one nuisance variable under some assumptions and can enable analysis of pooled scientific datasets in scenarios that would otherwise entail removing a large portion of the samples.
arXiv Detail & Related papers (2022-03-29T04:54:06Z)
- Interpretable Multi-dataset Evaluation for Named Entity Recognition [110.64368106131062]
We present a general methodology for interpretable evaluation for the named entity recognition (NER) task.
The proposed evaluation method enables us to interpret the differences in models and datasets, as well as the interplay between them.
By making our analysis tool available, we make it easy for future researchers to run similar analyses and drive progress in this area.
arXiv Detail & Related papers (2020-11-13T10:53:27Z)
- Beyond Accuracy: ROI-driven Data Analytics of Empirical Data [3.5751623095926806]
This vision paper demonstrates that it is crucial to consider Return-on-Investment when performing Data Analytics.
arXiv Detail & Related papers (2020-09-14T14:49:37Z)
- Modeling Shared Responses in Neuroimaging Studies through MultiView ICA [94.31804763196116]
Group studies involving large cohorts of subjects are important to draw general conclusions about brain functional organization.
We propose a novel MultiView Independent Component Analysis model for group studies, where data from each subject are modeled as a linear combination of shared independent sources plus noise.
We demonstrate the usefulness of our approach first on fMRI data, where our model demonstrates improved sensitivity in identifying common sources among subjects.
arXiv Detail & Related papers (2020-06-11T17:29:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.