A Second Look at the Impact of Passive Voice Requirements on Domain
Modeling: Bayesian Reanalysis of an Experiment
- URL: http://arxiv.org/abs/2402.10800v1
- Date: Fri, 16 Feb 2024 16:24:00 GMT
- Authors: Julian Frattini, Davide Fucci, Richard Torkar, Daniel Mendez
- Abstract summary: We reanalyze the only known controlled experiment investigating the impact of passive voice on the subsequent activity of domain modeling.
Our results reveal that the effects observed by the original authors are much less significant than previously assumed.
- Score: 4.649794383775257
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The quality of requirements specifications may impact subsequent, dependent
software engineering (SE) activities. However, empirical evidence of this
impact remains scarce and often superficial, as studies abstract too much from
the phenomena under investigation. Two of these abstractions stem from the
lack of causal-inference frameworks and from frequentist methods that reduce
complex data to binary results. In this study, we aim to demonstrate (1) the
use of a causal framework and (2) the contrast between frequentist methods and
more sophisticated Bayesian statistics for causal inference. To this end, we
reanalyze the only known controlled experiment investigating the impact of
passive voice on the subsequent activity of domain modeling. We follow a
framework for statistical causal inference and employ Bayesian data analysis
methods to re-investigate the hypotheses of the original study. Our results
reveal that the effects observed by the original authors are much less
significant than previously assumed. This study supports the recent call
to action in SE research to adopt Bayesian data analysis, including causal
frameworks and Bayesian statistics, for more sophisticated causal inference.
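The frequentist-versus-Bayesian contrast described in the abstract can be sketched in a few lines of Python. The data, model, and numbers below are purely hypothetical illustrations (not the study's actual data or analysis): a frequentist test collapses a group comparison into a single statistic that a significance threshold turns into a binary verdict, while even a very simple conjugate Bayesian model yields a full posterior over the effect size.

```python
import math
import statistics

# Hypothetical domain-model quality scores (NOT the original study's data).
passive = [6.1, 5.8, 6.4, 5.5, 6.0, 5.7, 6.2, 5.9]  # passive-voice requirements
active  = [6.5, 6.8, 6.2, 7.0, 6.6, 6.4, 6.9, 6.7]  # active-voice requirements

# Frequentist view: collapse the comparison into one Welch t statistic,
# which a significance threshold then reduces to significant / not significant.
se = math.sqrt(statistics.variance(passive) / len(passive)
               + statistics.variance(active) / len(active))
t_stat = (statistics.mean(active) - statistics.mean(passive)) / se

# Bayesian view (simplified known-variance normal model with a flat prior on
# the mean difference): the posterior over the effect is
# Normal(observed difference, se^2), so instead of a yes/no answer we can
# report a full distribution, e.g. a 95% credible interval for the effect.
delta = statistics.mean(active) - statistics.mean(passive)
credible_interval = (delta - 1.96 * se, delta + 1.96 * se)

print(f"t = {t_stat:.2f}, posterior mean effect = {delta:.2f}")
print(f"95% credible interval: ({credible_interval[0]:.2f}, {credible_interval[1]:.2f})")
```

The actual reanalysis uses far richer Bayesian models; this sketch only illustrates the methodological point that a posterior carries more information than a binary significance result.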
Related papers
- Causal Fine-Tuning and Effect Calibration of Non-Causal Predictive Models [1.3124513975412255]
This paper proposes techniques to enhance the performance of non-causal models for causal inference using data from randomized experiments.
In domains like advertising, customer retention, and precision medicine, non-causal models that predict outcomes under no intervention are often used to score individuals and rank them according to the expected effectiveness of an intervention.
arXiv Detail & Related papers (2024-06-13T20:18:16Z)
- Identifiable Latent Polynomial Causal Models Through the Lens of Change [82.14087963690561]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z)
- Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results conducted on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z)
- A Causal Framework for Decomposing Spurious Variations [68.12191782657437]
We develop tools for decomposing spurious variations in Markovian and Semi-Markovian models.
We prove the first results that allow a non-parametric decomposition of spurious effects.
The described approach has several applications, ranging from explainable and fair AI to questions in epidemiology and medicine.
arXiv Detail & Related papers (2023-06-08T09:40:28Z)
- An evaluation framework for comparing causal inference models [3.1372269816123994]
We use the proposed evaluation methodology to compare several state-of-the-art causal effect estimation models.
The main motivation behind this approach is to eliminate the influence of a small number of instances or simulations on the benchmarking process.
arXiv Detail & Related papers (2022-08-31T21:04:20Z)
- Valid Inference After Causal Discovery [73.87055989355737]
We develop tools for valid post-causal-discovery inference.
We show that a naive combination of causal discovery and subsequent inference algorithms leads to highly inflated miscoverage rates.
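The inflated miscoverage described in this entry can be illustrated with a small, self-contained simulation (a toy construction, not the paper's actual setting): selecting the largest of several estimated effects and then forming a naive 95% confidence interval from the same data yields coverage well below the nominal level.

```python
import math
import random

random.seed(42)

def naive_ci_covers(n=30, k=5):
    # Estimate k effects that are all truly zero, each from n noisy samples.
    estimates = [sum(random.gauss(0.0, 1.0) for _ in range(n)) / n
                 for _ in range(k)]
    se = 1.0 / math.sqrt(n)           # standard error under known unit variance
    chosen = max(estimates, key=abs)  # data-driven selection ("discovery")
    # Naive 95% CI that ignores the selection step.
    return chosen - 1.96 * se <= 0.0 <= chosen + 1.96 * se

trials = 2000
coverage = sum(naive_ci_covers() for _ in range(trials)) / trials
print(f"empirical coverage after selection: {coverage:.3f}")  # well below 0.95
```

Because the selection step picks the most extreme estimate, the nominal 95% interval covers the true (zero) effect only about 0.95^k of the time in this toy model, which is the kind of distortion that valid post-selection inference procedures are designed to correct.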
arXiv Detail & Related papers (2022-08-11T17:40:45Z)
- Active Bayesian Causal Inference [72.70593653185078]
We propose Active Bayesian Causal Inference (ABCI), a fully-Bayesian active learning framework for integrated causal discovery and reasoning.
ABCI jointly infers a posterior over causal models and queries of interest.
We show that our approach is more data-efficient than several baselines that only focus on learning the full causal graph.
arXiv Detail & Related papers (2022-06-04T22:38:57Z)
- Causal Identification with Additive Noise Models: Quantifying the Effect of Noise [5.037636944933989]
This work investigates the impact of different noise levels on the ability of Additive Noise Models to identify the direction of the causal relationship.
We use an exhaustive range of models where the level of additive noise gradually changes from 1% to 10000% of the causes' noise level.
The results of the experiments show that ANM methods can fail to capture the true causal direction for some levels of noise.
arXiv Detail & Related papers (2021-10-15T13:28:33Z)
- A Survey on Causal Inference [64.45536158710014]
Causal inference is a critical research topic across many domains, such as statistics, computer science, education, public policy and economics.
Various causal effect estimation methods for observational data have sprung up.
arXiv Detail & Related papers (2020-02-05T21:35:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.