A Causal Framework for Evaluating Deferring Systems
- URL: http://arxiv.org/abs/2405.18902v1
- Date: Wed, 29 May 2024 09:03:44 GMT
- Title: A Causal Framework for Evaluating Deferring Systems
- Authors: Filippo Palomba, Andrea Pugnana, José Manuel Alvarez, Salvatore Ruggieri
- Abstract summary: We evaluate the impact of a deferring strategy on system accuracy through a causal lens.
This allows us to identify the causal impact of the deferring strategy on predictive accuracy.
We empirically evaluate our approach on synthetic and real datasets for seven deferring systems from the literature.
- Score: 13.90573504537727
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deferring systems extend supervised Machine Learning (ML) models with the possibility to defer predictions to human experts. However, evaluating the impact of a deferring strategy on system accuracy is still an overlooked area. This paper fills this gap by evaluating deferring systems through a causal lens. We link the potential outcomes framework for causal inference with deferring systems. This allows us to identify the causal impact of the deferring strategy on predictive accuracy. We distinguish two scenarios. In the first one, we can access both the human and the ML model predictions for the deferred instances. In such a case, we can identify the individual causal effects for deferred instances and aggregates of them. In the second scenario, only human predictions are available for the deferred instances. In this case, we can resort to regression discontinuity design to estimate a local causal effect. We empirically evaluate our approach on synthetic and real datasets for seven deferring systems from the literature.
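The two scenarios in the abstract can be sketched on synthetic data (hypothetical variable names; this is an illustrative sketch, not the authors' implementation): when both human and model predictions are available for deferred instances, the individual causal effect of deferral on correctness is the difference in their correctness indicators; when only human predictions exist on the deferred side, a simple regression-discontinuity-style comparison around the deferral threshold gives a local effect estimate.

```python
# Sketch of the paper's two evaluation scenarios on synthetic data.
# All names (conf, tau, ice, ...) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
conf = rng.uniform(0, 1, n)       # model confidence score
tau = 0.5                         # deferral threshold: defer if conf < tau
deferred = conf < tau
y = rng.integers(0, 2, n)         # true labels

# Model accuracy grows with confidence; human accuracy is flat at 0.8.
model_pred = np.where(rng.uniform(0, 1, n) < 0.3 + 0.6 * conf, y, 1 - y)
human_pred = np.where(rng.uniform(0, 1, n) < 0.8, y, 1 - y)

# Scenario 1: both predictions observed for deferred instances.
# Individual causal effect of deferring instance i on correctness:
ice = (human_pred == y).astype(int) - (model_pred == y).astype(int)
ate_deferred = ice[deferred].mean()   # aggregate effect over deferred set

# Scenario 2: only human predictions on the deferred side.
# Compare accuracy just below vs. just above the threshold (bandwidth h),
# a crude local regression-discontinuity estimate.
h = 0.1
left = deferred & (conf > tau - h)    # deferred: human answers
right = ~deferred & (conf < tau + h)  # not deferred: model answers
local_effect = ((human_pred[left] == y[left]).mean()
                - (model_pred[right] == y[right]).mean())
print(ate_deferred, local_effect)
```

In practice the RDD estimate would use local polynomial regression with a data-driven bandwidth rather than a raw difference in means; the snippet only conveys the identification idea.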
Related papers
- Controlling Counterfactual Harm in Decision Support Systems Based on Prediction Sets [14.478233576808876]
In decision support systems based on prediction sets, there is a trade-off between accuracy and counterfactual harm.
We show that under a natural, unverifiable, monotonicity assumption, we can estimate how frequently a system may cause harm using predictions made by humans on their own.
We also show that, under a weaker assumption, which can be verified, we can bound how frequently a system may cause harm again using only predictions made by humans on their own.
arXiv Detail & Related papers (2024-06-10T18:00:00Z) - Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z) - Neural Causal Models for Counterfactual Identification and Estimation [62.30444687707919]
We study the evaluation of counterfactual statements through neural models.
First, we show that neural causal models (NCMs) are expressive enough.
Second, we develop an algorithm for simultaneously identifying and estimating counterfactual distributions.
arXiv Detail & Related papers (2022-09-30T18:29:09Z) - Causal Scoring: A Framework for Effect Estimation, Effect Ordering, and Effect Classification [11.460911023224337]
Causal scoring entails the estimation of scores that support decision making by providing insights into causal effects.
We present three valuable causal interpretations of these scores: effect estimation (EE), effect ordering (EO), and effect classification (EC)
arXiv Detail & Related papers (2022-06-25T02:15:22Z) - Undersmoothing Causal Estimators with Generative Trees [0.0]
Inferring individualised treatment effects from observational data can unlock the potential for targeted interventions.
It is, however, hard to infer these effects from observational data.
In this paper, we explore a novel generative tree based approach that tackles model misspecification directly.
arXiv Detail & Related papers (2022-03-16T11:59:38Z) - Empirical Estimates on Hand Manipulation are Recoverable: A Step Towards Individualized and Explainable Robotic Support in Everyday Activities [80.37857025201036]
A key challenge for robotic systems is to figure out the behavior of another agent.
Drawing correct inferences is especially challenging when (confounding) factors are not controlled experimentally.
We propose equipping robots with the necessary tools to conduct observational studies on people.
arXiv Detail & Related papers (2022-01-27T22:15:56Z) - Causal Knowledge Guided Societal Event Forecasting [24.437437565689393]
We introduce a deep learning framework that integrates causal effect estimation into event forecasting.
Two robust learning modules, including a feature reweighting module and an approximate loss, are introduced to enable prior knowledge injection.
arXiv Detail & Related papers (2021-12-10T17:41:02Z) - Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state-of-the-art on real world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z) - Causal Discovery in Physical Systems from Videos [123.79211190669821]
Causal discovery is at the core of human cognition.
We consider the task of causal discovery from videos in an end-to-end fashion without supervision on the ground-truth graph structure.
arXiv Detail & Related papers (2020-07-01T17:29:57Z) - Counterfactual Predictions under Runtime Confounding [74.90756694584839]
We study the counterfactual prediction task in the setting where all relevant factors are captured in the historical data.
We propose a doubly-robust procedure for learning counterfactual prediction models in this setting.
arXiv Detail & Related papers (2020-06-30T15:49:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.