Defining Expertise: Applications to Treatment Effect Estimation
- URL: http://arxiv.org/abs/2403.00694v1
- Date: Fri, 1 Mar 2024 17:30:49 GMT
- Title: Defining Expertise: Applications to Treatment Effect Estimation
- Authors: Alihan Hüyük, Qiyao Wei, Alicia Curth, Mihaela van der Schaar
- Abstract summary: We argue that expertise - particularly the type of expertise the decision-makers of a domain are likely to have - can be informative in designing and selecting methods for treatment effect estimation.
We define two types of expertise, predictive and prognostic, and demonstrate empirically that: (i) the prominent type of expertise in a domain significantly influences the performance of different methods in treatment effect estimation, and (ii) it is possible to predict the type of expertise present in a dataset.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decision-makers are often experts in their domain and take actions based on
their domain knowledge. Doctors, for instance, may prescribe treatments by
predicting the likely outcome of each available treatment. Actions of an expert
thus naturally encode part of their domain knowledge, and can help make
inferences within the same domain: knowing that doctors try to prescribe the best
treatment for their patients, we can infer that treatments prescribed more frequently
are likely to be more effective. Yet in machine learning, the fact that most
decision-makers are experts is often overlooked, and "expertise" is seldom
leveraged as an inductive bias. This is especially true for the literature on
treatment effect estimation, where often the only assumption made about actions
is that of overlap. In this paper, we argue that expertise - particularly the
type of expertise the decision-makers of a domain are likely to have - can be
informative in designing and selecting methods for treatment effect estimation.
We formally define two types of expertise, predictive and prognostic, and
demonstrate empirically that: (i) the prominent type of expertise in a domain
significantly influences the performance of different methods in treatment
effect estimation, and (ii) it is possible to predict the type of expertise
present in a dataset, which can provide a quantitative basis for model
selection.
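The distinction between predictive expertise (acting on the anticipated treatment effect) and prognostic expertise (acting on the anticipated untreated outcome) can be made concrete with a small simulation. The following is an illustrative sketch, not the authors' code: the data-generating process, variable names, and the correlation-based "expertise detector" are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 2))

y0 = x[:, 0]    # prognosis: the outcome without treatment
tau = x[:, 1]   # individual treatment effect
y1 = y0 + tau   # outcome with treatment

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A predictive expert treats more often when the effect tau is large.
a_pred = rng.binomial(1, sigmoid(3.0 * tau))
# A prognostic expert treats more often when the prognosis y0 is poor.
a_prog = rng.binomial(1, sigmoid(-3.0 * y0))

# A crude detector of expertise type: which quantity do actions track?
def tracks(a, signal):
    return abs(np.corrcoef(a, signal)[0, 1])

for name, a in [("predictive", a_pred), ("prognostic", a_prog)]:
    print(name, "| corr with tau:", round(tracks(a, tau), 2),
          "| corr with y0:", round(tracks(a, y0), 2))
```

Under this toy setup, each policy correlates strongly with the quantity it is based on and only weakly with the other, which is the intuition behind predicting the type of expertise present in a dataset.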
Related papers
- The Blessings of Multiple Treatments and Outcomes in Treatment Effect Estimation [53.81860494566915]
Existing studies leveraged proxy variables or multiple treatments to adjust for confounding bias.
In many real-world scenarios, there is greater interest in studying the effects on multiple outcomes.
We show that parallel studies of multiple outcomes involved in this setting can assist each other in causal identification.
arXiv Detail & Related papers (2023-09-29T14:33:48Z)
- Auditing for Human Expertise [13.740812888680614]
We develop a statistical framework under which we can pose this question as a natural hypothesis test.
We propose a simple procedure which tests whether expert predictions are statistically independent from the outcomes of interest.
A rejection of our test thus suggests that human experts may add value to any algorithm trained on the available data.
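Such a test can be sketched as a permutation test of independence between expert predictions and realized outcomes. This is a hypothetical illustration of the general idea, not the paper's procedure: the simulated data, the agreement statistic, and all variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
outcome = rng.binomial(1, 0.5, size=n)
# An informative expert: agrees with the outcome 70% of the time.
expert_pred = np.where(rng.random(n) < 0.7, outcome, 1 - outcome)

def agreement(pred, y):
    return np.mean(pred == y)

observed = agreement(expert_pred, outcome)
# Null distribution: shuffling outcomes destroys any real dependence.
null = np.array([agreement(expert_pred, rng.permutation(outcome))
                 for _ in range(2000)])
p_value = np.mean(null >= observed)
print(f"observed agreement={observed:.2f}, p={p_value:.4f}")
```

A small p-value rejects independence, suggesting the expert's predictions carry information beyond what chance agreement would produce.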
arXiv Detail & Related papers (2023-06-02T16:15:24Z)
- Benchmarking Heterogeneous Treatment Effect Models through the Lens of Interpretability [82.29775890542967]
Estimating personalized effects of treatments is a complex, yet pervasive problem.
Recent developments in the machine learning literature on heterogeneous treatment effect estimation gave rise to many sophisticated, but opaque, tools.
We use post-hoc feature importance methods to identify features that influence the model's predictions.
arXiv Detail & Related papers (2022-06-16T17:59:05Z)
- Disentangled Counterfactual Recurrent Networks for Treatment Effect Inference over Time [71.30985926640659]
We introduce the Disentangled Counterfactual Recurrent Network (DCRN), a sequence-to-sequence architecture that estimates treatment outcomes over time.
With an architecture that is completely inspired by the causal structure of treatment influence over time, we advance forecast accuracy and disease understanding.
We demonstrate that DCRN outperforms current state-of-the-art methods in forecasting treatment responses, on both real and simulated data.
arXiv Detail & Related papers (2021-12-07T16:40:28Z)
- A Machine Learning Framework Towards Transparency in Experts' Decision Quality [0.0]
In many important settings, transparency in experts' decision quality is rarely possible because ground truth data for evaluating the experts' decisions is costly and available only for a limited set of decisions.
We first formulate the problem of estimating experts' decision accuracy in this setting and then develop a machine-learning-based framework to address it.
Our method effectively leverages both abundant historical data on workers' past decisions, and scarce decision instances with ground truth information.
arXiv Detail & Related papers (2021-10-21T18:50:40Z)
- Improving the compromise between accuracy, interpretability and personalization of rule-based machine learning in medical problems [0.08594140167290096]
We introduce a new component to predict if a given rule will be correct or not for a particular patient, which introduces personalization into the procedure.
Validation results on three public clinical datasets show that it also increases the predictive performance of the selected set of rules.
arXiv Detail & Related papers (2021-06-15T01:19:04Z)
- Leveraging Expert Consistency to Improve Algorithmic Decision Support [62.61153549123407]
We explore the use of historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap.
We propose an influence function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert.
Our empirical evaluation, using simulations in a clinical setting and real-world data from the child welfare domain, indicates that the proposed approach successfully narrows the construct gap.
arXiv Detail & Related papers (2021-01-24T05:40:29Z)
- Learning "What-if" Explanations for Sequential Decision-Making [92.8311073739295]
Building interpretable parameterizations of real-world decision-making on the basis of demonstrated behavior is essential.
We propose learning explanations of expert decisions by modeling their reward function in terms of preferences with respect to "what if" outcomes.
We highlight the effectiveness of our batch, counterfactual inverse reinforcement learning approach in recovering accurate and interpretable descriptions of behavior.
arXiv Detail & Related papers (2020-07-02T14:24:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.