Utilising Deep Learning to Elicit Expert Uncertainty
- URL: http://arxiv.org/abs/2501.11813v1
- Date: Tue, 21 Jan 2025 01:36:12 GMT
- Title: Utilising Deep Learning to Elicit Expert Uncertainty
- Authors: Julia R. Falconer, Eibe Frank, Devon L. L. Polaschek, Chaitanya Joshi
- Abstract summary: We show how analysts can adopt a deep learning approach to utilize the method proposed in [14] with the actual information experts use.
We provide an overview of deep learning models that can effectively model expert decision-making to elicit distributions that capture expert uncertainty.
- Score: 2.9686400658670578
- Abstract: Recent work [14] has introduced a method for prior elicitation that utilizes records of expert decisions to infer a prior distribution. While this method provides a promising approach to eliciting expert uncertainty, it has only been demonstrated using tabular data, which may not entirely represent the information used by experts to make decisions. In this paper, we demonstrate how analysts can adopt a deep learning approach to utilize the method proposed in [14] with the actual information experts use. We provide an overview of deep learning models that can effectively model expert decision-making to elicit distributions that capture expert uncertainty and present an example examining the risk of colon cancer to show in detail how these models can be used.
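The core idea lends itself to a compact sketch: fit a flexible decision model to records of expert decisions, then sweep it over hypothetical cases and read the expert's uncertainty off the predicted decision probabilities. The snippet below is a minimal illustration with synthetic data and a logistic model standing in for the deep models surveyed in the paper; the feature, threshold, and training setup are all made-up assumptions, not the paper's actual colon-cancer example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for records of expert decisions: each record pairs a
# one-dimensional case feature x (e.g. a risk score) with a binary expert
# decision d. Experts decide "1" more often as x exceeds a noisy threshold.
x = rng.uniform(0.0, 10.0, size=500)
d = (x + rng.normal(0.0, 1.0, size=500) > 5.0).astype(float)

# A minimal differentiable decision model (logistic regression trained by
# full-batch gradient descent) standing in for a deep network; a real
# application would swap in an architecture suited to the expert's actual
# inputs (images, clinical notes, etc.).
w, b = 0.0, 0.0
lr = 0.1
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= lr * np.mean((p - d) * x)
    b -= lr * np.mean(p - d)

# Sweeping the fitted decision probabilities over hypothetical cases traces
# a curve that captures the expert's uncertainty about where the decision
# boundary lies: steep = confident expert, shallow = uncertain expert.
grid = np.linspace(0.0, 10.0, 101)
elicited = 1.0 / (1.0 + np.exp(-(w * grid + b)))
```

With a positive fitted slope the curve is monotone in the risk score, so it can be read as a CDF-style summary of the expert's threshold uncertainty.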
Related papers
- Learning to Defer for Causal Discovery with Imperfect Experts
We propose L2D-CD, a method for gauging the correctness of expert recommendations and optimally combining them with data-driven causal discovery results.
We evaluate L2D-CD on the canonical Tübingen pairs dataset and demonstrate its superior performance compared to both the causal discovery method and the expert used in isolation.
arXiv Detail & Related papers (2025-02-18T18:55:53Z) - Expert-Agnostic Learning to Defer
We introduce EA-L2D: Expert-Agnostic Learning to Defer, a novel L2D framework that leverages a Bayesian approach to model expert behaviour.
We observe performance gains of 1-16% over the previous state of the art for seen experts and 4-28% for unseen experts in settings with high expert diversity.
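The Bayesian flavour of this idea can be sketched generically: maintain a posterior over each expert's accuracy and defer only when that posterior beats the model's confidence. The prior and update below are generic textbook choices (a Beta-Bernoulli model), not EA-L2D's actual formulation, and all numbers are made up.

```python
def posterior_accuracy(correct, total, a=1.0, b=1.0):
    """Posterior mean accuracy under a Beta(a, b) prior updated with
    `correct` successes out of `total` observed expert decisions."""
    return (a + correct) / (a + b + total)

# An unseen expert falls back to the prior mean (0.5 under a flat prior),
# while a seen expert's estimate sharpens with evidence.
unseen = posterior_accuracy(correct=0, total=0)    # 0.5
seen = posterior_accuracy(correct=45, total=50)    # 46/52, about 0.885

# Defer to the seen expert only when their estimated accuracy exceeds the
# model's confidence on the case (0.7 is a made-up value).
model_conf = 0.7
defer = seen > model_conf
```

The same rule handles seen and unseen experts uniformly, which is the expert-agnostic property the framework targets.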
arXiv Detail & Related papers (2025-02-14T19:59:25Z) - On the Biased Assessment of Expert Finding Systems
In large organisations, identifying experts on a given topic is crucial in leveraging the internal knowledge spread across teams and departments.
This case study provides an analysis of how these recommendations can impact the evaluation of expert finding systems.
We show that system-validated annotations lead to overestimated performance of traditional term-based retrieval models.
We also augment knowledge areas with synonyms to uncover a strong bias towards literal mentions of their constituent words.
arXiv Detail & Related papers (2024-10-07T13:19:08Z) - Offline Imitation Learning with Model-based Reverse Augmentation
We propose a novel model-based framework, called offline Imitation Learning with Self-paced Reverse Augmentation.
Specifically, we build a reverse dynamic model from the offline demonstrations, which can efficiently generate trajectories leading to the expert-observed states.
We then use reinforcement learning to learn from the augmented trajectories, transitioning from expert-unobserved states to expert-observed states.
arXiv Detail & Related papers (2024-06-18T12:27:02Z) - Sequential Decision Making with Expert Demonstrations under Unobserved Heterogeneity
We study the problem of online sequential decision-making given auxiliary demonstrations from experts who made their decisions based on unobserved contextual information.
This setting arises in many application domains, such as self-driving cars, healthcare, and finance.
We propose the Experts-as-Priors algorithm (ExPerior) to establish an informative prior distribution over the learner's decision-making problem.
arXiv Detail & Related papers (2024-04-10T18:00:17Z) - A Unified End-to-End Retriever-Reader Framework for Knowledge-based VQA
This paper presents a unified end-to-end retriever-reader framework towards knowledge-based VQA.
We shed light on the multi-modal implicit knowledge in vision-language pre-training models and mine its potential for knowledge reasoning.
Our scheme not only provides guidance for knowledge retrieval, but also drops instances that are potentially error-prone for question answering.
arXiv Detail & Related papers (2022-06-30T02:35:04Z) - Online Learning with Uncertain Feedback Graphs
The relationship among experts can be captured by a feedback graph, which can be used to assist the learner's decision making.
In practice, the nominal feedback graph often entails uncertainties, which renders it impossible to reveal the actual relationship among experts.
The present work studies various cases of potential uncertainties, and develops novel online learning algorithms to deal with them.
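As a baseline for the uncertain-graph setting the paper studies, the certain-graph case can be sketched with an Exp3-style exponential-weights learner: losses revealed through the graph are importance-weighted by their probability of being observed. The graph, loss distributions, and step size below are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Feedback graph over three experts (row i lists whose losses are revealed
# when expert i is chosen); self-loops mean the played expert's own loss
# is always observed.
G = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]], dtype=float)

K, T, eta = 3, 1000, 0.1
weights = np.ones(K)
loss_means = np.array([0.6, 0.5, 0.2])   # expert 2 is best on average

for t in range(T):
    p = weights / weights.sum()
    i = rng.choice(K, p=p)
    losses = rng.binomial(1, loss_means).astype(float)
    # Importance-weighted loss estimates: expert j's loss is revealed
    # with probability (p @ G)[j], the chance the drawn expert links to j.
    reveal_prob = np.maximum(p @ G, 1e-12)
    est = G[i] * losses / reveal_prob
    weights *= np.exp(-eta * est)
    weights /= weights.max()   # rescale to avoid numerical underflow

p = weights / weights.sum()
```

When the graph itself is uncertain, the reveal probabilities are no longer known exactly, which is the gap the paper's algorithms address.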
arXiv Detail & Related papers (2021-06-15T21:21:30Z) - Decision Rule Elicitation for Domain Adaptation
Human-in-the-loop machine learning is widely used in artificial intelligence (AI) to elicit labels from experts.
In this work, we allow experts to additionally produce decision rules describing their decision-making.
We show that decision rule elicitation improves domain adaptation of the algorithm and helps to propagate expert knowledge to the AI model.
arXiv Detail & Related papers (2021-02-23T08:07:22Z) - Leveraging Expert Consistency to Improve Algorithmic Decision Support
We explore the use of historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap.
We propose an influence function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert.
Our empirical evaluation, using simulations in a clinical setting and real-world data from the child welfare domain, indicates that the proposed approach successfully narrows the construct gap.
arXiv Detail & Related papers (2021-01-24T05:40:29Z) - Consistent Estimators for Learning to Defer to an Expert
We show how to learn predictors that can either predict or choose to defer the decision to a downstream expert.
We show the effectiveness of our approach on a variety of experimental tasks.
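The predict-or-defer output structure can be illustrated with a deterministic toy. The paper's contribution, a consistent surrogate loss for training this choice jointly, is not reproduced here; the threshold rule and all numbers below are made up.

```python
import numpy as np

# Model predictions and confidences on five toy cases (all numbers made up).
model_pred = np.array([0, 1, 1, 0, 1])
model_conf = np.array([0.95, 0.55, 0.90, 0.60, 0.51])

# The downstream expert's decisions on the same cases, plus an assumed
# estimate of their overall accuracy.
expert_pred = np.array([0, 0, 1, 1, 0])
expert_acc = 0.8

truth = np.array([0, 0, 1, 1, 1])

# Defer whenever the model is less confident than the expert is accurate.
defer = model_conf < expert_acc
final = np.where(defer, expert_pred, model_pred)

model_only_acc = float(np.mean(model_pred == truth))   # 0.6
system_acc = float(np.mean(final == truth))            # 0.8
```

Here deferral lifts system accuracy above the model alone even though the expert is imperfect, which is the behaviour the learned deferral rule is trained to exploit.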
arXiv Detail & Related papers (2020-06-02T18:21:38Z) - Learning From Multiple Experts: Self-paced Knowledge Distillation for Long-tailed Classification
We propose a self-paced knowledge distillation framework, termed Learning From Multiple Experts (LFME).
We refer to these models as 'Experts', and the proposed LFME framework aggregates the knowledge from multiple 'Experts' to learn a unified student model.
We conduct extensive experiments and demonstrate that our method achieves superior performance compared to state-of-the-art methods.
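The aggregation step can be sketched as standard temperature-scaled distillation against the experts' averaged soft targets; the logits and temperature below are made-up numbers, and the self-paced scheduling that LFME adds on top is omitted.

```python
import numpy as np

def softmax(z, temp=1.0):
    z = np.asarray(z, dtype=float) / temp
    z = z - z.max()                  # stabilise before exponentiating
    e = np.exp(z)
    return e / e.sum()

# Hypothetical logits from two 'Expert' models on a single example; LFME's
# experts are trained on different label-frequency subsets, but these
# numbers are made up purely for illustration.
expert_logits = [np.array([2.0, 0.5, -1.0]),
                 np.array([1.5, 1.0, -0.5])]
student_logits = np.array([0.2, 0.1, 0.0])

temp = 2.0   # distillation temperature softens the targets
teacher = np.mean([softmax(z, temp) for z in expert_logits], axis=0)
student = softmax(student_logits, temp)

# KL(teacher || student): the distillation term minimised with respect to
# the student's parameters (a full pipeline would backpropagate through it).
kd_loss = float(np.sum(teacher * (np.log(teacher) - np.log(student))))
```

Averaging the experts' soft targets is the simplest aggregation choice; the unified student then learns a single model covering all experts' regimes.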
arXiv Detail & Related papers (2020-01-06T12:57:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.