Towards Unbiased and Accurate Deferral to Multiple Experts
- URL: http://arxiv.org/abs/2102.13004v1
- Date: Thu, 25 Feb 2021 17:08:39 GMT
- Title: Towards Unbiased and Accurate Deferral to Multiple Experts
- Authors: Vijay Keswani, Matthew Lease, Krishnaram Kenthapadi
- Abstract summary: We propose a framework that simultaneously learns a classifier and a deferral system, with the deferral system choosing to defer to one or more human experts.
We test our framework on a synthetic dataset and a content moderation dataset with biased synthetic experts, and show that it significantly improves the accuracy and fairness of the final predictions.
- Score: 19.24068936057053
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning models are often implemented in cohort with humans in the
pipeline, with the model having an option to defer to a domain expert in cases
where it has low confidence in its inference. Our goal is to design mechanisms
for ensuring accuracy and fairness in such prediction systems that combine
machine learning model inferences and domain expert predictions. Prior work on
"deferral systems" in classification settings has focused on the setting of a
pipeline with a single expert and aimed to accommodate the inaccuracies and
biases of this expert to simultaneously learn an inference model and a deferral
system. Our work extends this framework to settings where multiple experts are
available, with each expert having their own domain of expertise and biases. We
propose a framework that simultaneously learns a classifier and a deferral
system, with the deferral system choosing to defer to one or more human experts
for inputs where the classifier has low confidence. We test our
framework on a synthetic dataset and a content moderation dataset with biased
synthetic experts, and show that it significantly improves the accuracy and
fairness of the final predictions, compared to the baselines. We also collect
crowdsourced labels for the content moderation task to construct a real-world
dataset for the evaluation of hybrid machine-human frameworks and show that our
proposed learning framework outperforms baselines on this real-world dataset as
well.
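To make the proposed setup concrete, below is a minimal PyTorch sketch of a classifier trained jointly with a deferrer that weighs the classifier against several (possibly biased) human experts. The module names, the softmax committee weighting, and the 0-1 loss assigned to expert predictions are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of joint classifier + multi-expert deferral training (PyTorch).
# All names and the exact loss are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassifierWithDeferral(nn.Module):
    def __init__(self, in_dim, n_classes, n_experts, hidden=64):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_classes))
        # Slot 0 scores the classifier itself; slots 1..m score the human experts.
        self.deferrer = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_experts + 1))

    def forward(self, x):
        return self.classifier(x), self.deferrer(x)

def joint_loss(cls_logits, defer_logits, expert_preds, y):
    """expert_preds: (batch, n_experts) hard labels produced by the experts."""
    weights = F.softmax(defer_logits, dim=1)                        # (batch, m+1)
    per_option = [F.cross_entropy(cls_logits, y, reduction="none")] # slot 0: model
    for j in range(expert_preds.shape[1]):
        # A (possibly biased) expert incurs loss whenever it disagrees with the label.
        per_option.append((expert_preds[:, j] != y).float())
    per_option = torch.stack(per_option, dim=1)                     # (batch, m+1)
    # Expected loss under the deferrer's weighting: end-to-end training pushes the
    # deferrer toward whichever predictor is reliable for each region of the input.
    return (weights * per_option).sum(dim=1).mean()
```

At test time, taking the arg-max of the deferrer's scores for each input decides whether the model or a particular expert issues the final label; a group-fairness penalty on those final decisions can be added to the same objective, which is the general lever for improving both accuracy and fairness in this hybrid setting.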
Related papers
- Mixture of Efficient Diffusion Experts Through Automatic Interval and Sub-Network Selection [63.96018203905272]
We propose to reduce the sampling cost by pruning a pretrained diffusion model into a mixture of efficient experts.
We demonstrate the effectiveness of our method, DiffPruning, across several datasets.
arXiv Detail & Related papers (2024-09-23T21:27:26Z)
- Diversified Ensembling: An Experiment in Crowdsourced Machine Learning [18.192916651221882]
In arXiv:2201.10408, the authors developed an alternative crowdsourcing framework in the context of fair machine learning.
We present the first medium-scale experimental evaluation of this framework, with 46 participating teams attempting to generate models.
arXiv Detail & Related papers (2024-02-16T16:20:43Z)
- AMEND: A Mixture of Experts Framework for Long-tailed Trajectory Prediction [6.724750970258851]
We propose a modular model-agnostic framework for trajectory prediction.
Each expert is trained with a specialized skill with respect to a particular part of the data.
To produce predictions, we utilise a router network that selects the best expert by generating relative confidence scores.
arXiv Detail & Related papers (2024-02-13T02:43:41Z)
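The AMEND entry above routes each input to the expert with the highest relative confidence score. A generic, hypothetical sketch of that routing pattern follows; the Router class, shapes, and expert interface are assumptions, not the AMEND code.

```python
# Generic sketch of confidence-score routing over specialized experts (assumed API).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Router(nn.Module):
    """Produces a relative confidence score for each expert given the input."""
    def __init__(self, in_dim, n_experts, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_experts))

    def forward(self, x):
        return F.softmax(self.net(x), dim=-1)

def route_and_predict(x, experts, router):
    """experts: list of modules, each specialized on a different part of the data."""
    conf = router(x)                                     # (batch, n_experts)
    best = conf.argmax(dim=-1)                           # selected expert per input
    outs = torch.stack([e(x) for e in experts], dim=1)   # (batch, n_experts, out_dim)
    return outs[torch.arange(x.shape[0]), best]          # prediction of chosen expert
```

Fitting the router on held-out data, for example with a cross-entropy target marking which expert was correct on each input, is one common way to train such a selector.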
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling this data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Designing Decision Support Systems Using Counterfactual Prediction Sets [15.121082690769525]
Decision support systems for classification tasks are predominantly designed to predict the value of the ground truth labels.
This paper revisits the design of this type of system from the perspective of online learning.
We develop a methodology that does not require, nor assumes, an expert model.
arXiv Detail & Related papers (2023-06-06T18:00:09Z)
- Incorporating Experts' Judgment into Machine Learning Models [2.5363839239628843]
In some cases, domain experts may have a judgment about the expected outcome that conflicts with the prediction of machine learning models.
We present a novel framework that aims at leveraging experts' judgment to mitigate the conflict.
arXiv Detail & Related papers (2023-04-24T07:32:49Z)
- Investigating Bias with a Synthetic Data Generator: Empirical Evidence and Philosophical Interpretation [66.64736150040093]
Machine learning applications are becoming increasingly pervasive in our society.
The risk is that they will systematically spread the bias embedded in the data.
We propose to analyze biases by introducing a framework for generating synthetic data with specific types of bias and their combinations.
arXiv Detail & Related papers (2022-09-13T11:18:50Z)
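Both the synthetic-bias paper above and the biased synthetic experts in the main paper's experiments rely on injecting a controlled bias into otherwise clean data. A toy, hypothetical generator for one such bias type, group-dependent label flipping, is sketched below; the flip rates and feature model are made-up parameters, not taken from either paper.

```python
# Toy generator of synthetic data with a controllable, group-dependent label bias.
import numpy as np

def generate_biased_data(n, flip_rate_g1=0.3, flip_rate_g0=0.05, seed=0):
    """Synthetic 1-D data whose observed labels are flipped more often for group 1."""
    rng = np.random.default_rng(seed)
    group = rng.integers(0, 2, size=n)                 # sensitive attribute in {0, 1}
    x = rng.normal(loc=group, scale=1.0)               # feature correlated with group
    y_true = (x + rng.normal(scale=0.5, size=n) > 0.5).astype(int)
    # Inject bias: group 1's labels are corrupted at a much higher rate than group 0's.
    flip_prob = np.where(group == 1, flip_rate_g1, flip_rate_g0)
    flipped = rng.random(n) < flip_prob
    y_observed = np.where(flipped, 1 - y_true, y_true)
    return x, group, y_true, y_observed
```

Varying the two flip rates controls how strongly the observed labels disadvantage one group, which is useful for stress-testing fairness interventions or for simulating biased experts.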
- SuperCone: Modeling Heterogeneous Experts with Concept Meta-learning for Unified Predictive Segments System [8.917697023052257]
We present SuperCone, our unified predictive segments system.
It builds on top of a flat concept representation that summarizes each user's heterogeneous digital footprints.
It can outperform state-of-the-art recommendation and ranking algorithms on a wide range of predictive segment tasks.
arXiv Detail & Related papers (2022-03-09T04:11:39Z)
- Multi Agent System for Machine Learning Under Uncertainty in Cyber Physical Manufacturing System [78.60415450507706]
Recent advancements in predictive machine learning have led to its application in various use cases in manufacturing.
Most research has focused on maximising predictive accuracy without addressing the uncertainty associated with it.
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria for a machine learning system to function well under uncertainty.
arXiv Detail & Related papers (2021-07-28T10:28:05Z)
- Test-time Collective Prediction [73.74982509510961]
Multiple parties in machine learning want to jointly make predictions on future test points.
Agents wish to benefit from the collective expertise of the full set of agents, but may not be willing to release their data or model parameters.
We explore a decentralized mechanism to make collective predictions at test time, leveraging each agent's pre-trained model.
arXiv Detail & Related papers (2021-06-22T18:29:58Z)
- Leveraging Expert Consistency to Improve Algorithmic Decision Support [62.61153549123407]
We explore the use of historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap.
We propose an influence function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert.
Our empirical evaluation, using simulations in a clinical setting and real-world data from the child welfare domain, indicates that the proposed approach successfully narrows the construct gap.
arXiv Detail & Related papers (2021-01-24T05:40:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.