Human-Algorithm Collaboration: Achieving Complementarity and Avoiding
Unfairness
- URL: http://arxiv.org/abs/2202.08821v2
- Date: Wed, 1 Jun 2022 04:16:33 GMT
- Title: Human-Algorithm Collaboration: Achieving Complementarity and Avoiding
Unfairness
- Authors: Kate Donahue, Alexandra Chouldechova, Krishnaram Kenthapadi
- Abstract summary: We show that even in carefully-designed systems, complementary performance can be elusive.
First, we provide a theoretical framework for modeling simple human-algorithm systems.
Next, we use this model to prove conditions where complementarity is impossible.
- Score: 92.26039686430204
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Much of machine learning research focuses on predictive accuracy: given a
task, create a machine learning model (or algorithm) that maximizes accuracy.
In many settings, however, the final prediction or decision of a system is
under the control of a human, who uses an algorithm's output along with their
own personal expertise in order to produce a combined prediction. One ultimate
goal of such collaborative systems is "complementarity": that is, to produce
lower loss (equivalently, greater payoff or utility) than either the human or
algorithm alone. However, experimental results have shown that even in
carefully-designed systems, complementary performance can be elusive. Our work
provides three key contributions. First, we provide a theoretical framework for
modeling simple human-algorithm systems and demonstrate that multiple prior
analyses can be expressed within it. Next, we use this model to prove
conditions where complementarity is impossible, and give constructive examples
of where complementarity is achievable. Finally, we discuss the implications of
our findings, especially with respect to the fairness of a classifier. In sum,
these results deepen our understanding of key factors influencing the combined
performance of human-algorithm systems, giving insight into how algorithmic
tools can best be designed for collaborative environments.
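The notion of complementarity above can be made concrete with a small numerical sketch. The snippet below is an illustrative assumption, not the paper's actual model: the loss function, the fixed convex weight, and the simulated data are all hypothetical. It combines a human's and an algorithm's predictions and checks whether the combined squared loss falls below the loss of either party alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground truth and two independent noisy predictors
# (the noise scales are illustrative, not taken from the paper).
truth = rng.normal(size=1000)
human = truth + rng.normal(scale=1.0, size=1000)  # human predictions
algo = truth + rng.normal(scale=0.7, size=1000)   # algorithm predictions

def mse(pred):
    """Mean squared error against the ground truth."""
    return float(np.mean((pred - truth) ** 2))

# Combined prediction: a fixed convex weighting of the two sources.
w = 0.4
combined = w * human + (1 - w) * algo

print(f"human loss     {mse(human):.3f}")
print(f"algorithm loss {mse(algo):.3f}")
print(f"combined loss  {mse(combined):.3f}")

# Complementarity holds when the combined loss is strictly lower
# than the better of the two individual losses.
print("complementarity:", mse(combined) < min(mse(human), mse(algo)))
```

In this toy setup the two error sources are independent, so a convex combination has lower loss than either source alone. Intuitively, when one party's predictions carry no information beyond the other's, no combination can do strictly better; this is the flavor of condition under which the abstract says complementarity can be impossible.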
Related papers
- A Human-Centered Approach for Improving Supervised Learning [0.44378250612683995]
This paper shows how we can strike a balance between performance, time, and resource constraints.
Another goal of this research is to make Ensembles more explainable and intelligible using the Human-Centered approach.
arXiv Detail & Related papers (2024-10-14T10:27:14Z)
- Learning-Augmented Algorithms with Explicit Predictors [67.02156211760415]
Recent advances in algorithmic design show how to utilize predictions obtained by machine learning models from past and present data.
Prior research in this context was focused on a paradigm where the predictor is pre-trained on past data and then used as a black box.
In this work, we unpack the predictor and integrate the learning problem it gives rise to within the algorithmic challenge.
arXiv Detail & Related papers (2024-03-12T08:40:21Z)
- Human Expertise in Algorithmic Prediction [16.104330706951004]
We introduce a novel framework for incorporating human expertise into algorithmic predictions.
Our approach leverages human judgment to distinguish inputs which are algorithmically indistinguishable, or "look the same" to predictive algorithms.
arXiv Detail & Related papers (2024-02-01T17:23:54Z)
- Incentive Mechanism Design for Distributed Ensemble Learning [15.687660150828906]
Distributed ensemble learning (DEL) involves training multiple models at distributed learners, and then combining their predictions to improve performance.
We present a first study on the incentive mechanism design for DEL.
Our proposed mechanism specifies both the amount of training data and the reward for learners with heterogeneous computation and communication costs.
arXiv Detail & Related papers (2023-10-13T00:34:12Z)
- Predictive Coding beyond Correlations [59.47245250412873]
We show how one such algorithm, called predictive coding, is able to perform causal inference tasks.
First, we show how a simple change in the inference process of predictive coding enables the computation of interventions without the need to mutilate or redefine a causal graph.
arXiv Detail & Related papers (2023-06-27T13:57:16Z)
- Algorithmic Collective Action in Machine Learning [35.91866986642348]
We study algorithmic collective action on digital platforms that deploy machine learning algorithms.
We propose a simple theoretical model of a collective interacting with a firm's learning algorithm.
We conduct systematic experiments on a skill classification task involving tens of thousands of resumes from a gig platform for freelancers.
arXiv Detail & Related papers (2023-02-08T18:55:49Z)
- Sample Efficient Learning of Predictors that Complement Humans [5.830619388189559]
We provide the first theoretical analysis of the benefit of learning complementary predictors in expert deferral.
We design active learning schemes that require a minimal amount of human expert prediction data.
arXiv Detail & Related papers (2022-07-19T23:19:25Z)
- Non-Clairvoyant Scheduling with Predictions Revisited [77.86290991564829]
In non-clairvoyant scheduling, the task is to find an online strategy for scheduling jobs with a priori unknown processing requirements.
We revisit this well-studied problem in a recently popular learning-augmented setting that integrates (untrusted) predictions in algorithm design.
We show that these predictions have desirable properties and admit a natural error measure, as well as algorithms with strong performance guarantees.
arXiv Detail & Related papers (2022-02-21T13:18:11Z)
- Robustification of Online Graph Exploration Methods [59.50307752165016]
We study a learning-augmented variant of the classical, notoriously hard online graph exploration problem.
We propose an algorithm that naturally integrates predictions into the well-known Nearest Neighbor (NN) algorithm.
arXiv Detail & Related papers (2021-12-10T10:02:31Z)
- BUSTLE: Bottom-Up Program Synthesis Through Learning-Guided Exploration [72.88493072196094]
We present a new synthesis approach that leverages learning to guide a bottom-up search over programs.
In particular, we train a model to prioritize compositions of intermediate values during search conditioned on a set of input-output examples.
We show that the combination of learning and bottom-up search is remarkably effective, even with simple supervised learning approaches.
arXiv Detail & Related papers (2020-07-28T17:46:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.