A Provably Improved Algorithm for Crowdsourcing with Hard and Easy Tasks
- URL: http://arxiv.org/abs/2302.07393v1
- Date: Tue, 14 Feb 2023 23:30:39 GMT
- Title: A Provably Improved Algorithm for Crowdsourcing with Hard and Easy Tasks
- Authors: Seo Taek Kong, Saptarshi Mandal, Dimitrios Katselis, R. Srikant
- Score: 7.822210329345705
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Crowdsourcing is a popular method used to estimate ground-truth labels by
collecting noisy labels from workers. In this work, we are motivated by
crowdsourcing applications where each worker can exhibit two levels of accuracy
depending on a task's type. Applying algorithms designed for the traditional
Dawid-Skene model to such a scenario results in performance which is limited by
the hard tasks. Therefore, we first extend the model to allow worker accuracy
to vary depending on a task's unknown type. Then we propose a spectral method
to partition tasks by type. After separating tasks by type, any Dawid-Skene
algorithm (i.e., any algorithm designed for the Dawid-Skene model) can be
applied independently to each type to infer the truth values. We theoretically
prove that when crowdsourced data contain tasks with varying levels of
difficulty, our algorithm infers the true labels with higher accuracy than any
Dawid-Skene algorithm. Experiments show that our method is effective in
practical applications.
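The two-stage approach described in the abstract (spectral partitioning of tasks by type, then per-type truth inference) can be illustrated with a minimal sketch. This is an assumption-laden approximation, not the paper's exact algorithm: it splits tasks two ways using the sign of the second right-singular vector of the worker-task label matrix, and uses plain majority voting as the per-type Dawid-Skene-style estimator (any Dawid-Skene algorithm could be substituted).

```python
import numpy as np

def partition_and_label(L):
    """Illustrative sketch, not the paper's algorithm.

    L: (n_workers, n_tasks) matrix of labels in {-1, +1}, with 0 for
    unanswered tasks. Returns a two-way task-type assignment and a
    per-type majority-vote label estimate.
    """
    # Because worker accuracy differs across task types, the label matrix
    # has (approximately) low-rank block structure; the top right-singular
    # vectors can separate the two task types.
    _, _, Vt = np.linalg.svd(L, full_matrices=False)
    types = (Vt[1] >= 0).astype(int)  # sign split on 2nd singular vector

    labels = np.zeros(L.shape[1], dtype=int)
    for t in (0, 1):
        idx = np.where(types == t)[0]
        # Per-type majority vote stands in for a Dawid-Skene algorithm.
        col_sum = L[:, idx].sum(axis=0)
        labels[idx] = np.where(col_sum >= 0, 1, -1)
    return types, labels
```

The point of the sketch is the separation of concerns: once tasks are grouped by (estimated) type, each group looks like a homogeneous Dawid-Skene instance, so any existing inference algorithm can be run on each group independently.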
Related papers
- A Dataset for the Validation of Truth Inference Algorithms Suitable for Online Deployment [76.04306818209753]
We introduce a substantial crowdsourcing annotation dataset collected from a real-world crowdsourcing platform.
This dataset comprises approximately two thousand workers, one million tasks, and six million annotations.
We evaluate the effectiveness of several representative truth inference algorithms on this dataset.
arXiv Detail & Related papers (2024-03-10T16:00:41Z)
- Robust Assignment of Labels for Active Learning with Sparse and Noisy Annotations [0.17188280334580192]
Supervised classification algorithms are used to solve a growing number of real-life problems around the globe.
Unfortunately, acquiring good-quality annotations for many tasks is infeasible or too expensive to be done in practice.
We propose two novel annotation unification algorithms that utilize unlabeled parts of the sample space.
arXiv Detail & Related papers (2023-07-25T19:40:41Z)
- A Gold Standard Dataset for the Reviewer Assignment Problem [117.59690218507565]
"Similarity score" is a numerical estimate of the expertise of a reviewer in reviewing a paper.
Our dataset consists of 477 self-reported expertise scores provided by 58 researchers.
For the task of ordering two papers in terms of their relevance for a reviewer, the error rates range from 12%-30% in easy cases to 36%-43% in hard cases.
arXiv Detail & Related papers (2023-03-23T16:15:03Z)
- Recovering Top-Two Answers and Confusion Probability in Multi-Choice Crowdsourcing [10.508187462682308]
We consider crowdsourcing tasks with the goal of recovering not only the ground truth, but also the most confusing answer and the confusion probability.
We propose a model in which there are the top two plausible answers for each task, distinguished from the rest of the choices.
Under this model, we propose a two-stage inference algorithm to infer both the top two answers and the confusion probability.
arXiv Detail & Related papers (2022-12-29T09:46:39Z)
- Multi-task Bias-Variance Trade-off Through Functional Constraints [102.64082402388192]
Multi-task learning aims to acquire a set of functions that perform well for diverse tasks.
In this paper we draw intuition from the two extreme learning scenarios -- a single function for all tasks, and a task-specific function that ignores the other tasks.
We introduce a constrained learning formulation that enforces domain specific solutions to a central function.
arXiv Detail & Related papers (2022-10-27T16:06:47Z)
- Active Multi-Task Representation Learning [50.13453053304159]
We give the first formal study on resource task sampling by leveraging the techniques from active learning.
We propose an algorithm that iteratively estimates the relevance of each source task to the target task and samples from each source task based on the estimated relevance.
arXiv Detail & Related papers (2022-02-02T08:23:24Z)
- A Worker-Task Specialization Model for Crowdsourcing: Efficient Inference and Fundamental Limits [20.955889997204693]
Crowdsourcing systems have emerged as effective platforms for labeling data at relatively low cost by using non-expert workers.
In this paper, we consider a new model, called $d$-type specialization model, in which each task and worker has its own (unknown) type.
We propose label inference algorithms achieving the order-wise optimal limit even when the types of tasks or those of workers are unknown.
arXiv Detail & Related papers (2021-11-19T05:32:59Z)
- Towards Good Practices for Efficiently Annotating Large-Scale Image Classification Datasets [90.61266099147053]
We investigate efficient annotation strategies for collecting multi-class classification labels for a large collection of images.
We propose modifications and best practices aimed at minimizing human labeling effort.
Simulated experiments on a 125k image subset of the ImageNet100 show that it can be annotated to 80% top-1 accuracy with 0.35 annotations per image on average.
arXiv Detail & Related papers (2021-04-26T16:29:32Z)
- Adaptive Task Sampling for Meta-Learning [79.61146834134459]
The key idea of meta-learning for few-shot classification is to mimic the few-shot situations faced at test time.
We propose an adaptive task sampling method to improve the generalization performance.
arXiv Detail & Related papers (2020-07-17T03:15:53Z)
- Variational Bayesian Inference for Crowdsourcing Predictions [6.878219199575748]
We develop a variational Bayesian technique for two different worker noise models.
Our evaluations on synthetic and real-world datasets demonstrate that these approaches perform significantly better than existing non-Bayesian approaches.
arXiv Detail & Related papers (2020-06-01T08:11:50Z)
- Crowdsourced Labeling for Worker-Task Specialization Model [14.315501760755605]
We consider crowdsourced labeling under a $d$-type worker-task specialization model.
We design an inference algorithm that recovers binary task labels by using worker clustering, worker skill estimation and weighted majority voting.
arXiv Detail & Related papers (2020-03-21T13:27:03Z)
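The weighted majority voting step mentioned in the entry above can be illustrated with a minimal sketch, assuming per-worker skill estimates are already available (the cited paper additionally obtains them via worker clustering and skill estimation). The function name and the log-odds weighting scheme here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def weighted_majority_vote(L, skills):
    """Illustrative sketch of weighted majority voting.

    L: (n_workers, n_tasks) matrix of labels in {-1, +1}, with 0 for
    unanswered tasks. skills: per-worker accuracy estimates in (0.5, 1).
    """
    # Log-odds weights: the optimal linear combination when workers give
    # independent answers with known accuracies.
    w = np.log(skills / (1.0 - skills))
    scores = w @ L  # weighted vote total for each task
    return np.where(scores >= 0, 1, -1)
```

With these weights, one highly skilled worker can outvote several weakly skilled workers, which is exactly the behavior plain (unweighted) majority voting lacks.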
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.