GPM: A Generic Probabilistic Model to Recover Annotator's Behavior and
Ground Truth Labeling
- URL: http://arxiv.org/abs/2003.00475v1
- Date: Sun, 1 Mar 2020 12:14:52 GMT
- Title: GPM: A Generic Probabilistic Model to Recover Annotator's Behavior and
Ground Truth Labeling
- Authors: Jing Li, Suiyi Ling, Junle Wang, Zhi Li, Patrick Le Callet
- Abstract summary: We propose a probabilistic graphical annotation model to infer the underlying ground truth and annotator's behavior.
The proposed model is able to identify whether an annotator has worked diligently towards the task during the labeling procedure.
- Score: 34.48095564497967
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the big data era, data labeling can be obtained through crowdsourcing.
Nevertheless, the obtained labels are generally noisy, unreliable or even
adversarial. In this paper, we propose a probabilistic graphical annotation
model to infer the underlying ground truth and annotator's behavior. To
accommodate both discrete and continuous application scenarios (e.g.,
classifying scenes vs. rating videos on a Likert scale), the underlying ground truth is modeled as a distribution rather than a single value. In
this way, the reliable but potentially divergent opinions from "good"
annotators can be recovered. The proposed model is able to identify whether an
annotator has worked diligently towards the task during the labeling procedure,
which could be used for further selection of qualified annotators. Our model
has been tested on both simulated data and real-world data, where it consistently outperforms other state-of-the-art models in terms of accuracy and robustness.
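The paper's exact graphical model is defined in the full text; the sketch below is only a rough, hypothetical illustration of the core idea: each item gets a ground-truth *distribution* over labels, and each annotator gets a diligence weight that down-weights random clickers. The function name, update rules, and all parameters are assumptions, not the authors' algorithm.

```python
import numpy as np

def em_annotation_model(labels, n_items, n_annotators, n_labels, iters=50):
    """labels: iterable of (item, annotator, label) triples.

    Alternately re-estimates a per-item ground-truth distribution and a
    per-annotator diligence weight (EM-style, illustrative only).
    """
    w = np.full(n_annotators, 0.8)          # diligence weights in [0, 1]
    uniform = 1.0 / n_labels                # likelihood of a random clicker
    for _ in range(iters):
        # Re-estimate ground-truth distributions from diligence-weighted votes.
        counts = np.full((n_items, n_labels), 1e-3)   # small smoothing prior
        for i, a, y in labels:
            counts[i, y] += w[a]
        gt = counts / counts.sum(axis=1, keepdims=True)
        # Re-estimate diligence: how much better each annotator explains the
        # data than a uniformly random annotator would.
        for a in range(n_annotators):
            obs = [gt[i, y] for i, aa, y in labels if aa == a]
            if obs:
                lik = float(np.mean(obs))
                w[a] = lik / (lik + uniform)
    return gt, w

# Usage: annotator 2 clicks randomly and should receive a visibly lower weight.
rng = np.random.default_rng(0)
triples = [(i, a, i % 3 if a < 2 else int(rng.integers(3)))
           for i in range(30) for a in range(3)]
gt, w = em_annotation_model(triples, n_items=30, n_annotators=3, n_labels=3)
print(np.round(w, 2))
```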
Related papers
- LiveXiv -- A Multi-Modal Live Benchmark Based on Arxiv Papers Content [62.816876067499415]
We propose LiveXiv: a scalable evolving live benchmark based on scientific ArXiv papers.
LiveXiv accesses domain-specific manuscripts at any given timestamp and automatically generates visual question-answer pairs.
We benchmark multiple open and proprietary Large Multi-modal Models (LMMs) on the first version of our benchmark, showing its challenging nature and exposing the models' true abilities.
arXiv Detail & Related papers (2024-10-14T17:51:23Z)
- Capturing Perspectives of Crowdsourced Annotators in Subjective Learning Tasks [9.110872603799839]
Supervised classification heavily depends on datasets annotated by humans.
In subjective tasks such as toxicity classification, these annotations often exhibit low agreement among raters.
In this work, we propose Annotator Aware Representations for Texts (AART) for subjective classification tasks.
arXiv Detail & Related papers (2023-11-16T10:18:32Z)
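AART's exact architecture is defined in the paper; the sketch below only illustrates the general annotator-aware idea: conditioning a classifier on a learned per-annotator embedding so individual perspectives are modeled rather than averaged away. The class name, dimensions, and layers are hypothetical.

```python
import torch
import torch.nn as nn

class AnnotatorAwareClassifier(nn.Module):
    """Generic annotator-aware classifier (not AART's exact architecture)."""
    def __init__(self, text_dim=768, n_annotators=50, ann_dim=32, n_classes=2):
        super().__init__()
        self.ann_emb = nn.Embedding(n_annotators, ann_dim)  # one vector per rater
        self.head = nn.Linear(text_dim + ann_dim, n_classes)

    def forward(self, text_vec, annotator_id):
        # text_vec: (batch, text_dim) sentence encoding, e.g. from a BERT encoder.
        z = torch.cat([text_vec, self.ann_emb(annotator_id)], dim=-1)
        return self.head(z)

model = AnnotatorAwareClassifier()
logits = model(torch.randn(4, 768), torch.tensor([0, 3, 3, 7]))
print(logits.shape)  # torch.Size([4, 2])
```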
- Improving a Named Entity Recognizer Trained on Noisy Data with a Few Clean Instances [55.37242480995541]
We propose to denoise noisy NER data with guidance from a small set of clean instances.
Along with the main NER model we train a discriminator model and use its outputs to recalibrate the sample weights.
Results on public crowdsourcing and distant supervision datasets show that the proposed method can consistently improve performance with a small guidance set.
arXiv Detail & Related papers (2023-10-25T17:23:37Z)
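A minimal sketch of the discriminator-reweighting idea from the entry above, assuming per-sentence losses and fixed sentence features; the discriminator here is left untrained for brevity, whereas the paper trains it with guidance from the small clean set. All shapes and names are illustrative.

```python
import torch
import torch.nn as nn

# Discriminator scoring how "clean" a sentence looks; in the paper's setting
# it is trained against the small set of clean instances (untrained here).
disc = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

def reweighted_loss(per_sample_loss, features):
    """per_sample_loss: (batch,) NER loss per sentence; features: (batch, 128)."""
    with torch.no_grad():
        p_clean = torch.sigmoid(disc(features)).squeeze(-1)  # estimated cleanliness
    return (p_clean * per_sample_loss).mean()  # noisy samples contribute less

print(reweighted_loss(torch.rand(8), torch.randn(8, 128)))  # scalar loss
```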
- Probabilistic Test-Time Generalization by Variational Neighbor-Labeling [62.158807685159736]
This paper strives for domain generalization, where models are trained exclusively on source domains before being deployed on unseen target domains.
We propose probabilistic pseudo-labeling of target samples to generalize the source-trained model to the target domain at test time.
We introduce variational neighbor labels that incorporate information from neighboring target samples to generate more robust pseudo labels.
arXiv Detail & Related papers (2023-07-08T18:58:08Z)
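The paper's variational formulation is richer than this; the sketch below captures only the core intuition, smoothing each target sample's pseudo label with the predictions of its nearest target neighbors. The function name and parameters (`k`, `alpha`) are hypothetical.

```python
import numpy as np

def neighbor_pseudo_labels(probs, feats, k=5, alpha=0.5):
    """probs: (n, C) model predictions on target; feats: (n, d) features."""
    d = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)  # pairwise distances
    np.fill_diagonal(d, np.inf)                                 # exclude self
    nbrs = np.argsort(d, axis=1)[:, :k]                         # k nearest neighbors
    neighbor_mean = probs[nbrs].mean(axis=1)                    # (n, C)
    return alpha * probs + (1 - alpha) * neighbor_mean          # smoothed labels

p = neighbor_pseudo_labels(np.random.dirichlet(np.ones(3), 100),
                           np.random.randn(100, 16))
print(p.shape)  # (100, 3); rows remain valid probability distributions
```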
- Label-Retrieval-Augmented Diffusion Models for Learning from Noisy Labels [61.97359362447732]
Learning from noisy labels is an important and long-standing problem in machine learning for real applications.
In this paper, we reformulate the label-noise problem from a generative-model perspective.
Our model achieves new state-of-the-art (SOTA) results on all the standard real-world benchmark datasets.
arXiv Detail & Related papers (2023-05-31T03:01:36Z)
- Preserving Knowledge Invariance: Rethinking Robustness Evaluation of Open Information Extraction [50.62245481416744]
We present the first benchmark that simulates the evaluation of open information extraction models in the real world.
We design and annotate a large-scale testbed in which each example is a knowledge-invariant clique.
By further elaborating the robustness metric, a model is judged to be robust only if its performance is consistently accurate across the examples of each clique.
arXiv Detail & Related papers (2023-05-23T12:05:09Z)
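One plausible reading of that clique-level robustness metric, sketched under the assumption that a model must be correct on every paraphrased variant in a knowledge-invariant clique to count as robust on it (the paper's exact definition may differ):

```python
def clique_robustness(predictions, cliques):
    """predictions: dict example_id -> bool (correct?); cliques: list of id lists."""
    # A clique counts only if the model is right on *all* of its members.
    robust = sum(all(predictions[eid] for eid in clique) for clique in cliques)
    return robust / len(cliques)

preds = {"a1": True, "a2": True, "b1": True, "b2": False}
print(clique_robustness(preds, [["a1", "a2"], ["b1", "b2"]]))  # 0.5
```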
- SeedBERT: Recovering Annotator Rating Distributions from an Aggregated Label [43.23903984174963]
We propose SeedBERT, a method for recovering annotator rating distributions from a single label.
Our human evaluations indicate that SeedBERT's attention mechanism is consistent with human sources of annotator disagreement.
arXiv Detail & Related papers (2022-11-23T18:35:15Z)
- Going Beyond One-Hot Encoding in Classification: Can Human Uncertainty Improve Model Performance? [14.610038284393166]
We show that label uncertainty is explicitly embedded into the training process via distributional labels.
The incorporation of label uncertainty helps the model to generalize better to unseen data and increases model performance.
Similar to existing calibration methods, the distributional labels lead to better-calibrated probabilities, which in turn yield more certain and trustworthy predictions.
arXiv Detail & Related papers (2022-05-30T17:19:11Z)
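A minimal sketch of the distributional-label idea above: cross-entropy computed against the empirical annotator label distribution instead of a one-hot vote. The specific target values are invented for illustration.

```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, 3, requires_grad=True)   # model outputs
soft_targets = torch.tensor([[0.7, 0.2, 0.1],    # e.g. 7/10 annotators chose class 0
                             [0.1, 0.8, 0.1],
                             [0.5, 0.5, 0.0],    # genuine human disagreement
                             [0.0, 0.1, 0.9]])
loss = -(soft_targets * F.log_softmax(logits, dim=-1)).sum(-1).mean()
loss.backward()  # gradients now reflect label uncertainty, not a hard vote
```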
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts accuracy as the fraction of unlabeled examples whose confidence exceeds the threshold.
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
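The summary above pins down a simple version of ATC; the sketch below learns the confidence threshold on labeled source data and applies it to unlabeled target confidences. The data here is synthetic and the function name is hypothetical.

```python
import numpy as np

def atc_predict(src_conf, src_correct, tgt_conf):
    """Estimate target accuracy from confidences only."""
    acc = src_correct.mean()
    # Pick t so the fraction of source points above t matches source accuracy.
    t = np.quantile(src_conf, 1.0 - acc)
    return (tgt_conf > t).mean()           # predicted target accuracy

rng = np.random.default_rng(1)
src_conf = rng.uniform(0.5, 1.0, 1000)
src_correct = rng.random(1000) < 0.85      # ~85% source accuracy
tgt_conf = rng.uniform(0.4, 1.0, 1000)     # shifted target confidences
print(atc_predict(src_conf, src_correct, tgt_conf))
```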
- Group Fairness by Probabilistic Modeling with Latent Fair Decisions [36.20281545470954]
This paper studies learning fair probability distributions from biased data by explicitly modeling a latent variable that represents a hidden, unbiased label.
We aim to achieve demographic parity by enforcing certain independencies in the learned model.
We also show that group fairness guarantees are meaningful only if the distribution used to provide those guarantees indeed captures the real-world data.
arXiv Detail & Related papers (2020-09-18T19:13:23Z)
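As a small companion to the entry above, this sketch only measures the demographic-parity gap on observed decisions; the paper's contribution, learning a latent fair decision variable, is not reproduced here, and all names and data are illustrative.

```python
import numpy as np

def demographic_parity_gap(decisions, groups):
    """decisions: 0/1 array; groups: 0/1 sensitive attribute."""
    # Parity holds when the positive-decision rate is equal across groups.
    rates = [decisions[groups == g].mean() for g in (0, 1)]
    return abs(rates[0] - rates[1])  # 0.0 means demographic parity

d = np.array([1, 0, 1, 1, 0, 1, 0, 0])
g = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(d, g))  # |0.75 - 0.25| = 0.5
```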
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.