Learning from Imperfect Annotations
- URL: http://arxiv.org/abs/2004.03473v1
- Date: Tue, 7 Apr 2020 15:21:08 GMT
- Title: Learning from Imperfect Annotations
- Authors: Emmanouil Antonios Platanios, Maruan Al-Shedivat, Eric Xing, and Tom Mitchell
- Abstract summary: Many machine learning systems today are trained on large amounts of human-annotated data.
We propose a new end-to-end framework that enables us to merge the aggregation step with model training.
We show accuracy gains of up to 25% over the current state-of-the-art approaches for aggregating annotations.
- Score: 15.306536555936692
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many machine learning systems today are trained on large amounts of
human-annotated data. Data annotation tasks that require a high level of
competency make data acquisition expensive, while the resulting labels are
often subjective, inconsistent, and may contain a variety of human biases. To
improve the data quality, practitioners often need to collect multiple
annotations per example and aggregate them before training models. Such a
multi-stage approach results in redundant annotations and often produces
imperfect "ground truth" that can limit the potential of training accurate
machine learning models. We propose a new end-to-end framework that enables us
to: (i) merge the aggregation step with model training, thus allowing deep
learning systems to learn to predict ground truth estimates directly from the
available data, and (ii) model difficulties of examples and learn
representations of the annotators that allow us to estimate and take into
account their competencies. Our approach is general and has many applications,
including training more accurate models on crowdsourced data, ensemble
learning, as well as classifier accuracy estimation from unlabeled data. We
conduct an extensive experimental evaluation of our method on 5 crowdsourcing
datasets of varied difficulty and show accuracy gains of up to 25% over the
current state-of-the-art approaches for aggregating annotations, as well as
significant reductions in the required annotation redundancy.
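To make the proposed framework concrete, here is a minimal sketch of the core idea, assuming a linear classifier trained jointly, end to end, with a learnable confusion matrix per annotator; all names, dimensions, and the toy data are illustrative, and this is not the authors' released implementation:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES, NUM_ANNOTATORS, FEAT_DIM = 3, 5, 16

class AnnotatorAwareModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.classifier = nn.Linear(FEAT_DIM, NUM_CLASSES)
        # One learnable confusion matrix per annotator: row = latent true
        # label, column = label that annotator would produce. Initialized
        # near-diagonal so annotators start out presumed competent.
        self.confusion = nn.Parameter(2.0 * torch.eye(NUM_CLASSES).repeat(NUM_ANNOTATORS, 1, 1))

    def forward(self, x):
        p_true = F.softmax(self.classifier(x), dim=-1)         # (B, C)
        p_conf = F.softmax(self.confusion, dim=-1)             # (A, C, C)
        # Marginalize over the latent true label:
        # p(y_a | x) = sum_c p(c | x) * p(y_a | c, annotator a)
        p_annot = torch.einsum("bc,acy->bay", p_true, p_conf)  # (B, A, C)
        return p_true, p_annot

model = AnnotatorAwareModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# Toy data: features plus a label from each annotator; the mask marks which
# annotators actually labeled each example (annotations need not be redundant).
x = torch.randn(32, FEAT_DIM)
annotations = torch.randint(0, NUM_CLASSES, (32, NUM_ANNOTATORS))
mask = (torch.rand(32, NUM_ANNOTATORS) < 0.6).float()

for step in range(200):
    _, p_annot = model(x)
    nll = -torch.log(p_annot.gather(-1, annotations.unsqueeze(-1)).squeeze(-1) + 1e-8)
    loss = (nll * mask).sum() / mask.sum()
    opt.zero_grad(); loss.backward(); opt.step()
```
Because the confusion matrices are trained jointly with the classifier, the ground-truth estimate `p_true` and the per-annotator competencies are learned from the raw annotations in one pass, with no separate aggregation stage.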
Related papers
- Complementary Learning for Real-World Model Failure Detection [15.779651238128562]
We introduce complementary learning, where we use learned characteristics from different training paradigms to detect model errors.
We demonstrate our approach by learning semantic and predictive motion labels in point clouds in a supervised and self-supervised manner.
We perform a large-scale qualitative analysis and present LidarCODA, the first dataset with labeled anomalies in lidar point clouds.
arXiv Detail & Related papers (2024-07-19T13:36:35Z)
- Combating Missing Modalities in Egocentric Videos at Test Time [92.38662956154256]
Real-world applications often face challenges with incomplete modalities due to privacy concerns, efficiency needs, or hardware issues.
We propose a novel approach to address this issue at test time without requiring retraining.
MiDl represents the first self-supervised, online solution for handling missing modalities exclusively at test time.
arXiv Detail & Related papers (2024-04-23T16:01:33Z)
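This summary does not spell out MiDl's objective, so the following is only a generic sketch of self-supervised adaptation under a missing modality at test time: zero-fill the absent stream and take an entropy-minimization step on each incoming batch, updating only the classifier head. Module names and dimensions are assumptions for illustration:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in for a pretrained audio-visual classifier that expects 16-dim
# audio and 16-dim video features concatenated together.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.SGD(model[-1].parameters(), lr=1e-4)  # adapt only the head

def predict_and_adapt(audio, video=None):
    if video is None:                      # modality missing at test time
        video = torch.zeros(audio.shape[0], 16)
    logits = model(torch.cat([audio, video], dim=-1))
    # Self-supervised update: minimize prediction entropy on this batch.
    # No labels are used and the full model is never retrained.
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(-1).mean()
    opt.zero_grad(); entropy.backward(); opt.step()
    return logits.detach()

preds = predict_and_adapt(torch.randn(8, 16))  # video stream unavailable
```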
- Robust Machine Learning by Transforming and Augmenting Imperfect Training Data [6.928276018602774]
This thesis explores several data sensitivities of modern machine learning.
We first discuss how to prevent ML from codifying prior human discrimination measured in the training data.
We then discuss the problem of learning from data containing spurious features, which provide predictive fidelity during training but are unreliable upon deployment.
arXiv Detail & Related papers (2023-12-19T20:49:28Z)
- Fantastic Gains and Where to Find Them: On the Existence and Prospect of General Knowledge Transfer between Any Pretrained Model [74.62272538148245]
We show that for arbitrary pairings of pretrained models, one model extracts significant data context unavailable in the other.
We investigate if it is possible to transfer such "complementary" knowledge from one model to another without performance degradation.
arXiv Detail & Related papers (2023-10-26T17:59:46Z)
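One simple way to frame such a transfer without degrading the student is to gate a distillation term on the partner model's confidence while keeping the ordinary supervised loss everywhere. This is a hedged sketch of the general idea, not the selection procedure from the paper:
```python
import torch.nn.functional as F

def selective_transfer_loss(student_logits, partner_logits, labels, temp=2.0):
    ce = F.cross_entropy(student_logits, labels, reduction="none")
    s_conf = student_logits.softmax(-1).max(-1).values
    p_conf = partner_logits.softmax(-1).max(-1).values
    transfer = (p_conf > s_conf).float()   # distill only where the partner
                                           # looks more confident
    kd = F.kl_div(
        F.log_softmax(student_logits / temp, -1),
        F.softmax(partner_logits / temp, -1),
        reduction="none",
    ).sum(-1) * temp ** 2
    return (ce + transfer * kd).mean()
```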
- Evaluating and Crafting Datasets Effective for Deep Learning With Data Maps [0.0]
Training on large datasets often requires excessive system resources and an infeasible amount of time.
For supervised learning, large datasets require more time for manually labeling samples.
We propose a method of curating smaller datasets with comparable out-of-distribution model accuracy after an initial training session.
arXiv Detail & Related papers (2022-08-22T03:30:18Z)
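A minimal sketch of the training-dynamics statistics such data maps are built from, assuming the probability assigned to the true label is logged after every epoch; the selection thresholds are illustrative assumptions, not values from the paper:
```python
import numpy as np

def data_map(true_label_probs):
    """true_label_probs: (num_epochs, num_examples) array of p(y_true | x)
    recorded at the end of each training epoch."""
    confidence = true_label_probs.mean(axis=0)   # how sure the model is
    variability = true_label_probs.std(axis=0)   # how much that sureness moves
    return confidence, variability

# Curate a smaller set: drop easy examples (high confidence, low variability),
# which tend to contribute least to out-of-distribution accuracy.
probs = np.random.rand(10, 1000)  # stand-in for logged training dynamics
conf, var = data_map(probs)
keep = (conf < 0.9) | (var > 0.2)
print(f"curated subset: {keep.sum()} of {keep.size} examples")
```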
- A Survey of Learning on Small Data: Generalization, Optimization, and Challenge [101.27154181792567]
Learning on small data that approximates the generalization ability of big data is one of the ultimate purposes of AI.
This survey follows the active sampling theory under a PAC framework to analyze the generalization error and label complexity of learning on small data.
Multiple data applications that may benefit from efficient small data representation are surveyed.
arXiv Detail & Related papers (2022-07-29T02:34:19Z)
- Dynamic Supervisor for Cross-dataset Object Detection [52.95818230087297]
Cross-dataset training in object detection tasks is complicated because the inconsistency in the category range across datasets transforms fully supervised learning into semi-supervised learning.
We propose a dynamic supervisor framework that updates the annotations multiple times through multiple-updated submodels trained using hard and soft labels.
In the final generated annotations, both recall and precision improve significantly through the integration of hard-label training with soft-label training.
arXiv Detail & Related papers (2022-04-01T03:18:46Z)
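The integration of hard-label and soft-label training that this entry describes can be sketched as a weighted two-term loss, where one-hot labels from a previous submodel drive a cross-entropy term and its soft scores drive a distillation-style term; the 50/50 weighting is an illustrative assumption, not the paper's formulation:
```python
import torch
import torch.nn.functional as F

def hard_soft_loss(logits, hard_labels, soft_targets, alpha=0.5):
    """hard_labels: class indices; soft_targets: probability vectors,
    both produced by a previously trained submodel."""
    hard = F.cross_entropy(logits, hard_labels)
    soft = F.kl_div(F.log_softmax(logits, -1), soft_targets, reduction="batchmean")
    return alpha * hard + (1 - alpha) * soft
```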
- Semi-supervised Deep Learning for Image Classification with Distribution Mismatch: A Survey [1.5469452301122175]
Deep learning models rely on an abundance of labelled observations for training.
Gathering labelled observations is expensive, which makes deep learning models impractical in many settings.
In many situations different unlabelled data sources might be available.
This raises the risk of a significant distribution mismatch between the labelled and unlabelled datasets.
arXiv Detail & Related papers (2022-03-01T02:46:00Z)
- CMW-Net: Learning a Class-Aware Sample Weighting Mapping for Robust Deep Learning [55.733193075728096]
Modern deep neural networks can easily overfit to biased training data containing corrupted labels or class imbalance.
Sample re-weighting methods are popularly used to alleviate this data bias issue.
We propose a meta-model capable of adaptively learning an explicit weighting scheme directly from data.
arXiv Detail & Related papers (2022-02-11T13:49:51Z)
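In the spirit of the CMW-Net entry above, here is a simplified sketch of a weighting network that maps each sample's loss to a weight in [0, 1] before averaging. The actual method makes the mapping class-aware and trains it with a bilevel meta-objective on a small clean set, both of which are omitted here:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny meta-network: per-sample loss in, per-sample weight in [0, 1] out.
weight_net = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

def reweighted_loss(logits, noisy_labels):
    per_sample = F.cross_entropy(logits, noisy_labels, reduction="none")
    # Detach so the weights respond to the loss values without back-propagating
    # the main objective through the weighting net.
    w = weight_net(per_sample.detach().unsqueeze(-1)).squeeze(-1)
    return (w * per_sample).mean()
```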
- A Systematic Evaluation of Transfer Learning and Pseudo-labeling with BERT-based Ranking Models [2.0498977512661267]
We evaluate transferability of BERT-based neural ranking models across five English datasets.
Each of our collections has a substantial number of queries, which enables a full-shot evaluation mode.
We find that training on pseudo-labels can produce a competitive or better model compared to transfer learning.
arXiv Detail & Related papers (2021-03-04T21:08:06Z)
- Fairness in Semi-supervised Learning: Unlabeled Data Help to Reduce Discrimination [53.3082498402884]
A growing concern with the rise of machine learning is whether the decisions made by machine learning models are fair.
We present a framework of fair semi-supervised learning in the pre-processing phase, including pseudo labeling to predict labels for unlabeled data.
A theoretical decomposition analysis of bias, variance and noise highlights the different sources of discrimination and the impact they have on fairness in semi-supervised learning.
arXiv Detail & Related papers (2020-09-25T05:48:56Z)
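The pseudo-labeling step of that pre-processing framework reads, in sketch form, as below; scikit-learn is used for brevity, the confidence threshold is an illustrative assumption, and the paper's fairness-specific analysis and reweighting are omitted:
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pseudo_label_and_train(X_lab, y_lab, X_unlab, threshold=0.9):
    # Fit on the labelled data, label the unlabelled pool where the model
    # is confident, then retrain on the union.
    clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    probs = clf.predict_proba(X_unlab)
    confident = probs.max(axis=1) >= threshold
    pseudo = clf.classes_[probs[confident].argmax(axis=1)]
    X_aug = np.vstack([X_lab, X_unlab[confident]])
    y_aug = np.concatenate([y_lab, pseudo])
    return LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
```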
This list is automatically generated from the titles and abstracts of the papers on this site.