Robust Learning from Discriminative Feature Feedback
- URL: http://arxiv.org/abs/2003.03946v1
- Date: Mon, 9 Mar 2020 06:45:52 GMT
- Title: Robust Learning from Discriminative Feature Feedback
- Authors: Sanjoy Dasgupta and Sivan Sabato
- Abstract summary: We introduce a more realistic, robust version of the framework, in which the annotator is allowed to make mistakes.
We show how such errors can be handled algorithmically, in both an adversarial and a stochastic setting.
- Score: 32.18449686637963
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent work introduced the model of learning from discriminative feature
feedback, in which a human annotator not only provides labels of instances, but
also identifies discriminative features that highlight important differences
between pairs of instances. It was shown that such feedback can be conducive to
learning, and makes it possible to efficiently learn some concept classes that
would otherwise be intractable. However, these results all relied upon perfect
annotator feedback. In this paper, we introduce a more realistic, robust
version of the framework, in which the annotator is allowed to make mistakes.
We show how such errors can be handled algorithmically, in both an adversarial
and a stochastic setting. In particular, we derive regret bounds in both
settings that, as in the case of a perfect annotator, are independent of the
number of features. We show that this result cannot be obtained by a naive
reduction from the robust setting to the non-robust setting.
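To make the feedback model concrete, here is a minimal toy sketch of the interaction loop (all names are invented for illustration; this is not the paper's algorithm, and the robust version must additionally tolerate occasional erroneous features from the annotator):

```python
# Toy sketch: instances are frozensets of named binary features; the learner
# keeps one group per "reason" and refines it using discriminative features.

def learn(stream):
    groups = []   # each group: [representative, label, excluded_features]
    mistakes = 0
    for x, y in stream:
        # Predict with the first group that shares a feature with x and
        # from which x has not been explicitly excluded.
        g = next((g for g in groups if g[0] & x and not (g[2] & x)), None)
        if g is not None and g[1] == y:
            continue                          # correct prediction, no feedback
        mistakes += 1
        if g is not None:
            # Annotator feedback on a mistake: a feature present in x but
            # absent from the group's representative, i.e. a feature that
            # discriminates the confused pair of instances.
            diff = x - g[0]
            if diff:
                g[2].add(next(iter(diff)))    # exclude such x from this group
        groups.append([x, y, set()])          # open a new group around x
    return groups, mistakes

stream = [
    (frozenset({"wings", "beak"}), "bird"),
    (frozenset({"wings", "fur"}), "bat"),     # confused with "bird" at first
    (frozenset({"wings", "fur", "ears"}), "bat"),
]
print(learn(stream)[1])   # 2 mistakes; the third instance is predicted correctly
```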
Related papers
- Weakly-Supervised Contrastive Learning for Imprecise Class Labels [50.57424331797865]
We introduce the concept of "continuous semantic similarity" to define positive and negative pairs.
We propose a graph-theoretic framework for weakly-supervised contrastive learning.
Our framework is highly versatile and can be applied to many weakly-supervised learning scenarios.
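A hedged sketch of how a continuous similarity could replace hard positive/negative pairs in an InfoNCE-style loss (illustrative only; the paper's graph-theoretic objective is not reproduced here):

```python
import numpy as np

def soft_contrastive_loss(z, sim, temperature=0.1):
    # z: (n, d) L2-normalised embeddings; sim: (n, n) continuous semantic
    # similarity in [0, 1] with a zero diagonal. Each pair contributes to
    # the objective in proportion to its similarity instead of through a
    # hard positive/negative split.
    logits = z @ z.T / temperature
    np.fill_diagonal(logits, -1e9)   # never contrast a sample with itself
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -(sim * log_p).sum() / sim.sum()
```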
arXiv Detail & Related papers (2025-05-28T06:50:40Z)
- Probably Approximately Precision and Recall Learning [62.912015491907994]
Precision and Recall are foundational metrics in machine learning.
One-sided feedback--where only positive examples are observed during training--is inherent in many practical problems.
We introduce a PAC learning framework where each hypothesis is represented by a graph, with edges indicating positive interactions.
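With hypotheses represented as edge sets of positive interactions, precision and recall reduce to edge-set overlap against the (unknown) target graph; a minimal illustration:

```python
def precision_recall(h_edges, g_edges):
    # h_edges: edges predicted by the hypothesis graph;
    # g_edges: edges of the true target graph.
    tp = len(h_edges & g_edges)
    precision = tp / len(h_edges) if h_edges else 1.0
    recall = tp / len(g_edges) if g_edges else 1.0
    return precision, recall

h = {("u1", "drugA"), ("u2", "drugB")}
g = {("u1", "drugA"), ("u3", "drugC")}
print(precision_recall(h, g))   # (0.5, 0.5)
```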
arXiv Detail & Related papers (2024-11-20T04:21:07Z)
- ACTOR: Active Learning with Annotator-specific Classification Heads to Embrace Human Label Variation [35.10805667891489]
Active learning, as an annotation cost-saving strategy, has not been fully explored in the context of learning from disagreement.
We show that in the active learning setting, a multi-head model performs significantly better than a single-head model in terms of uncertainty estimation.
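One simple way to turn a multi-head model into an uncertainty estimate is to score the disagreement among the heads, e.g. the mean KL divergence of each head from the ensemble mean (a sketch, not necessarily the paper's measure):

```python
import numpy as np

def head_disagreement(probs, eps=1e-12):
    # probs: (num_heads, num_classes) predictive distributions of the
    # annotator-specific heads for one example; active learning would
    # prioritise the examples with the highest scores.
    mean = probs.mean(axis=0)
    return float(np.mean([(p * np.log((p + eps) / (mean + eps))).sum()
                          for p in probs]))

probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]])   # three heads disagree
print(head_disagreement(probs))
```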
arXiv Detail & Related papers (2023-10-23T14:26:43Z)
- Shrinking Class Space for Enhanced Certainty in Semi-Supervised Learning [59.44422468242455]
We propose a novel method dubbed ShrinkMatch to learn from uncertain samples.
For each uncertain sample, it adaptively seeks a shrunk class space, which merely contains the original top-1 class.
We then impose a consistency regularization between a pair of strongly and weakly augmented samples in the shrunk space to strive for discriminative representations.
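A hedged sketch of consistency regularization inside a shrunk class space (the mask here is hand-picked; the paper's rule for choosing which classes to drop is not reproduced):

```python
import numpy as np

def shrunk_consistency(p_weak, p_strong, keep):
    # keep: boolean mask over classes retaining the weak view's top-1 class
    # while dropping its confusing competitors; both predictions are then
    # renormalised inside the shrunk space before the usual consistency term.
    q_w = p_weak[keep] / p_weak[keep].sum()
    q_s = p_strong[keep] / p_strong[keep].sum()
    return -np.sum(q_w * np.log(q_s + 1e-12))   # cross-entropy, shrunk space

p_weak = np.array([0.46, 0.44, 0.06, 0.04])   # uncertain: classes 0/1 confused
keep = np.array([True, False, True, True])    # drop the top-1's competitor
p_strong = np.array([0.50, 0.30, 0.12, 0.08])
print(shrunk_consistency(p_weak, p_strong, keep))
```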
arXiv Detail & Related papers (2023-08-13T14:05:24Z)
- Counterfactual Reasoning for Bias Evaluation and Detection in a Fairness under Unawareness setting [6.004889078682389]
Current AI regulations require discarding sensitive features in the algorithm's decision-making process to prevent unfair outcomes.
We propose a way to reveal the potential hidden bias of a machine learning model that can persist even when sensitive features are discarded.
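As a rough illustration of the counterfactual idea (not the paper's procedure; `predict` is any scikit-learn-style classifier), one can flip the sensitive attribute on the data side and watch whether proxy features still move the prediction:

```python
import numpy as np

def counterfactual_flip_rate(model, X, X_cf):
    # X_cf: the same individuals with the sensitive attribute counterfactually
    # flipped and correlated proxy features adjusted accordingly. A high flip
    # rate signals bias that persists even though the model never observes
    # the sensitive attribute itself.
    return float(np.mean(model.predict(X) != model.predict(X_cf)))
```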
arXiv Detail & Related papers (2023-02-16T10:36:18Z)
- Improved Robust Algorithms for Learning with Discriminative Feature Feedback [21.58493386054356]
Discriminative Feature Feedback is a protocol for interactive learning based on feature explanations that are provided by a human teacher.
We provide new robust interactive learning algorithms for the Discriminative Feature Feedback model.
arXiv Detail & Related papers (2022-09-08T12:11:12Z)
- Agree to Disagree: Diversity through Disagreement for Better Transferability [54.308327969778155]
We propose D-BAT (Diversity-By-disAgreement Training), which enforces agreement among the models on the training data but encourages disagreement on out-of-distribution data.
We show how D-BAT naturally emerges from the notion of generalized discrepancy.
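For binary classification, the disagreement term can be sketched as a penalty on the probability that two ensemble members agree on unlabeled out-of-distribution data (a hedged reading of the abstract; the full loss also keeps ordinary cross-entropy on the labelled data, where the members must still agree):

```python
import numpy as np

def dbat_disagreement_term(p1, p2, eps=1e-12):
    # p1, p2: class-1 probabilities of two ensemble members on an unlabeled
    # out-of-distribution batch; minimising log-agreement pushes them apart.
    agreement = p1 * p2 + (1.0 - p1) * (1.0 - p2)
    return float(np.mean(np.log(agreement + eps)))
```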
arXiv Detail & Related papers (2022-02-09T12:03:02Z)
- Fair Interpretable Learning via Correction Vectors [68.29997072804537]
We propose a new framework for fair representation learning centered around the learning of "correction vectors".
The corrections are then simply added to the original features, and can therefore be analyzed as an explicit penalty or bonus for each feature.
We show experimentally that a fair representation learning problem constrained in such a way does not impact performance.
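The mechanism is simple enough to show in a few lines (illustrative values; in the paper the correction vector is learned under fairness constraints rather than fixed by hand):

```python
import numpy as np

X = np.array([[1.0, 5.0],
              [2.0, 3.0]])     # original features
w = np.array([0.0, -1.5])      # learned correction: feature 2 is penalised
X_fair = X + w                 # corrected representation, same feature axes
print(X_fair)
```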
arXiv Detail & Related papers (2022-01-17T10:59:33Z)
- Disjoint Contrastive Regression Learning for Multi-Sourced Annotations [10.159313152511919]
Large-scale datasets are important for the development of deep learning models.
Multiple annotators may be employed to label different subsets of the data.
The inconsistency and bias among different annotators are harmful to the model training.
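A speculative sketch of the disjoint idea, assuming a regression setting: compare predictions with labels only inside each annotator's subset, after per-subset centring, so constant offsets between annotators' rating scales never enter the loss (not the paper's exact contrastive formulation):

```python
import numpy as np

def disjoint_regression_loss(preds, targets, annotator_ids):
    total, count = 0.0, 0
    for a in np.unique(annotator_ids):
        idx = annotator_ids == a                 # one annotator's subset
        p, t = preds[idx], targets[idx]
        # centring removes the annotator's constant bias before comparison
        total += np.sum(((p - p.mean()) - (t - t.mean())) ** 2)
        count += int(idx.sum())
    return total / max(count, 1)
```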
arXiv Detail & Related papers (2021-12-31T12:39:04Z)
- Exploiting Sample Uncertainty for Domain Adaptive Person Re-Identification [137.9939571408506]
We estimate and exploit the credibility of the assigned pseudo-label of each sample to alleviate the influence of noisy labels.
Our uncertainty-guided optimization brings significant improvement and achieves the state-of-the-art performance on benchmark datasets.
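The weighting step can be sketched as a credibility-weighted loss (the paper estimates credibility from prediction consistency; the helper below is illustrative):

```python
import numpy as np

def credibility_weighted_loss(sample_losses, credibility):
    # sample_losses: per-sample losses under the assigned pseudo-labels;
    # credibility: estimated reliability in [0, 1] of each pseudo-label.
    # Down-weighting low-credibility samples softens the label noise.
    c = np.asarray(credibility, dtype=float)
    return float(np.sum(c * sample_losses) / (np.sum(c) + 1e-12))
```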
arXiv Detail & Related papers (2020-12-16T04:09:04Z)
- Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
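A hedged PyTorch sketch of gradient supervision from a counterfactual pair: align the input-gradient of the model's score at x with the direction toward its minimally-different counterpart (one reading of the abstract; names are illustrative):

```python
import torch

def gradient_supervision_loss(model, x, x_cf):
    # x, x_cf: a batch of examples and their minimally-different
    # counterfactuals with the opposite label.
    x = x.clone().requires_grad_(True)
    score = model(x).sum()                     # scalar score, summed over batch
    (grad,) = torch.autograd.grad(score, x, create_graph=True)
    direction = (x_cf - x).detach()            # direction in which the label flips
    cos = torch.nn.functional.cosine_similarity(
        grad.flatten(1), direction.flatten(1))
    return (1.0 - cos).mean()                  # auxiliary alignment loss
```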
arXiv Detail & Related papers (2020-04-20T02:47:49Z)
- Inverse Feature Learning: Feature learning based on Representation Learning of Error [12.777440204911022]
This paper proposes inverse feature learning as a novel supervised feature learning technique that learns a set of high-level features for classification based on an error representation approach.
The proposed method results in significantly better performance compared to the state-of-the-art classification techniques for several popular data sets.
arXiv Detail & Related papers (2020-03-08T00:22:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.