Using Early Readouts to Mediate Featural Bias in Distillation
- URL: http://arxiv.org/abs/2310.18590v2
- Date: Wed, 8 Nov 2023 13:13:13 GMT
- Title: Using Early Readouts to Mediate Featural Bias in Distillation
- Authors: Rishabh Tiwari, Durga Sivasubramanian, Anmol Mekala, Ganesh
Ramakrishnan, Pradeep Shenoy
- Abstract summary: Deep networks tend to learn spurious feature-label correlations in real-world supervised learning tasks.
We propose a novel early readout mechanism whereby we attempt to predict the label using representations from earlier network layers.
- Score: 30.5299408494168
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep networks tend to learn spurious feature-label correlations in real-world
supervised learning tasks. This vulnerability is aggravated in distillation,
where a student model may have less representational capacity than the
corresponding teacher model. Often, knowledge of specific spurious correlations
is used to reweight instances and rebalance the learning process. We propose a
novel early readout mechanism whereby we attempt to predict the label using
representations from earlier network layers. We show that these early readouts
automatically identify problem instances or groups in the form of confident,
incorrect predictions. Leveraging these signals to modulate the distillation
loss on an instance level allows us to substantially improve not only group
fairness measures across benchmark datasets, but also overall accuracy of the
student model. We also provide secondary analyses that bring insight into the
role of feature learning in supervision and distillation.
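A minimal sketch of the early-readout reweighting described above, assuming a PyTorch pipeline; the helper names, the temperature T, the mixing weight alpha, and the exact form of the weighting rule are illustrative assumptions rather than the paper's published recipe:

```python
import torch
import torch.nn.functional as F

def early_readout_weights(early_logits, labels):
    """Upweight instances the early readout gets confidently wrong."""
    probs = F.softmax(early_logits, dim=-1)
    conf, pred = probs.max(dim=-1)      # early-layer confidence and prediction
    wrong = (pred != labels).float()    # 1.0 where the early readout errs
    return 1.0 + wrong * conf           # confident mistakes get weight > 1

def distillation_loss(student_logits, teacher_logits, early_logits,
                      labels, T=4.0, alpha=0.5):
    w = early_readout_weights(early_logits, labels).detach()
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  F.softmax(teacher_logits / T, dim=-1),
                  reduction='none').sum(-1) * (T * T)   # per-instance KD term
    ce = F.cross_entropy(student_logits, labels, reduction='none')
    return (w * (alpha * kd + (1 - alpha) * ce)).mean() # instance-level modulation
```

The early readout itself can be as simple as a linear classifier attached to an intermediate feature map; only the per-instance weighting above depends on it.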
Related papers
- Improving Network Interpretability via Explanation Consistency Evaluation [56.14036428778861]
We propose a framework that acquires more explainable activation heatmaps and simultaneously increases model performance.
Specifically, our framework introduces a new metric, i.e., explanation consistency, to reweight the training samples adaptively in model learning.
Our framework then promotes the model learning by paying closer attention to those training samples with a high difference in explanations.
arXiv Detail & Related papers (2024-08-08T17:20:08Z)
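A hedged sketch of the explanation-consistency idea from the entry above, using input gradients as the explanation and a random perturbation as the second view; both choices, and the weighting formula, are assumptions for illustration:

```python
import torch
import torch.nn.functional as F

def explanation_consistency(model, x, y, eps=0.01):
    """Cosine similarity between input-gradient explanations of x and a
    slightly perturbed copy; low values flag inconsistent explanations."""
    def input_grad(inp):
        inp = inp.clone().requires_grad_(True)
        loss = F.cross_entropy(model(inp), y)
        return torch.autograd.grad(loss, inp)[0].flatten(1)
    g1 = input_grad(x)
    g2 = input_grad(x + eps * torch.randn_like(x))
    return F.cosine_similarity(g1, g2, dim=1)   # per-sample, in [-1, 1]

def consistency_weights(model, x, y):
    # Samples whose explanations disagree the most get the largest weights.
    return (2.0 - explanation_consistency(model, x, y)).detach()
```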
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to collect in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z)
- Using Positive Matching Contrastive Loss with Facial Action Units to mitigate bias in Facial Expression Recognition [6.015556590955814]
We propose to mitigate bias by guiding the model's focus towards task-relevant features using domain knowledge.
We show that incorporating task-relevant features via our method can improve model fairness at minimal cost to classification performance.
arXiv Detail & Related papers (2023-03-08T21:28:02Z)
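A minimal sketch of a positive-matching contrastive loss as in the entry above: samples sharing the same facial action units (the task-relevant domain knowledge) are treated as positives. The pairing rule, the AU encoding, and the temperature are assumptions:

```python
import torch
import torch.nn.functional as F

def positive_matching_contrastive(emb, au_codes, temperature=0.1):
    """emb: (N, D) embeddings; au_codes: (N,) integer AU signatures."""
    z = F.normalize(emb, dim=1)
    sim = z @ z.t() / temperature                                  # (N, N)
    pos = (au_codes.unsqueeze(0) == au_codes.unsqueeze(1)).float()
    pos.fill_diagonal_(0)                                          # no self-pairs
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    log_prob = F.log_softmax(sim.masked_fill(self_mask, -1e9), dim=1)
    denom = pos.sum(1).clamp(min=1)                 # avoid division by zero
    return -(pos * log_prob).sum(1).div(denom).mean()  # mean log-lik of positives
```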
- Self-supervised debiasing using low rank regularization [59.84695042540525]
Spurious correlations can cause strong biases in deep neural networks, impairing generalization ability.
We propose a self-supervised debiasing framework potentially compatible with unlabeled samples.
Remarkably, the proposed debiasing framework significantly improves the generalization performance of self-supervised learning baselines.
arXiv Detail & Related papers (2022-10-11T08:26:19Z)
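A hedged sketch of the low-rank ingredient named in the entry above: the nuclear norm (sum of singular values) of a feature batch as a rank surrogate. How the paper combines it with the task loss is not specified here and would need to follow the paper:

```python
import torch

def nuclear_norm(features):
    """features: (N, D) batch of representations; the sum of singular
    values is a convex surrogate for the rank of the feature matrix."""
    return torch.linalg.svdvals(features).sum()

# e.g. total_loss = task_loss + lam * nuclear_norm(feats)   # lam assumed
```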
- Probing Representation Forgetting in Supervised and Unsupervised Continual Learning [14.462797749666992]
Catastrophic forgetting is associated with an abrupt loss of knowledge previously learned by a model.
We show that representation forgetting can lead to new insights on the effect of model capacity and loss function used in continual learning.
arXiv Detail & Related papers (2022-03-24T23:06:08Z)
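A minimal sketch of representation-forgetting probing as in the entry above: fit a linear probe on frozen features extracted before and after further training, then compare accuracies. The use of scikit-learn, and logistic regression as the probe, are assumptions:

```python
from sklearn.linear_model import LogisticRegression

def probe_accuracy(feats_train, y_train, feats_test, y_test):
    """Accuracy of an (approximately) optimal linear probe on frozen features."""
    clf = LogisticRegression(max_iter=1000).fit(feats_train, y_train)
    return clf.score(feats_test, y_test)

# Representation forgetting on task 1 after learning task 2:
# forgetting = acc_before - acc_after, where each accuracy comes from
# probe_accuracy on the same task-1 data, with features extracted
# before and after training on task 2.
```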
- Task-Agnostic Robust Representation Learning [31.818269301504564]
We study the problem of robust representation learning with unlabeled data in a task-agnostic manner.
We derive an upper bound on the adversarial loss of a prediction model on any downstream task, using its loss on the clean data and a robustness regularizer.
Our method achieves preferable adversarial performance compared to relevant baselines.
arXiv Detail & Related papers (2022-03-15T02:05:11Z)
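A hedged sketch of the "clean loss plus robustness regularizer" decomposition from the entry above: penalize how far a small perturbation can move the representation, with a single gradient step standing in for the worst case. The one-step approximation, epsilon, and choice of norm are all assumptions:

```python
import torch

def robustness_regularizer(encoder, x, eps=8 / 255):
    """Feature-space movement under an (approximate) worst-case perturbation."""
    delta = (1e-3 * torch.randn_like(x)).requires_grad_(True)
    dist = (encoder(x + delta) - encoder(x)).flatten(1).norm(dim=1).sum()
    grad, = torch.autograd.grad(dist, delta)   # ascent direction on delta
    adv = (x + eps * grad.sign()).detach()     # one FGSM-style step
    return (encoder(adv) - encoder(x)).flatten(1).norm(dim=1).mean()

# total_loss = clean_task_loss + lam * robustness_regularizer(encoder, x)
```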
- Generalizable Information Theoretic Causal Representation [37.54158138447033]
We propose to learn causal representation from observational data by regularizing the learning procedure with mutual information measures according to our hypothetical causal graph.
The optimization involves a counterfactual loss, from which we derive a theoretical guarantee that this causality-inspired learning achieves reduced sample complexity and better generalization.
arXiv Detail & Related papers (2022-02-17T00:38:35Z)
- Deep Semi-supervised Knowledge Distillation for Overlapping Cervical Cell Instance Segmentation [54.49894381464853]
We propose to leverage both labeled and unlabeled data for instance segmentation with improved accuracy by knowledge distillation.
We propose a novel Mask-guided Mean Teacher framework with Perturbation-sensitive Sample Mining.
Experiments show that the proposed method significantly improves performance compared with a supervised method trained on labeled data only.
arXiv Detail & Related papers (2020-07-21T13:27:09Z)
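A minimal sketch of the mean-teacher backbone used by the framework above: the teacher's weights track an exponential moving average (EMA) of the student's, and a consistency loss aligns their predictions on unlabeled images. The mask-guided and perturbation-sensitive components of the paper are omitted:

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    """teacher <- momentum * teacher + (1 - momentum) * student, in place."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(momentum).add_(s, alpha=1 - momentum)

def consistency_loss(student_out, teacher_out):
    # Penalize student/teacher disagreement on (possibly unlabeled) inputs.
    return torch.mean((student_out - teacher_out.detach()) ** 2)
```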
- Automatic Recall Machines: Internal Replay, Continual Learning and the Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead, the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z)
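A hedged sketch of on-the-fly internal replay as described above: synthesize an auxiliary sample from the model's own implicit memory by gradient ascent on its confidence for a previously learned class. The optimizer, step count, and objective are assumptions, not the paper's exact procedure:

```python
import torch
import torch.nn.functional as F

def recall_sample(model, target_class, shape, steps=50, lr=0.1):
    """Optimize a random input until the model confidently predicts target_class."""
    x = torch.randn(1, *shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(model(x), torch.tensor([target_class])).backward()
        opt.step()
    return x.detach()   # train on this alongside the incoming data
```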
- Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z)
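A hedged sketch of gradient supervision on counterfactual pairs as in the entry above: encourage the loss gradient at an input to point toward its minimally different counterexample. The cosine-alignment form is an assumption about the auxiliary objective:

```python
import torch
import torch.nn.functional as F

def gradient_supervision(model, x, y, x_cf):
    """x_cf: counterfactual partner of x carrying a different label."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    g, = torch.autograd.grad(loss, x, create_graph=True)  # differentiable grad
    direction = (x_cf - x).detach().flatten(1)             # toward counterfactual
    return (1 - F.cosine_similarity(g.flatten(1), direction, dim=1)).mean()
```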