Sample Weight Estimation Using Meta-Updates for Online Continual
Learning
- URL: http://arxiv.org/abs/2401.15973v1
- Date: Mon, 29 Jan 2024 09:04:45 GMT
- Title: Sample Weight Estimation Using Meta-Updates for Online Continual
Learning
- Authors: Hamed Hemati, Damian Borth
- Abstract summary: Online Meta-learning for Sample Importance (OMSI) strategy approximates sample weights for a mini-batch in an online CL stream.
OMSI enhances both learning and retained accuracy in a controlled noisy-labeled data stream.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The loss function plays an important role in optimizing the performance of a
learning system. A crucial aspect of the loss function is the assignment of
sample weights within a mini-batch during loss computation. In the context of
continual learning (CL), most existing strategies uniformly treat samples when
calculating the loss value, thereby assigning equal weights to each sample.
While this approach can be effective on certain standard benchmarks, whether
it is optimal, particularly in more complex scenarios, remains underexplored.
This is especially pertinent when training "in the wild," such
as with self-training, where labeling is automated using a reference model.
This paper introduces the Online Meta-learning for Sample Importance (OMSI)
strategy that approximates sample weights for a mini-batch in an online CL
stream using an inner- and meta-update mechanism. This is done by first
estimating sample weight parameters for each sample in the mini-batch and then
updating the model with the adapted sample weights. We evaluate OMSI in two
distinct experimental settings. First, we show that OMSI enhances both learning
and retained accuracy in a controlled noisy-labeled data stream. Then, we test
the strategy in three standard benchmarks and compare it with other popular
replay-based strategies. This research aims to foster the ongoing exploration
in the area of self-adaptive CL.
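To make the inner-/meta-update mechanism concrete, here is a minimal PyTorch sketch of one OMSI-style step. This is an illustration under stated assumptions, not the authors' implementation: the softmax weight parameterization, the replay-drawn meta batch (x_meta, y_meta), the single inner/meta iteration, and the learning rates are all assumptions.
```python
# Hedged sketch of one OMSI-style inner-/meta-update step (assumptions:
# softmax weight parameterization, a replay-drawn meta batch, one inner
# and one meta iteration, illustrative learning rates).
import torch
import torch.nn.functional as F
from torch.func import functional_call  # requires PyTorch >= 2.0

def omsi_step(model, opt, x, y, x_meta, y_meta, inner_lr=0.1, weight_lr=1.0):
    # 1) Learnable per-sample weight logits for the current mini-batch.
    w_logits = torch.zeros(x.size(0), requires_grad=True, device=x.device)

    # 2) Inner update: virtually adapt the model under the weighted loss.
    params = dict(model.named_parameters())
    per_sample = F.cross_entropy(model(x), y, reduction="none")
    weights = torch.softmax(w_logits, dim=0)
    inner_loss = (weights * per_sample).sum()
    grads = torch.autograd.grad(inner_loss, list(params.values()),
                                create_graph=True)
    adapted = {k: p - inner_lr * g
               for (k, p), g in zip(params.items(), grads)}

    # 3) Meta update: evaluate the adapted parameters on the meta batch and
    #    backpropagate through the inner step to refine the sample weights.
    meta_loss = F.cross_entropy(functional_call(model, adapted, (x_meta,)),
                                y_meta)
    w_grad, = torch.autograd.grad(meta_loss, w_logits)
    with torch.no_grad():
        w_logits -= weight_lr * w_grad

    # 4) Real update: train the model with the refined, detached weights.
    final_w = torch.softmax(w_logits, dim=0).detach()
    loss = (final_w * F.cross_entropy(model(x), y, reduction="none")).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```
A full strategy would typically repeat steps 2 and 3 for several iterations before the real update; a single iteration keeps the sketch short.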
Related papers
- Boosting Differentiable Causal Discovery via Adaptive Sample Reweighting [62.23057729112182]
Differentiable score-based causal discovery methods learn a directed acyclic graph from observational data.
We propose a model-agnostic framework to boost causal discovery performance by dynamically learning the adaptive weights for the Reweighted Score function, ReScore.
arXiv Detail & Related papers (2023-03-06T14:49:59Z) - Learning to Select Pivotal Samples for Meta Re-weighting [12.73177872962048]
We study how to identify such a meta sample set from a large, imperfect training set that is subsequently cleaned and used to optimize performance.
We propose two clustering methods within our learning framework: a Representation-based clustering method (RBC) and a Gradient-based clustering method (GBC).
arXiv Detail & Related papers (2023-02-09T03:04:40Z) - Adaptive Distribution Calibration for Few-Shot Learning with
Hierarchical Optimal Transport [78.9167477093745]
We propose a novel distribution calibration method by learning the adaptive weight matrix between novel samples and base classes.
Experimental results on standard benchmarks demonstrate that our proposed plug-and-play model outperforms competing approaches.
arXiv Detail & Related papers (2022-10-09T02:32:57Z) - Learning to Re-weight Examples with Optimal Transport for Imbalanced
Classification [74.62203971625173]
Imbalanced data pose challenges for deep learning based classification models.
One of the most widely-used approaches for tackling imbalanced data is re-weighting.
We propose a novel re-weighting method based on optimal transport (OT) from a distributional point of view.
arXiv Detail & Related papers (2022-08-05T01:23:54Z) - CMW-Net: Learning a Class-Aware Sample Weighting Mapping for Robust Deep
Learning [55.733193075728096]
Modern deep neural networks can easily overfit to biased training data containing corrupted labels or class imbalance.
Sample re-weighting methods are popularly used to alleviate this data bias issue.
We propose a meta-model capable of adaptively learning an explicit weighting scheme directly from data.
arXiv Detail & Related papers (2022-02-11T13:49:51Z) - Delving into Sample Loss Curve to Embrace Noisy and Imbalanced Data [17.7825114228313]
Corrupted labels and class imbalance are commonly encountered in practically collected training data.
Existing approaches alleviate these issues by adopting a sample re-weighting strategy.
However, samples with corrupted labels and samples from tailed classes commonly co-exist in training data.
arXiv Detail & Related papers (2021-12-30T09:20:07Z) - Attentional-Biased Stochastic Gradient Descent [74.49926199036481]
We present a provable method (named ABSGD) for addressing the data imbalance or label noise problem in deep learning.
Our method is a simple modification of momentum SGD in which each sample in the mini-batch is assigned an individual importance weight (see the sketch after this list).
ABSGD is flexible enough to combine with other robust losses without any additional cost.
arXiv Detail & Related papers (2020-12-13T03:41:52Z) - Learning a Unified Sample Weighting Network for Object Detection [113.98404690619982]
Region sampling and weighting are critically important to the success of modern region-based object detectors.
We argue that sample weighting should be data-dependent and task-dependent.
We propose a unified sample weighting network to predict a sample's task weights.
arXiv Detail & Related papers (2020-06-11T16:19:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.