Lightweight Conditional Model Extrapolation for Streaming Data under Class-Prior Shift
- URL: http://arxiv.org/abs/2206.05181v1
- Date: Fri, 10 Jun 2022 15:19:52 GMT
- Title: Lightweight Conditional Model Extrapolation for Streaming Data under Class-Prior Shift
- Authors: Paulina Tomaszewska and Christoph H. Lampert
- Abstract summary: We introduce LIMES, a new method for learning with non-stationary streaming data.
We learn a single set of model parameters from which a specific classifier for any specific data distribution is derived.
Experiments on a set of exemplary tasks using Twitter data show that LIMES achieves higher accuracy than alternative approaches.
- Score: 27.806085423595334
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce LIMES, a new method for learning with non-stationary streaming
data, inspired by the recent success of meta-learning. The main idea is not to
attempt to learn a single classifier that would have to work well across all
occurring data distributions, nor many separate classifiers, but to exploit a
hybrid strategy: we learn a single set of model parameters from which a
specific classifier for any specific data distribution is derived via
classifier adaptation. Assuming a multi-class classification setting with
class-prior shift, the adaptation step can be performed analytically with only
the classifier's bias terms being affected. Another contribution of our work is
an extrapolation step that predicts suitable adaptation parameters for future
time steps based on the previous data. In combination, we obtain a lightweight
procedure for learning from streaming data with varying class distribution that
adds no trainable parameters and almost no memory or computational overhead
compared to training a single model. Experiments on a set of exemplary tasks
using Twitter data show that LIMES achieves higher accuracy than alternative
approaches, especially with respect to the relevant real-world metric of lowest
within-day accuracy.
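The two ingredients named in the abstract, analytic bias adaptation and extrapolation of adaptation parameters, can be pictured in a few lines. The sketch below assumes a softmax classifier and a linear trend over past class frequencies; the function names and the linear extrapolator are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def adapt_biases(biases, train_priors, target_priors):
    """Analytic bias correction under class-prior shift: for a softmax
    classifier, shifting class c's bias by log(target_prior_c /
    train_prior_c) re-weights the posterior for the new prior.
    Standard prior-correction rule; the paper's derivation may
    differ in details."""
    return biases + np.log(target_priors) - np.log(train_priors)

def extrapolate_priors(history, horizon=1):
    """Predict the next step's class priors by fitting a least-squares
    line to each class's past frequencies and evaluating one step
    ahead. The linear model is an illustrative choice."""
    T = history.shape[0]
    t = np.arange(T)
    slope, intercept = np.polyfit(t, history, deg=1)  # per-class fits
    pred = slope * (T - 1 + horizon) + intercept
    pred = np.clip(pred, 1e-6, None)
    return pred / pred.sum()                          # renormalize

# toy usage: class 1's share shrinks over three time steps
history = np.array([[0.2, 0.8], [0.3, 0.7], [0.4, 0.6]])
next_priors = extrapolate_priors(history)
biases = adapt_biases(np.zeros(2), history.mean(axis=0), next_priors)
```

Note how lightweight this is: the adaptation touches only the bias vector, and the extrapolation needs only the stream's past label frequencies, matching the abstract's claim of no trainable parameters and almost no overhead.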
Related papers
- Geometry-Aware Adaptation for Pretrained Models [15.715395029966812]
We propose a drop-in replacement of the standard prediction rule, swapping argmax with the Fréchet mean.
Our proposed approach, Loki, gains up to a 29.7% relative improvement over SimCLR on ImageNet.
When no metric over the classes is available, Loki can use self-derived metrics from class embeddings and obtains a 10.5% improvement on pretrained zero-shot models.
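A minimal sketch of the swapped-in prediction rule described above: instead of taking the argmax of the class posterior, choose the class minimizing the expected squared distance under a metric over the label space. The helper name and the toy one-dimensional metric are assumptions.

```python
import numpy as np

def frechet_mean_predict(probs, dist):
    """Fréchet-mean prediction: return the class y minimizing
    sum_c probs[c] * dist[y, c]**2, i.e. the Fréchet mean of the
    posterior under the given class metric. `dist` is a (C, C)
    matrix of pairwise class distances."""
    expected_sq_dist = (dist ** 2) @ probs
    return int(np.argmin(expected_sq_dist))

# toy usage: three ordered classes with d(i, j) = |i - j|;
# argmax would pick class 2, the Fréchet mean picks the middle class
dist = np.abs(np.subtract.outer(np.arange(3), np.arange(3))).astype(float)
print(frechet_mean_predict(np.array([0.4, 0.1, 0.5]), dist))  # -> 1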
arXiv Detail & Related papers (2023-07-23T04:48:41Z) - RanPAC: Random Projections and Pre-trained Models for Continual Learning [59.07316955610658]
Continual learning (CL) aims to learn different tasks (such as classification) in a non-stationary data stream without forgetting old ones.
We propose a concise and effective approach for CL with pre-trained models.
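A rough sketch of the kind of pipeline the title suggests: frozen pre-trained features pass through a fixed random projection with a nonlinearity, and only class statistics for a closed-form ridge classifier are accumulated, so nothing is trained and old tasks are not overwritten. Dimensions and the ridge constant are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D, M, C = 512, 1024, 10                # feature dim, projection dim, classes
W_rand = rng.standard_normal((D, M))   # fixed random projection

def project(feats):
    """Nonlinear random projection of frozen backbone features."""
    return np.maximum(feats @ W_rand, 0.0)

G = np.zeros((M, M))                   # running Gram matrix
Q = np.zeros((M, C))                   # running feature-label correlation

def update(feats, labels):
    """Accumulate statistics from one batch of any task's data."""
    global G, Q
    H = project(feats)
    Y = np.eye(C)[labels]
    G += H.T @ H
    Q += H.T @ Y

def classify(feats, ridge=100.0):
    """Closed-form ridge classifier over the projected features."""
    beta = np.linalg.solve(G + ridge * np.eye(M), Q)
    return project(feats) @ beta
```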
arXiv Detail & Related papers (2023-07-05T12:49:02Z) - Continual Learning with Optimal Transport based Mixture Model [17.398605698033656]
We propose an online mixture model learning approach built on well-established properties of optimal transport theory (OT-MM).
Our proposed method can significantly outperform the current state-of-the-art baselines.
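The summary gives little detail, but a common way to combine optimal transport with online mixture learning is to compute entropy-regularized OT assignments between a batch and the mixture components, then move each component toward its assigned mass. The sketch below is that generic construction, not necessarily the paper's OT-MM updates.

```python
import numpy as np

def sinkhorn_assign(cost, eps=0.1, iters=50):
    """Entropy-regularized OT plan between n uniform batch points and
    k uniform components: rows sum to 1/n, columns to 1/k."""
    n, k = cost.shape
    K = np.exp(-cost / eps)
    u, v = np.ones(n), np.ones(k)
    for _ in range(iters):
        u = (1.0 / n) / (K @ v)
        v = (1.0 / k) / (K.T @ u)
    return u[:, None] * K * v[None, :]

def online_mixture_step(means, batch, lr=0.1):
    """Move each component mean toward the OT-weighted batch average."""
    cost = ((batch[:, None, :] - means[None, :, :]) ** 2).sum(-1)
    P = sinkhorn_assign(cost)
    w = P.sum(axis=0)                                # mass per component
    targets = (P.T @ batch) / np.maximum(w[:, None], 1e-12)
    return means + lr * (targets - means)
```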
arXiv Detail & Related papers (2022-11-30T06:40:29Z) - Intra-class Adaptive Augmentation with Neighbor Correction for Deep Metric Learning [99.14132861655223]
We propose a novel intra-class adaptive augmentation (IAA) framework for deep metric learning.
We estimate intra-class variations for every class and generate adaptive synthetic samples to support hard-sample mining.
Our method outperforms state-of-the-art methods, improving retrieval performance by 3%-6%.
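One plausible reading of per-class adaptive synthesis: estimate each class's embedding spread and sample synthetic points around real ones, scaled by that spread. Everything here (the Gaussian model, the scale factor) is an assumption for illustration.

```python
import numpy as np

def intra_class_synthesize(emb, labels, num_classes, scale=0.5, k=4):
    """For each class, estimate per-dimension spread of its embeddings
    and draw k synthetic samples around randomly chosen real ones,
    with noise proportional to that class's estimated variation."""
    rng = np.random.default_rng(0)
    xs, ys = [], []
    for c in range(num_classes):
        X = emb[labels == c]
        if len(X) < 2:          # need at least two points for a spread
            continue
        std = X.std(axis=0)
        base = X[rng.integers(len(X), size=k)]
        xs.append(base + scale * std * rng.standard_normal(base.shape))
        ys.append(np.full(k, c))
    return np.concatenate(xs), np.concatenate(ys)
```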
arXiv Detail & Related papers (2022-11-29T14:52:38Z) - Classifier Transfer with Data Selection Strategies for Online Support Vector Machine Classification with Class Imbalance [1.2599533416395767]
We focus on data selection strategies which limit the size of the stored training data.
We show that by using the right combination of data selection criteria, it is possible to adapt the classifier and substantially increase performance.
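A minimal example of one such selection strategy: a bounded, class-balanced buffer that keeps only the most recent samples per class, so the stored training set for the online classifier stays small and counteracts class imbalance. The buffer size and recency criterion are illustrative; the paper compares several criteria.

```python
import numpy as np
from collections import deque

class ClassBalancedBuffer:
    """Keep at most `per_class` recent samples per class for
    retraining an online classifier under class imbalance."""
    def __init__(self, per_class=50):
        self.per_class = per_class
        self.store = {}

    def add(self, x, label):
        buf = self.store.setdefault(label, deque(maxlen=self.per_class))
        buf.append(x)

    def training_set(self):
        X = [x for buf in self.store.values() for x in buf]
        y = [lbl for lbl, buf in self.store.items() for _ in buf]
        return np.asarray(X), np.asarray(y)
```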
arXiv Detail & Related papers (2022-08-10T02:36:20Z) - CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) addresses distribution shift by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed Class-Aware Feature Alignment (CAFA), which encourages a model to learn target representations in a class-discriminative manner.
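A simplified sketch of class-aware alignment: pull each pseudo-labeled test feature toward the source statistics of its class via a Mahalanobis distance, rather than matching a single global distribution. The pseudo-labeling and the exact distance are assumptions about the construction.

```python
import numpy as np

def class_aware_alignment(feats, pseudo_labels, mu, cov_inv):
    """Mean squared Mahalanobis distance of each test feature to the
    source-domain statistics (mu[c], cov_inv[c]) of its pseudo-label,
    encouraging class-discriminative target representations."""
    diffs = feats - mu[pseudo_labels]                     # (n, d)
    return np.einsum('nd,nde,ne->n', diffs,
                     cov_inv[pseudo_labels], diffs).mean()
```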
arXiv Detail & Related papers (2022-06-01T03:02:07Z) - CMW-Net: Learning a Class-Aware Sample Weighting Mapping for Robust Deep Learning [55.733193075728096]
Modern deep neural networks can easily overfit to biased training data containing corrupted labels or class imbalance.
Sample re-weighting methods are popularly used to alleviate this data bias issue.
We propose a meta-model capable of adaptively learning an explicit weighting scheme directly from data.
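The weighting scheme can be pictured as a tiny network that maps each sample's loss to a weight; in the paper that mapping is itself meta-learned (with class-aware inputs), whereas the sketch below only shows the forward weighting with placeholder shapes.

```python
import numpy as np

def sample_weights(losses, theta):
    """Tiny loss-to-weight mapping: one hidden layer, sigmoid output
    in (0, 1). `theta = (W1, b1, W2, b2)` would be meta-learned on a
    clean validation set in a CMW-Net-style setup."""
    W1, b1, W2, b2 = theta
    h = np.maximum(losses[:, None] @ W1 + b1, 0.0)   # (n, hidden)
    z = h @ W2 + b2                                  # (n, 1)
    return (1.0 / (1.0 + np.exp(-z))).ravel()

def reweighted_loss(losses, theta):
    w = sample_weights(losses, theta)
    return (w * losses).sum() / max(w.sum(), 1e-12)
```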
arXiv Detail & Related papers (2022-02-11T13:49:51Z) - Few-Shot Incremental Learning with Continually Evolved Classifiers [46.278573301326276]
Few-shot class-incremental learning (FSCIL) aims to design machine learning algorithms that can continually learn new concepts from a few data points.
The difficulty lies in that limited data from new classes not only lead to significant overfitting issues but also exacerbate the notorious catastrophic forgetting problems.
We propose a Continually Evolved Classifier (CEC) that employs a graph model to propagate context information between classifiers for adaptation.
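A generic sketch of propagating context between per-class classifiers with a graph: build a similarity graph over the classifier vectors and mix each with its neighbors through a residual update. This shows only the propagation idea; the actual CEC architecture is more elaborate.

```python
import numpy as np

def evolve_classifiers(W, tau=1.0):
    """One round of graph-based context propagation over per-class
    classifier vectors W (C, d): row-normalized similarity graph,
    then a residual neighbor-weighted update."""
    S = W @ W.T / np.sqrt(W.shape[1])       # pairwise similarities
    S -= S.max(axis=1, keepdims=True)       # numerical stability
    A = np.exp(S / tau)
    A /= A.sum(axis=1, keepdims=True)       # soft adjacency
    return W + A @ W                        # propagate context
```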
arXiv Detail & Related papers (2021-04-07T10:54:51Z) - Meta-Generating Deep Attentive Metric for Few-shot Classification [53.07108067253006]
We present a novel deep metric meta-generation method to generate a specific metric for a new few-shot learning task.
In this study, we structure the metric using a three-layer deep attentive network that is flexible enough to produce a discriminative metric for each task.
We obtain clear performance improvements over state-of-the-art competitors, especially in challenging cases.
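One way to picture meta-generating a task-specific metric: a small generator maps a task summary (e.g. the mean support embedding) to a low-rank factor L, giving a positive semi-definite Mahalanobis metric M = L L^T. The two-layer generator and the summary choice are placeholders for the paper's three-layer attentive network.

```python
import numpy as np

def generate_metric(task_summary, W1, W2, rank=4):
    """Map a task summary vector (d,) to a PSD metric M = L @ L.T via
    a toy two-layer generator; W1: (hidden, d), W2: (d * rank, hidden)."""
    d = task_summary.shape[0]
    h = np.tanh(W1 @ task_summary)
    L = (W2 @ h).reshape(d, rank)
    return L @ L.T

def metric_distance(x, y, M):
    """Task-specific squared distance under the generated metric."""
    diff = x - y
    return diff @ M @ diff
```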
arXiv Detail & Related papers (2020-12-03T02:07:43Z) - A Primal-Dual Subgradient Approach for Fair Meta Learning [23.65344558042896]
Few-shot meta-learning is well known for its fast adaptation and its ability to generalize to unseen tasks.
We propose a Primal-Dual Fair Meta-learning framework, namely PDFM, which learns to train fair machine learning models using only a few examples.
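The title names the optimization scheme: a primal-dual subgradient method for a fairness-constrained objective. A generic single step is sketched below, with the fairness gap (e.g. a demographic-parity difference) and its subgradient supplied by the task; PDFM's meta-learning wrapper is omitted.

```python
import numpy as np

def primal_dual_step(params, lam, g_loss, g_gap, gap, eps=0.05,
                     lr_primal=0.01, lr_dual=0.1):
    """One step on the Lagrangian loss + lam * (gap - eps): subgradient
    descent in the primal variables, projected ascent in the dual."""
    params = params - lr_primal * (g_loss + lam * g_gap)
    lam = max(0.0, lam + lr_dual * (gap - eps))
    return params, lam
```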
arXiv Detail & Related papers (2020-09-26T19:47:38Z) - Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for metric-based few-shot approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence of each query sample in order to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
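A sketch of the transductive update with the confidences treated as given: soft-assign unlabeled queries to prototypes, weight by per-query confidence, and blend the weighted query means into the prototypes. In the paper the confidences are meta-learned; here they are simply an input.

```python
import numpy as np

def refine_prototypes(protos, queries, conf, blend=0.5):
    """Confidence-weighted transductive prototype refinement.
    protos: (C, d), queries: (n, d), conf: (n,) in [0, 1]."""
    d2 = ((queries[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    d2 -= d2.min(axis=1, keepdims=True)      # stabilize the softmax
    p = np.exp(-d2)
    p /= p.sum(axis=1, keepdims=True)        # soft class assignments
    w = p * conf[:, None]                    # confidence weighting
    new = (w.T @ queries) / np.maximum(w.sum(0)[:, None], 1e-12)
    return (1 - blend) * protos + blend * new
```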
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.