Uncertainty-Aware Distillation for Semi-Supervised Few-Shot
Class-Incremental Learning
- URL: http://arxiv.org/abs/2301.09964v1
- Date: Tue, 24 Jan 2023 12:53:06 GMT
- Title: Uncertainty-Aware Distillation for Semi-Supervised Few-Shot
Class-Incremental Learning
- Authors: Yawen Cui, Wanxia Deng, Haoyu Chen, and Li Liu
- Abstract summary: We present a framework named Uncertainty-aware Distillation with Class-Equilibrium (UaD-CE)
We introduce the CE module, which employs class-balanced self-training to prevent easy-to-classify classes from gradually dominating pseudo-label generation.
Comprehensive experiments on three benchmark datasets demonstrate that our method improves how effectively unlabeled data can be exploited.
- Score: 16.90277839119862
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given a model well-trained on a large-scale base dataset, Few-Shot Class-Incremental Learning (FSCIL) aims to incrementally learn novel classes from a few labeled samples while avoiding overfitting and without catastrophically forgetting previously encountered classes. Semi-supervised learning, which harnesses freely available unlabeled data to compensate for limited labeled data, boosts performance in numerous vision tasks and can heuristically be applied to FSCIL, yielding Semi-supervised FSCIL (Semi-FSCIL). So far, very little work has focused on the Semi-FSCIL task, leaving unresolved how well semi-supervised learning adapts to FSCIL. In this paper, we address this adaptability issue and present a simple yet efficient Semi-FSCIL framework named Uncertainty-aware Distillation with Class-Equilibrium (UaD-CE), comprising two modules, UaD and
CE. Specifically, when incorporating unlabeled data into each incremental session, we introduce the CE module, which employs class-balanced self-training to prevent easy-to-classify classes from gradually dominating pseudo-label generation. To distill reliable knowledge from the reference model, we further
implement the UaD module that combines uncertainty-guided knowledge refinement
with adaptive distillation. Comprehensive experiments on three benchmark datasets demonstrate that our method substantially improves how well semi-supervised learning exploits unlabeled data in FSCIL tasks.
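As a concrete illustration, here is a minimal PyTorch sketch of the two ideas the abstract describes, reconstructed from the abstract alone; it is not the authors' implementation, and the per-class quota and the entropy-based certainty proxy are assumptions.

```python
# Hypothetical sketch of the two UaD-CE ingredients as described in the
# abstract; not the authors' code. Assumes a frozen reference (teacher)
# model from the previous session and an unlabeled pool for the current one.
import torch
import torch.nn.functional as F

def class_balanced_pseudo_labels(logits, per_class_quota):
    """CE idea: cap how many pseudo-labels each class may contribute, so
    easy-to-classify classes cannot gradually dominate self-training."""
    probs = F.softmax(logits, dim=1)
    conf, pseudo = probs.max(dim=1)
    keep = torch.zeros_like(pseudo, dtype=torch.bool)
    for c in pseudo.unique():
        idx = (pseudo == c).nonzero(as_tuple=True)[0]
        # keep only the most confident samples of each class, up to the quota
        keep[idx[conf[idx].argsort(descending=True)][:per_class_quota]] = True
    return pseudo, keep

def uncertainty_aware_distillation(student_logits, teacher_logits, T=2.0):
    """UaD idea: down-weight distillation on samples where the teacher is
    uncertain; predictive entropy is the proxy here (an assumption)."""
    t_prob = F.softmax(teacher_logits / T, dim=1)
    entropy = -(t_prob * t_prob.clamp_min(1e-8).log()).sum(dim=1)
    max_entropy = torch.log(torch.tensor(float(teacher_logits.size(1))))
    certainty = (1.0 - entropy / max_entropy).detach()
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1), t_prob,
                  reduction="none").sum(dim=1) * T * T
    return (certainty * kd).mean()
```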
Related papers
- SemiEvol: Semi-supervised Fine-tuning for LLM Adaptation [14.782756931646627]
We introduce a semi-supervised fine-tuning framework named SemiEvol for LLM adaptation, working in a propagate-and-select manner.
For knowledge propagation, SemiEvol adopts a bi-level approach, propagating knowledge from labeled data to unlabeled data through both in-weight and in-context methods.
For knowledge selection, SemiEvol incorporates a collaborative learning mechanism, selecting higher-quality pseudo-response samples.
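A hedged sketch of the propagate-and-select idea just summarized: several model variants answer each unlabeled query, and only answers with high cross-variant agreement are kept as fine-tuning targets. The `generate` API and the voting rule are hypothetical stand-ins, not SemiEvol's actual mechanism.

```python
# Hypothetical propagate-and-select loop in the spirit of the SemiEvol
# summary above; the real collaborative selection mechanism differs.
from collections import Counter

def select_pseudo_responses(unlabeled_queries, model_variants, min_agree=2):
    selected = []
    for q in unlabeled_queries:
        answers = [m.generate(q) for m in model_variants]  # hypothetical API
        answer, votes = Counter(answers).most_common(1)[0]
        if votes >= min_agree:  # keep answers most variants agree on
            selected.append((q, answer))
    return selected
```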
arXiv Detail & Related papers (2024-10-17T16:59:46Z)
- TACLE: Task and Class-aware Exemplar-free Semi-supervised Class Incremental Learning [16.734025446561695]
We propose a novel TACLE framework to address the problem of exemplar-free semi-supervised class incremental learning.
In this scenario, at each new task, the model has to learn new classes from both labeled and unlabeled data.
In addition to leveraging the capabilities of pre-trained models, TACLE proposes a novel task-adaptive threshold.
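A minimal sketch of what a task-adaptive pseudo-labeling threshold can look like; the linear schedule below is an assumption for illustration, not TACLE's actual rule.

```python
# Hypothetical task-adaptive confidence threshold: later tasks, which have
# fewer labels, use a relaxed threshold so more unlabeled data is admitted.
import torch.nn.functional as F

def task_adaptive_pseudo_labels(logits, task_id, base_tau=0.95, decay=0.02):
    tau = max(0.7, base_tau - decay * task_id)  # assumed linear schedule
    conf, pseudo = F.softmax(logits, dim=1).max(dim=1)
    mask = conf >= tau  # only confident predictions become pseudo-labels
    return pseudo[mask], mask
```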
arXiv Detail & Related papers (2024-07-10T20:46:35Z)
- Incremental Self-training for Semi-supervised Learning [56.57057576885672]
IST is simple yet effective and fits existing self-training-based semi-supervised learning methods.
We verify the proposed IST on five datasets and two types of backbones, effectively improving recognition accuracy and learning speed.
arXiv Detail & Related papers (2024-04-14T05:02:00Z)
- Few-Shot Class-Incremental Learning with Prior Knowledge [94.95569068211195]
We propose Learning with Prior Knowledge (LwPK) to enhance the generalization ability of the pre-trained model.
Experimental results indicate that LwPK effectively enhances the model resilience against catastrophic forgetting.
arXiv Detail & Related papers (2024-02-02T08:05:35Z)
- Bias Mitigating Few-Shot Class-Incremental Learning [17.185744533050116]
Few-shot class-incremental learning aims at recognizing novel classes continually with limited novel class samples.
Recent methods somewhat alleviate the accuracy imbalance between base and incremental classes by fine-tuning the feature extractor in the incremental sessions.
We propose a novel method to mitigate model bias of the FSCIL problem during training and inference processes.
arXiv Detail & Related papers (2024-02-01T10:37:41Z)
- Dynamic Sub-graph Distillation for Robust Semi-supervised Continual Learning [52.046037471678005]
We focus on semi-supervised continual learning (SSCL), where the model progressively learns from partially labeled data with unknown categories.
We propose a novel approach called Dynamic Sub-Graph Distillation (DSGD) for semi-supervised continual learning.
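A hedged sketch of structure-level distillation in the spirit of DSGD: rather than matching logits, the student preserves the teacher's pairwise-similarity sub-graphs on unlabeled data. The k-nearest-neighbor masking below is a crude stand-in for the paper's dynamic sub-graph construction.

```python
# Hypothetical sub-graph distillation loss; not DSGD's actual construction.
import torch
import torch.nn.functional as F

def graph_distillation_loss(student_feats, teacher_feats, k=5):
    s = F.normalize(student_feats, dim=1)
    t = F.normalize(teacher_feats, dim=1)
    sim_s, sim_t = s @ s.t(), t @ t.t()
    # match similarities only within each sample's k nearest teacher
    # neighbors -- a crude stand-in for a "sub-graph" (+1 includes self)
    _, nn_idx = sim_t.topk(k + 1, dim=1)
    mask = torch.zeros_like(sim_t).scatter_(1, nn_idx, 1.0)
    return ((sim_s - sim_t) ** 2 * mask).sum() / mask.sum()
```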
arXiv Detail & Related papers (2023-12-27T04:40:12Z)
- Learning in Imperfect Environment: Multi-Label Classification with Long-Tailed Distribution and Partial Labels [53.68653940062605]
We introduce a novel task, Partial labeling and Long-Tailed Multi-Label Classification (PLT-MLC).
We find that most LT-MLC and PL-MLC approaches fail to solve the degradation-MLC.
We propose an end-to-end learning framework: COrrection → ModificatIon → balanCe (COMIC).
arXiv Detail & Related papers (2023-04-20T20:05:08Z)
- Adaptive Negative Evidential Deep Learning for Open-set Semi-supervised Learning [69.81438976273866]
Open-set semi-supervised learning (Open-set SSL) considers a more practical scenario, where unlabeled data and test data contain new categories (outliers) not observed in labeled data (inliers).
We introduce evidential deep learning (EDL) as an outlier detector to quantify different types of uncertainty, and design different uncertainty metrics for self-training and inference.
We propose a novel adaptive negative optimization strategy, making EDL more tailored to the unlabeled dataset containing both inliers and outliers.
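For reference, the standard EDL recipe for turning logits into an explicit uncertainty score is short; this is the textbook formulation (Sensoy et al.), not the paper's adaptive negative optimization.

```python
# Standard evidential deep learning (EDL) uncertainty: treat non-negative
# evidence as Dirichlet concentration parameters; low total evidence means
# high uncertainty, which flags likely outliers.
import torch.nn.functional as F

def edl_uncertainty(logits):
    evidence = F.softplus(logits)       # non-negative evidence per class
    alpha = evidence + 1.0              # Dirichlet concentration parameters
    S = alpha.sum(dim=1, keepdim=True)  # total Dirichlet strength
    prob = alpha / S                    # expected class probabilities
    uncertainty = logits.size(1) / S.squeeze(1)  # u = K / S
    return prob, uncertainty
```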
arXiv Detail & Related papers (2023-03-21T09:07:15Z)
- SoftMatch: Addressing the Quantity-Quality Trade-off in Semi-supervised Learning [101.86916775218403]
This paper revisits the popular pseudo-labeling methods via a unified sample weighting formulation.
We propose SoftMatch to overcome the trade-off by maintaining both high quantity and high quality of pseudo-labels during training.
In experiments, SoftMatch shows substantial improvements across a wide variety of benchmarks, including image, text, and imbalanced classification.
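The core of SoftMatch's weighting can be sketched compactly: instead of a hard confidence threshold, each unlabeled sample receives a weight from a truncated Gaussian over its confidence. In the paper the mean and variance are estimated online; they are fixed constants here for brevity.

```python
# SoftMatch-style truncated-Gaussian sample weighting (simplified): samples
# above the mean confidence get full weight, lower-confidence samples are
# smoothly down-weighted instead of being discarded.
import torch
import torch.nn.functional as F

def soft_weights(logits, mu=0.85, sigma=0.1):
    conf = F.softmax(logits, dim=1).max(dim=1).values
    w = torch.exp(-((conf - mu) ** 2) / (2 * sigma ** 2))
    w = torch.where(conf >= mu, torch.ones_like(w), w)
    return w  # multiply into the per-sample pseudo-label loss
```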
arXiv Detail & Related papers (2023-01-26T03:53:25Z)
- Rethinking Few-Shot Class-Incremental Learning with Open-Set Hypothesis in Hyperbolic Geometry [21.38183613466714]
Few-Shot Class-Incremental Learning (FSCIL) aims at incrementally learning novel classes from a few labeled samples.
In this paper, we rethink the configuration of FSCIL under the open-set hypothesis, reserving room in the first session for incoming categories.
To endow the model with better performance on both closed-set and open-set recognition, a Hyperbolic Reciprocal Point Learning module (Hyper-RPL) is built upon Reciprocal Point Learning (RPL) with hyperbolic neural networks.
arXiv Detail & Related papers (2022-07-20T15:13:48Z)
- FedSemi: An Adaptive Federated Semi-Supervised Learning Framework [23.90642104477983]
Federated learning (FL) has emerged as an effective technique for co-training machine learning models without sharing data or leaking privacy.
Most existing FL methods focus on the supervised setting and ignore the utilization of unlabeled data.
We propose FedSemi, a novel, adaptive, and general framework that is the first to introduce consistency regularization into FL using a teacher-student model, as sketched below.
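A minimal sketch of the teacher-student consistency ingredient named above, with the federated aggregation omitted; the EMA teacher and weak/strong augmentation split are common choices assumed here, not necessarily FedSemi's exact design.

```python
# Teacher-student consistency regularization (local client step only).
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, m=0.99):
    # teacher weights trail the student as an exponential moving average
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(m).add_(ps, alpha=1 - m)

def consistency_loss(student, teacher, x_weak, x_strong):
    with torch.no_grad():
        target = F.softmax(teacher(x_weak), dim=1)   # teacher: weak view
    pred = F.log_softmax(student(x_strong), dim=1)   # student: strong view
    return F.kl_div(pred, target, reduction="batchmean")
```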
arXiv Detail & Related papers (2020-12-06T15:46:04Z)