Cross-Level Distillation and Feature Denoising for Cross-Domain Few-Shot
Classification
- URL: http://arxiv.org/abs/2311.02392v1
- Date: Sat, 4 Nov 2023 12:28:04 GMT
- Title: Cross-Level Distillation and Feature Denoising for Cross-Domain Few-Shot
Classification
- Authors: Hao Zheng, Runqi Wang, Jianzhuang Liu, Asako Kanezaki
- Abstract summary: We tackle the problem of cross-domain few-shot classification by making a small proportion of unlabeled images in the target domain accessible in the training stage.
We meticulously design a cross-level knowledge distillation method, which can strengthen the ability of the model to extract more discriminative features in the target dataset.
Our approach can surpass the previous state-of-the-art method, Dynamic-Distillation, by 5.44% on 1-shot and 1.37% on 5-shot classification tasks.
- Score: 49.36348058247138
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conventional few-shot classification aims at learning a model on a large
labeled base dataset and rapidly adapting to a target dataset that is from the
same distribution as the base dataset. However, in practice, the base and the
target datasets of few-shot classification are usually from different domains,
which is the problem of cross-domain few-shot classification. We tackle this
problem by making a small proportion of unlabeled images in the target domain
accessible in the training stage. In this setup, even though the base data are
sufficient and labeled, the large domain shift still makes transferring the
knowledge from the base dataset difficult. We meticulously design a cross-level
knowledge distillation method, which can strengthen the ability of the model to
extract more discriminative features in the target dataset by guiding the
network's shallow layers to learn higher-level information. Furthermore, in
order to alleviate the overfitting in the evaluation stage, we propose a
feature denoising operation which can reduce the feature redundancy and
mitigate overfitting. Our approach can surpass the previous state-of-the-art
method, Dynamic-Distillation, by 5.44% on 1-shot and 1.37% on 5-shot
classification tasks on average in the BSCD-FSL benchmark. The implementation
code will be available at https://github.com/jarucezh/cldfd.
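To make the two components above concrete, here is a minimal PyTorch sketch of a cross-level distillation loss, under the assumption that each shallow student block is aligned with a deeper block of a frozen teacher. The class name CrossLevelDistillLoss, the 1x1 projection heads, and the cosine alignment are illustrative choices, not the authors' exact formulation (refer to the repository above for the actual implementation).

```python
# Minimal sketch of a cross-level distillation loss, assuming the general idea
# that a shallow student block is guided by a deeper teacher block. The class
# name, the 1x1 projection heads, and the cosine alignment are illustrative
# choices, not the paper's exact formulation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossLevelDistillLoss(nn.Module):
    def __init__(self, student_dims, teacher_dims):
        super().__init__()
        # One 1x1 conv per pair to project student block k onto the channel
        # width of teacher block k+1.
        self.proj = nn.ModuleList([
            nn.Conv2d(s_dim, t_dim, kernel_size=1)
            for s_dim, t_dim in zip(student_dims[:-1], teacher_dims[1:])
        ])

    def forward(self, student_feats, teacher_feats):
        # student_feats / teacher_feats: lists of feature maps ordered
        # shallow -> deep, e.g. the outputs of the backbone stages.
        loss = 0.0
        for k, proj in enumerate(self.proj):
            s = proj(student_feats[k])         # shallow student block k
            t = teacher_feats[k + 1].detach()  # deeper, frozen teacher block k+1
            s = F.adaptive_avg_pool2d(s, t.shape[-2:])  # match spatial size
            s = F.normalize(s.flatten(1), dim=1)
            t = F.normalize(t.flatten(1), dim=1)
            loss = loss + (1.0 - (s * t).sum(dim=1)).mean()  # cosine alignment
        return loss
```

Similarly, one hedged reading of the feature denoising operation is to keep only the strongest dimensions of each support and query embedding before fitting the few-shot classifier, so that redundant dimensions do not drive overfitting on the tiny support set. The denoise_features helper, its keep ratio, and the magnitude-based selection rule are assumptions for illustration, not the paper's exact rule.

```python
# Hedged sketch of a feature denoising step at evaluation time: zero out all
# but the top-k strongest dimensions of each embedding. The keep ratio and
# the magnitude-based selection are illustrative assumptions.
import torch

def denoise_features(feats: torch.Tensor, keep_ratio: float = 0.5) -> torch.Tensor:
    # feats: (num_samples, feat_dim) embeddings from the frozen backbone.
    k = max(1, int(feats.shape[1] * keep_ratio))
    _, indices = feats.abs().topk(k, dim=1)
    mask = torch.zeros_like(feats)
    mask.scatter_(1, indices, 1.0)
    return feats * mask
```

In a typical evaluation loop, such a step would be applied to both support and query embeddings before fitting a simple classifier (for example, logistic regression) on the support set.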
Related papers
- Informative Data Mining for One-Shot Cross-Domain Semantic Segmentation [84.82153655786183]
We propose a novel framework called Informative Data Mining (IDM) to enable efficient one-shot domain adaptation for semantic segmentation.
IDM provides an uncertainty-based selection criterion to identify the most informative samples, which facilitates quick adaptation and reduces redundant training.
Our approach outperforms existing methods and achieves a new state-of-the-art one-shot performance of 56.7%/55.4% on the GTA5/SYNTHIA to Cityscapes adaptation tasks.
arXiv Detail & Related papers (2023-09-25T15:56:01Z)
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
- CvS: Classification via Segmentation For Small Datasets [52.821178654631254]
This paper presents CvS, a cost-effective classifier for small datasets that derives the classification labels from predicting the segmentation maps.
We evaluate the effectiveness of our framework on diverse problems showing that CvS is able to achieve much higher classification results compared to previous methods when given only a handful of examples.
arXiv Detail & Related papers (2021-10-29T18:41:15Z)
- Dynamic Distillation Network for Cross-Domain Few-Shot Recognition with Unlabeled Data [21.348965677980104]
We tackle the problem of cross-domain few-shot recognition with unlabeled target data.
STARTUP was the first method to tackle this problem using self-training.
We propose a simple dynamic distillation-based approach that exploits unlabeled images from the novel/base dataset.
arXiv Detail & Related papers (2021-06-14T23:44:34Z)
- OVANet: One-vs-All Network for Universal Domain Adaptation [78.86047802107025]
Existing methods manually set a threshold to reject unknown samples based on validation or a pre-defined ratio of unknown samples.
We propose a method to learn the threshold using source samples and to adapt it to the target domain.
Our idea is that a minimum inter-class distance in the source domain should be a good threshold to decide between known or unknown in the target.
arXiv Detail & Related papers (2021-04-07T18:36:31Z)
- Weak Adaptation Learning -- Addressing Cross-domain Data Insufficiency with Weak Annotator [2.8672054847109134]
In some target problem domains, there are not many data samples available, which could hinder the learning process.
We propose a weak adaptation learning (WAL) approach that leverages unlabeled data from a similar source domain.
Our experiments demonstrate the effectiveness of our approach in learning an accurate classifier with limited labeled data in the target domain.
arXiv Detail & Related papers (2021-02-15T06:19:25Z)
- Alleviating Semantic-level Shift: A Semi-supervised Domain Adaptation Method for Semantic Segmentation [97.8552697905657]
A key challenge of this task is how to alleviate the data distribution discrepancy between the source and target domains.
We propose Alleviating Semantic-level Shift (ASS), which can successfully promote the distribution consistency from both global and local views.
We apply our ASS to two domain adaptation tasks, from GTA5 to Cityscapes and from Synthia to Cityscapes.
arXiv Detail & Related papers (2020-04-02T03:25:05Z)
- Weakly-supervised Object Localization for Few-shot Learning and Fine-grained Few-shot Learning [0.5156484100374058]
Few-shot learning aims to learn novel visual categories from very few samples.
We propose a Self-Attention Based Complementary Module (SAC Module) to perform weakly-supervised object localization.
We also produce the activated masks for selecting discriminative deep descriptors for few-shot classification.
arXiv Detail & Related papers (2020-03-02T14:07:05Z)
- Reinforced active learning for image segmentation [34.096237671643145]
We present a new active learning strategy for semantic segmentation based on deep reinforcement learning (RL).
An agent learns a policy to select a subset of small informative image regions -- as opposed to entire images -- to be labeled from a pool of unlabeled data.
Our method proposes a new modification of the deep Q-network (DQN) formulation for active learning, adapting it to the large-scale nature of semantic segmentation problems.
arXiv Detail & Related papers (2020-02-16T14:03:06Z)