AdarGCN: Adaptive Aggregation GCN for Few-Shot Learning
- URL: http://arxiv.org/abs/2002.12641v2
- Date: Mon, 9 Mar 2020 08:05:17 GMT
- Title: AdarGCN: Adaptive Aggregation GCN for Few-Shot Learning
- Authors: Jianhong Zhang, Manli Zhang, Zhiwu Lu, Tao Xiang and Jirong Wen
- Abstract summary: We propose a new few-shot few-shot learning setting termed FSFSL.
Under FSFSL, both the source and target classes have limited training samples.
We also propose a graph convolutional network (GCN)-based label denoising (LDN) method to remove irrelevant images.
- Score: 112.95742995816367
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing few-shot learning (FSL) methods assume that there exist sufficient
training samples from source classes for knowledge transfer to target classes
with few training samples. However, this assumption is often invalid,
especially when it comes to fine-grained recognition. In this work, we define a
new FSL setting termed few-shot few-shot learning (FSFSL), under which both the
source and target classes have limited training samples. To overcome the source
class data scarcity problem, a natural option is to crawl images from the web
with class names as search keywords. However, the crawled images are inevitably
corrupted by a large amount of noise (irrelevant images) and thus may harm the
performance. To address this problem, we propose a graph convolutional network
(GCN)-based label denoising (LDN) method to remove the irrelevant images.
Further, with the cleaned web images as well as the original clean training
images, we propose a GCN-based FSL method. For both the LDN and FSL tasks, a
novel adaptive aggregation GCN (AdarGCN) model is proposed, which differs from
existing GCN models in that adaptive aggregation is performed based on a
multi-head multi-level aggregation module. With AdarGCN, how much and how far
information carried by each graph node is propagated in the graph structure can
be determined automatically, thereby alleviating the effects of both noisy
and outlying training samples. Extensive experiments show the superior
performance of our AdarGCN under both the new FSFSL and the conventional FSL
settings.
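The abstract describes AdarGCN's core idea: several aggregation paths of different propagation depths ("how far") whose contributions are mixed by learned gates ("how much"). A minimal NumPy sketch of that idea follows; all function names, the three-hop depth, and the global softmax gating are illustrative assumptions (the paper's module gates per node and is trained end-to-end), not the authors' implementation:

```python
import numpy as np

def normalize_adj(adj):
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, as in standard GCNs."""
    a = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    return a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def adaptive_aggregation_layer(x, adj, weights, gate_logits):
    """One hypothetical AdarGCN-style layer.

    Three aggregation heads propagate information a different number of
    hops (0 = identity, 1 = one-hop, 2 = two-hop); softmax gates decide
    how much each depth contributes to the output.
    """
    a = normalize_adj(adj)
    hops = [x, a @ x, a @ (a @ x)]                  # multi-level aggregation
    heads = [h @ w for h, w in zip(hops, weights)]  # one projection per head
    gates = np.exp(gate_logits) / np.exp(gate_logits).sum()  # learned mixing
    out = sum(g * h for g, h in zip(gates, heads))
    return np.maximum(out, 0.0)                     # ReLU

# Toy usage: a 4-node path graph with 5-dim node features.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = rng.standard_normal((4, 5))
weights = [rng.standard_normal((5, 3)) for _ in range(3)]
out = adaptive_aggregation_layer(x, adj, weights, np.zeros(3))
```

With the gate logits trained jointly with the head weights, a node dominated by noisy neighbors can down-weight its deeper hops, which is the intuition behind using the same module for both label denoising and few-shot classification.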
Related papers
- Erasing the Bias: Fine-Tuning Foundation Models for Semi-Supervised Learning [4.137391543972184]
Semi-supervised learning (SSL) has witnessed remarkable progress, resulting in numerous method variations.
In this paper, we present a novel SSL approach named FineSSL that significantly addresses this limitation by adapting pre-trained foundation models.
We demonstrate that FineSSL sets a new state of the art for SSL on multiple benchmark datasets, reduces the training cost by over six times, and can seamlessly integrate various fine-tuning and modern SSL algorithms.
arXiv Detail & Related papers (2024-05-20T03:33:12Z) - FSL-Rectifier: Rectify Outliers in Few-Shot Learning via Test-Time Augmentation [7.477118370563593]
Few-shot learning (FSL) commonly requires a model to identify images (queries) that belong to classes unseen during training.
We generate additional test-class samples by combining original samples with suitable train-class samples via a generative image combiner.
We obtain averaged features via an augmentor, yielding more typical representations through averaging.
arXiv Detail & Related papers (2024-02-28T12:37:30Z) - GenSelfDiff-HIS: Generative Self-Supervision Using Diffusion for Histopathological Image Segmentation [5.049466204159458]
Self-supervised learning (SSL) is an alternative paradigm that provides some respite by constructing models utilizing only the unannotated data.
In this paper, we propose an SSL approach for segmenting histopathological images via generative diffusion models.
Our method is based on the observation that diffusion models effectively solve an image-to-image translation task akin to a segmentation task.
arXiv Detail & Related papers (2023-09-04T09:49:24Z) - ESPT: A Self-Supervised Episodic Spatial Pretext Task for Improving
Few-Shot Learning [16.859375666701]
We propose to augment the few-shot learning objective with a novel self-supervised Episodic Spatial Pretext Task (ESPT).
Our ESPT objective is defined as maximizing the local spatial relationship consistency between the original episode and the transformed one.
Our ESPT method achieves new state-of-the-art performance for few-shot image classification on three mainstay benchmark datasets.
arXiv Detail & Related papers (2023-04-26T04:52:08Z) - Self Supervised Learning for Few Shot Hyperspectral Image Classification [57.2348804884321]
We propose to leverage Self Supervised Learning (SSL) for HSI classification.
We show that by pre-training an encoder on unlabeled pixels using Barlow Twins, a state-of-the-art SSL algorithm, we can obtain accurate models with a handful of labels.
arXiv Detail & Related papers (2022-06-24T07:21:53Z) - DATA: Domain-Aware and Task-Aware Pre-training [94.62676913928831]
We present DATA, a simple yet effective NAS approach specialized for self-supervised learning (SSL).
Our method achieves promising results across a wide range of computation costs on downstream tasks, including image classification, object detection and semantic segmentation.
arXiv Detail & Related papers (2022-03-17T02:38:49Z) - Wave-SAN: Wavelet based Style Augmentation Network for Cross-Domain
Few-Shot Learning [95.78635058475439]
Cross-domain few-shot learning (CD-FSL) aims at transferring knowledge from general natural images to novel domain-specific target categories.
This paper studies the problem of CD-FSL by spanning the style distributions of the source dataset.
To make our model robust to visual styles, the source images are augmented by swapping the styles of their low-frequency components with each other.
arXiv Detail & Related papers (2022-03-15T05:36:41Z) - Self-Supervised Learning of Graph Neural Networks: A Unified Review [50.71341657322391]
Self-supervised learning is emerging as a new paradigm for making use of large numbers of unlabeled samples.
We provide a unified review of different ways of training graph neural networks (GNNs) using SSL.
Our treatment of SSL methods for GNNs sheds light on the similarities and differences of various methods, setting the stage for developing new methods and algorithms.
arXiv Detail & Related papers (2021-02-22T03:43:45Z) - Remote Sensing Image Scene Classification with Self-Supervised Paradigm
under Limited Labeled Samples [11.025191332244919]
We introduce a new self-supervised learning (SSL) mechanism to obtain a high-performance pre-training model for remote sensing image (RSI) scene classification from large unlabeled data.
Experiments on three commonly used RSI scene classification datasets demonstrated that this new learning paradigm outperforms the traditional dominant ImageNet pre-trained model.
The insights distilled from our studies can help to foster the development of SSL in the remote sensing community.
arXiv Detail & Related papers (2020-10-02T09:27:19Z) - TAFSSL: Task-Adaptive Feature Sub-Space Learning for few-shot
classification [50.358839666165764]
We show that the Task-Adaptive Feature Sub-Space Learning (TAFSSL) can significantly boost the performance in Few-Shot Learning scenarios.
Specifically, we show that on the challenging miniImageNet and tieredImageNet benchmarks, TAFSSL can improve the current state-of-the-art in both transductive and semi-supervised FSL settings by more than 5%.
arXiv Detail & Related papers (2020-03-14T16:59:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.