Adaptive Anchor Label Propagation for Transductive Few-Shot Learning
- URL: http://arxiv.org/abs/2310.19996v1
- Date: Mon, 30 Oct 2023 20:29:31 GMT
- Title: Adaptive Anchor Label Propagation for Transductive Few-Shot Learning
- Authors: Michalis Lazarou, Yannis Avrithis, Guangyu Ren, Tania Stathaki
- Abstract summary: Few-shot learning addresses the issue of classifying images using limited labeled data.
We propose a novel algorithm that adapts the feature embeddings of the labeled data by minimizing a differentiable loss function.
Our algorithm outperforms the standard label propagation algorithm by as much as 7% and 2% in the 1-shot and 5-shot settings respectively.
- Score: 18.29463308334406
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot learning addresses the issue of classifying images using limited
labeled data. Exploiting unlabeled data through the use of transductive
inference methods such as label propagation has been shown to improve the
performance of few-shot learning significantly. Label propagation infers
pseudo-labels for unlabeled data by utilizing a constructed graph that exploits
the underlying manifold structure of the data. However, a limitation of the
existing label propagation approaches is that the positions of all data points
are fixed and might be sub-optimal, limiting how effective the algorithm can be.
In this work, we propose a novel algorithm that adapts the feature
embeddings of the labeled data by minimizing a differentiable loss function,
optimizing their positions in the manifold in the process. Our novel algorithm,
Adaptive Anchor Label Propagation (A2LP), outperforms the standard label
propagation algorithm by as much as 7% and 2% in the 1-shot and 5-shot settings,
respectively. We provide experimental results highlighting the merits of our
algorithm on four widely used few-shot benchmark datasets, namely miniImageNet,
tieredImageNet, CUB and CIFAR-FS and two commonly used backbones, ResNet12 and
WideResNet-28-10. The source code can be found at
https://github.com/MichalisLazarou/A2LP.
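The abstract describes two ingredients: standard label propagation over a graph built from the feature embeddings, and a loop that treats the labeled embeddings as free parameters (anchors) and moves them by gradient descent on a differentiable loss. The PyTorch sketch below illustrates that idea under stated assumptions; it is not the authors' implementation (see the linked repository for that). The Gaussian k-NN graph, the cross-entropy-on-support objective, and all hyperparameter values are illustrative choices.

```python
import torch
import torch.nn.functional as F

def label_propagation(feats, y_onehot, alpha=0.99, k=20, sigma=1.0):
    """Standard label propagation (Zhou et al.) on a Gaussian k-NN graph.

    feats:    (N, d) embeddings, support (labeled) points first.
    y_onehot: (N, C) one-hot rows for the labeled points, zeros elsewhere.
    Returns the propagated class scores Z = (I - alpha * S)^{-1} Y.
    """
    n = feats.shape[0]
    k = min(k, n - 1)
    # Gaussian affinities with zeroed self-similarity
    w = torch.exp(-torch.cdist(feats, feats) ** 2 / (2 * sigma ** 2))
    w = w - torch.diag_embed(torch.diagonal(w))
    # keep the k strongest neighbours of each node, then symmetrize
    _, idx = w.topk(k, dim=1)
    mask = torch.zeros_like(w).scatter_(1, idx, 1.0)
    w = w * mask
    w = (w + w.T) / 2
    d = w.sum(dim=1).clamp_min(1e-8)
    s = w / torch.sqrt(d[:, None] * d[None, :])  # D^{-1/2} W D^{-1/2}
    return torch.linalg.solve(torch.eye(n) - alpha * s, y_onehot)

def adapt_anchors(support, support_labels, query, n_classes, steps=20, lr=0.1):
    """Treat the labeled embeddings as learnable anchors and move them so
    that propagation classifies the support set itself correctly; this
    cross-entropy objective is a hypothetical stand-in for the paper's
    differentiable loss, not the loss used in A2LP itself."""
    anchors = support.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([anchors], lr=lr)
    n_s = support.shape[0]
    y = torch.zeros(n_s + query.shape[0], n_classes)
    y[torch.arange(n_s), support_labels] = 1.0
    for _ in range(steps):
        feats = torch.cat([anchors, query], dim=0)
        z = label_propagation(feats, y)
        loss = F.cross_entropy(z[:n_s], support_labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return anchors.detach()
```

After adaptation, one more propagation pass over the concatenated anchors and query features, followed by an argmax over the query rows of Z, gives the transductive predictions; in a 5-way 1-shot episode, support would be a (5, d) tensor and query, for example, (75, d).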
Related papers
- Learning with Noisy Labels: Interconnection of Two Expectation-Maximizations [41.65589788264123]
Labor-intensive labeling becomes a bottleneck in developing computer vision algorithms based on deep learning.
We address the learning with noisy labels (LNL) problem, which is formalized as the task of finding a structured manifold in the midst of noisy data.
Our algorithm achieves state-of-the-art performance in multiple standard benchmarks with substantial margins under various types of label noise.
arXiv Detail & Related papers (2024-01-09T07:22:30Z)
- All Points Matter: Entropy-Regularized Distribution Alignment for Weakly-supervised 3D Segmentation [67.30502812804271]
Pseudo-labels are widely employed in weakly supervised 3D segmentation tasks where only sparse ground-truth labels are available for learning.
We propose a novel learning strategy to regularize the generated pseudo-labels and effectively narrow the gaps between pseudo-labels and model predictions; a generic sketch of this pseudo-label recipe appears after this list.
arXiv Detail & Related papers (2023-05-25T08:19:31Z)
- PointMatch: A Consistency Training Framework for Weakly Supervised Semantic Segmentation of 3D Point Clouds [117.77841399002666]
We propose a novel framework, PointMatch, that exploits both data and labels by applying consistency regularization to sufficiently probe the information in the data itself.
The proposed PointMatch achieves state-of-the-art performance under various weakly-supervised schemes on both the ScanNet-v2 and S3DIS datasets.
arXiv Detail & Related papers (2022-02-22T07:26:31Z)
- SparseDet: Improving Sparsely Annotated Object Detection with Pseudo-positive Mining [76.95808270536318]
We propose an end-to-end system that learns to separate proposals into labeled and unlabeled regions using Pseudo-positive mining.
While the labeled regions are processed as usual, self-supervised learning is used to process the unlabeled regions.
We conduct exhaustive experiments on five splits on the PASCAL-VOC and COCO datasets achieving state-of-the-art performance.
arXiv Detail & Related papers (2022-01-12T18:57:04Z)
- Weakly Supervised Change Detection Using Guided Anisotropic Diffusion [97.43170678509478]
We propose original ideas that help us to leverage such datasets in the context of change detection.
First, we propose the guided anisotropic diffusion (GAD) algorithm, which improves semantic segmentation results.
We then show its potential in two weakly-supervised learning strategies tailored for change detection.
arXiv Detail & Related papers (2021-12-31T10:03:47Z)
- SimPLE: Similar Pseudo Label Exploitation for Semi-Supervised Classification [24.386165255835063]
A common situation in classification tasks is that a large amount of data is available for training, but only a small portion of it has class labels.
The goal of semi-supervised training, in this context, is to improve classification accuracy by leveraging information from the large amount of unlabeled data.
We propose a novel unsupervised objective that focuses on the less studied relationship between high-confidence unlabeled data points that are similar to each other.
Our proposed SimPLE algorithm shows significant performance gains over previous algorithms on CIFAR-100 and Mini-ImageNet, and is on par with state-of-the-art methods.
arXiv Detail & Related papers (2021-03-30T23:48:06Z)
- How to distribute data across tasks for meta-learning? [59.608652082495624]
We show that the optimal number of data points per task depends on the budget, but it converges to a unique constant value for large budgets.
Our results suggest a simple and efficient procedure for data collection.
arXiv Detail & Related papers (2021-03-15T15:38:47Z)
- Iterative label cleaning for transductive and semi-supervised few-shot learning [16.627512688664513]
Few-shot learning amounts to learning representations and acquiring knowledge such that novel tasks may be solved with both supervision and data being limited.
We introduce a new algorithm that leverages the manifold structure of the labeled and unlabeled data distribution to predict pseudo-labels.
Our solution surpasses or matches the state of the art results on four benchmark datasets.
arXiv Detail & Related papers (2020-12-14T21:54:11Z)
- Reliable Label Bootstrapping for Semi-Supervised Learning [19.841733658911767]
ReLaB is an unsupervised preprocessing algorithm which improves the performance of semi-supervised algorithms in extremely low supervision settings.
We show that the selection of the network architecture and the self-supervised algorithm are important factors to achieve successful label propagation.
We reach average error rates of 22.34 with 1 random labeled sample per class on CIFAR-10 and lower this error to 8.46 when the labeled sample in each class is highly representative.
arXiv Detail & Related papers (2020-07-23T08:51:37Z)
- Sequential Graph Convolutional Network for Active Learning [53.99104862192055]
We propose a novel pool-based Active Learning framework constructed on a sequential Graph Convolutional Network (GCN).
With a small number of randomly sampled images as seed labelled examples, we learn the parameters of the graph to distinguish labelled vs unlabelled nodes.
We exploit these characteristics of GCN to select the unlabelled examples which are sufficiently different from labelled ones.
arXiv Detail & Related papers (2020-06-18T00:55:10Z)
- Instance Credibility Inference for Few-Shot Learning [45.577880041135785]
Few-shot learning aims to recognize new objects with extremely limited training data for each category.
This paper presents a simple statistical approach, dubbed Instance Credibility Inference (ICI), to exploit the distribution support of unlabeled instances for few-shot learning.
Our simple approach can establish new state-of-the-art results on four widely used few-shot learning benchmark datasets.
arXiv Detail & Related papers (2020-03-26T12:01:15Z)
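Several entries above (the entropy-regularized distribution alignment work in particular, and SimPLE) revolve around the same generic recipe: derive hard pseudo-labels from confident predictions on unlabeled data, then add a regularizer that pulls the model's predictive distribution toward those pseudo-labels and keeps it sharp. The sketch below is a hypothetical, minimal rendering of that recipe, not the loss of any specific paper above; the confidence threshold and entropy weight are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def pseudo_label_loss(logits: torch.Tensor,
                      conf_thresh: float = 0.9,
                      ent_weight: float = 0.1) -> torch.Tensor:
    """Generic pseudo-label objective on an unlabeled batch:
    cross-entropy against confident hard pseudo-labels plus an
    entropy penalty that keeps all predictions sharp.
    logits: (N, C) model outputs for N unlabeled samples."""
    probs = logits.softmax(dim=1)
    conf, pseudo = probs.max(dim=1)   # confidence and hard pseudo-label
    mask = conf > conf_thresh         # train only on confident samples
    ce = logits.new_zeros(())
    if mask.any():
        ce = F.cross_entropy(logits[mask], pseudo[mask])
    # mean entropy of all predictions; minimizing it sharpens them
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    return ce + ent_weight * entropy
```

In practice, methods of this family differ in how the pseudo-labels are generated (for example, from a weakly augmented view or an exponential-moving-average teacher) and in how the alignment term is weighted and scheduled; the sketch collapses those choices into a single batch of logits.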
This list is automatically generated from the titles and abstracts of the papers in this site.