Semantically Proportional Patchmix for Few-Shot Learning
- URL: http://arxiv.org/abs/2202.08647v1
- Date: Thu, 17 Feb 2022 13:24:33 GMT
- Title: Semantically Proportional Patchmix for Few-Shot Learning
- Authors: Jingquan Wang, Jing Xu, Yu Pan, Zenglin Xu
- Abstract summary: Few-shot learning aims to classify unseen classes with only a limited number of labeled data.
Recent works have demonstrated that training models with a simple transfer learning strategy can achieve competitive results in few-shot classification.
We propose SePPMix, in which patches are cut and pasted among training images and the ground truth labels are mixed proportionally to the semantic information of the patches.
- Score: 16.24173112047382
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot learning aims to classify unseen classes with only a limited number
of labeled data. Recent works have demonstrated that training models with a
simple transfer learning strategy can achieve competitive results in few-shot
classification. Although these models excel at distinguishing training data,
they do not generalize well to unseen data, probably due to insufficient
feature representations at evaluation time. To tackle this issue, we propose
Semantically Proportional Patchmix (SePPMix), in which patches are cut and
pasted among training images and the ground truth labels are mixed
proportionally to the semantic information of the patches. In this way, we can
improve the generalization ability of the model through a regional dropout
effect without introducing severe label noise. To learn more robust
representations, we further apply rotation transformations to the mixed images
and predict the rotations as a rule-based regularizer. Extensive experiments on
prevalent few-shot benchmarks demonstrate the effectiveness of the proposed method.
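The paper's reference code is not reproduced here, so the following is a minimal sketch of the two ingredients described in the abstract: patch cut-and-paste with labels mixed in proportion to the semantic mass of the exchanged regions, plus an auxiliary rotation-prediction target. Everything is an assumption for illustration: the per-image semantic map (e.g. from a saliency or class-activation model) is taken as given, the patch location is shared across the batch for brevity, and function names like `seppmix_batch` are illustrative rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F


def seppmix_batch(images, labels, semantic_maps, num_classes, patch=32):
    """Illustrative patch-mix: paste a random patch from a shuffled partner image
    into each image, and mix the one-hot labels in proportion to the semantic
    mass (e.g. saliency response) of the pasted region rather than its raw area.

    images: (B, C, H, W), labels: (B,), semantic_maps: (B, H, W) non-negative.
    """
    b, _, h, w = images.shape
    perm = torch.randperm(b)                       # partner image for each sample
    y = F.one_hot(labels, num_classes).float()

    # Random patch location (shared across the batch, for simplicity).
    top = torch.randint(0, h - patch + 1, (1,)).item()
    left = torch.randint(0, w - patch + 1, (1,)).item()

    mixed = images.clone()
    mixed[:, :, top:top + patch, left:left + patch] = \
        images[perm, :, top:top + patch, left:left + patch]

    # Semantic proportion: fraction of each image's semantic mass inside the patch.
    total = semantic_maps.flatten(1).sum(dim=1).clamp_min(1e-8)
    patch_mass = semantic_maps[:, top:top + patch, left:left + patch].flatten(1).sum(dim=1)
    lam = (patch_mass / total)[perm].unsqueeze(1)  # semantics contributed by the partner

    mixed_labels = (1.0 - lam) * y + lam * y[perm]
    return mixed, mixed_labels


def rotate_and_label(images):
    """Auxiliary rotation task: rotate each image by 0/90/180/270 degrees and
    return the rotation index as a self-supervised target."""
    rot = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                           for img, k in zip(images, rot)])
    return rotated, rot
```

One way to use this during training would be to supervise the classification head with a soft cross-entropy against `mixed_labels`, while a small auxiliary head predicts `rot` on the rotated mixed images, and to sum the two losses; the exact weighting is not specified here and would follow the paper.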
Related papers
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach [102.0769560460338]
We develop a simple logits retargeting approach (LORT) that requires no prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
- Exploring Data Augmentations on Self-/Semi-/Fully-Supervised Pre-trained Models [24.376036129920948]
We investigate how data augmentation affects the performance of pre-trained vision models.
We apply four types of data augmentation: Random Erasing, CutOut, CutMix, and MixUp.
We report their performance on vision tasks such as image classification, object detection, instance segmentation, and semantic segmentation.
arXiv Detail & Related papers (2023-10-28T23:46:31Z) - SoftMatch: Addressing the Quantity-Quality Trade-off in Semi-supervised
Learning [101.86916775218403]
This paper revisits the popular pseudo-labeling methods via a unified sample weighting formulation.
We propose SoftMatch to overcome the trade-off by maintaining both high quantity and high quality of pseudo-labels during training.
In experiments, SoftMatch shows substantial improvements across a wide variety of benchmarks, including image, text, and imbalanced classification.
arXiv Detail & Related papers (2023-01-26T03:53:25Z)
- Supervised Contrastive Learning on Blended Images for Long-tailed Recognition [32.876647081080655]
Real-world data often have a long-tailed distribution, where the number of samples per class varies widely across training classes.
In this paper, we propose a novel long-tailed recognition method to balance the latent feature space.
arXiv Detail & Related papers (2022-11-22T01:19:00Z)
- Agree to Disagree: Diversity through Disagreement for Better Transferability [54.308327969778155]
We propose D-BAT (Diversity-By-disAgreement Training), which enforces agreement among the models on the training data while encouraging them to disagree on out-of-distribution data.
We show how D-BAT naturally emerges from the notion of generalized discrepancy.
arXiv Detail & Related papers (2022-02-09T12:03:02Z)
- FairIF: Boosting Fairness in Deep Learning via Influence Functions with Validation Set Sensitive Attributes [51.02407217197623]
We propose a two-stage training algorithm named FAIRIF.
It minimizes the loss over a reweighted data set, where the sample weights are computed via influence functions using the sensitive attributes of a validation set.
We show that FAIRIF yields models with better fairness-utility trade-offs against various types of bias.
arXiv Detail & Related papers (2022-01-15T05:14:48Z)
- Learning from Noisy Labels for Entity-Centric Information Extraction [17.50856935207308]
We propose a simple co-regularization framework for entity-centric information extraction.
The models in this framework are jointly optimized with the task-specific loss and regularized to generate similar predictions.
In the end, we can take any of the trained models for inference.
arXiv Detail & Related papers (2021-04-17T22:49:12Z)
- ReMix: Towards Image-to-Image Translation with Limited Data [154.71724970593036]
We propose a data augmentation method (ReMix) to tackle the limited-data issue in image-to-image translation.
We interpolate training samples at the feature level and propose a novel content loss based on the perceptual relations among samples.
The proposed approach effectively reduces the ambiguity of generation and renders content-preserving results.
arXiv Detail & Related papers (2021-03-31T06:24:10Z)
- SaliencyMix: A Saliency Guided Data Augmentation Strategy for Better Regularization [9.126576583256506]
We propose SaliencyMix to improve the generalization ability of deep learning models.
SaliencyMix carefully selects a representative image patch with the help of a saliency map and mixes this indicative patch with the target image; a minimal sketch of this saliency-guided mixing appears after this list.
SaliencyMix achieves the best known top-1 error of 21.26% and 20.09% for ResNet-50 and ResNet-101 architectures on ImageNet classification.
arXiv Detail & Related papers (2020-06-02T17:18:34Z)
- Un-Mix: Rethinking Image Mixtures for Unsupervised Visual Representation Learning [108.999497144296]
Recently advanced unsupervised learning approaches use the siamese-like framework to compare two "views" from the same image for learning representations.
This work aims to bring the notion of distance in label space into unsupervised learning, making the model aware of the soft degree of similarity between positive or negative pairs.
Despite its conceptual simplicity, we show empirically that the proposed solution, Unsupervised image mixtures (Un-Mix), can learn subtler, more robust, and better-generalized representations from the transformed input and the corresponding new label space.
arXiv Detail & Related papers (2020-03-11T17:59:04Z)
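As a companion to the SePPMix sketch above, the following illustrates the saliency-guided patch selection described in the SaliencyMix entry: the pasted patch is centred on the most salient location of the source image rather than chosen at random. This is a sketch under stated assumptions, not the authors' code: the saliency map is assumed precomputed by any off-the-shelf detector, the patch size is fixed, the label-mixing ratio is the simple area fraction, and all names are illustrative.

```python
import torch


def saliency_guided_mix(target, source, saliency, patch=32):
    """Paste the most salient patch of `source` into `target` at the same
    location; return the mixed image and the area-based mixing ratio.

    target, source: (C, H, W) tensors; saliency: (H, W) non-negative map.
    """
    _, h, w = source.shape
    # Centre the patch on the peak of the saliency map, clamped to image bounds.
    peak = torch.argmax(saliency)
    cy, cx = int(peak // w), int(peak % w)
    top = min(max(cy - patch // 2, 0), h - patch)
    left = min(max(cx - patch // 2, 0), w - patch)

    mixed = target.clone()
    mixed[:, top:top + patch, left:left + patch] = source[:, top:top + patch, left:left + patch]
    lam = (patch * patch) / float(h * w)  # fraction of pixels taken from `source`
    return mixed, lam
```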