MA^3: Model Agnostic Adversarial Augmentation for Few Shot Learning
- URL: http://arxiv.org/abs/2004.05100v1
- Date: Fri, 10 Apr 2020 16:35:49 GMT
- Title: MA^3: Model Agnostic Adversarial Augmentation for Few Shot Learning
- Authors: Rohit Jena, Shirsendu Sukanta Halder, Katia Sycara
- Abstract summary: In this paper, we explore the domain of few-shot learning with a novel augmentation technique.
Our technique is fully differentiable, which enables its extension to diverse datasets and base models.
We obtain an improvement of nearly 4% by adding our augmentation module without changing the network architectures.
- Score: 5.854757988966379
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite recent developments in vision-related problems using deep neural
networks, there remains wide scope for improving how these models generalize to
unseen examples. In this paper, we explore the domain of few-shot learning with a
novel augmentation technique. In contrast to other generative augmentation
techniques, where the distribution over input images is learnt, we propose to
learn the probability distribution over the image transformation parameters,
which is easier and quicker to learn. Our technique is fully differentiable,
which enables its extension to diverse datasets and base models. We evaluate the
proposed method on multiple base networks and two datasets to establish its
robustness and efficiency. We obtain an improvement of nearly 4% by adding our
augmentation module, without making any change to the network architectures. We
also make the code readily available for use by the community.
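The abstract does not spell out the module itself, but the core idea it describes, sampling augmentation parameters from a learned distribution and applying them through a differentiable image transform, can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the Gaussian over a single rotation angle, the `LearnedAugment` module name, and the use of PyTorch's `affine_grid`/`grid_sample` warp are all assumptions made for this example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnedAugment(nn.Module):
    """Minimal sketch: sample a rotation angle from a learned Gaussian and
    apply it with a differentiable affine warp. Hypothetical module, not the
    paper's exact architecture."""

    def __init__(self):
        super().__init__()
        # Learnable distribution over one transformation parameter (rotation).
        self.mu = nn.Parameter(torch.zeros(1))
        self.log_sigma = nn.Parameter(torch.zeros(1))

    def forward(self, x):  # x: (N, C, H, W)
        n = x.size(0)
        # Reparameterization trick keeps sampling differentiable w.r.t. mu/sigma.
        eps = torch.randn(n, device=x.device)
        angle = self.mu + self.log_sigma.exp() * eps  # one angle per image

        cos, sin = torch.cos(angle), torch.sin(angle)
        zeros = torch.zeros_like(angle)
        # Per-image 2x3 affine matrices for a pure rotation, shape (N, 2, 3).
        theta = torch.stack(
            [torch.stack([cos, -sin, zeros], dim=1),
             torch.stack([sin, cos, zeros], dim=1)], dim=1)

        # Differentiable warp: gradients flow back into mu and log_sigma.
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

# Usage: prepend to any base network without changing its architecture.
aug = LearnedAugment()
images = torch.randn(8, 3, 84, 84)  # e.g., miniImageNet-sized inputs
augmented = aug(images)             # same shape, differentiable in aug's parameters
```

Because the module only learns a handful of distribution parameters rather than a generator over whole images, it matches the abstract's claim of being easier and quicker to train, and it can be dropped in front of any base network.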
Related papers
- AugDiff: Diffusion based Feature Augmentation for Multiple Instance Learning in Whole Slide Image [15.180437840817788]
Multiple Instance Learning (MIL), a powerful strategy for weakly supervised learning, can perform various prediction tasks on gigapixel Whole Slide Images (WSIs).
We introduce the Diffusion Model (DM) into MIL for the first time and propose a feature augmentation framework called AugDiff.
We conduct extensive experiments over three distinct cancer datasets, two different feature extractors, and three prevalent MIL algorithms to evaluate the performance of AugDiff.
arXiv Detail & Related papers (2023-03-11T10:36:27Z)
- Effective Data Augmentation With Diffusion Models [65.09758931804478]
We address the lack of diversity in data augmentation with image-to-image transformations parameterized by pre-trained text-to-image diffusion models.
Our method edits images to change their semantics using an off-the-shelf diffusion model, and generalizes to novel visual concepts from a few labelled examples.
We evaluate our approach on few-shot image classification tasks and on a real-world weed recognition task, observing improved accuracy across the tested domains.
arXiv Detail & Related papers (2023-02-07T20:42:28Z)
- Learning to Augment via Implicit Differentiation for Domain Generalization [107.9666735637355]
Domain generalization (DG) aims to overcome domain shift by leveraging multiple source domains to learn a domain-generalizable model.
In this paper, we propose a novel augmentation-based DG approach, dubbed AugLearn.
AugLearn shows effectiveness on three standard DG benchmarks, PACS, Office-Home and Digits-DG.
arXiv Detail & Related papers (2022-10-25T18:51:51Z)
- Activating More Pixels in Image Super-Resolution Transformer [53.87533738125943]
Transformer-based methods have shown impressive performance in low-level vision tasks, such as image super-resolution.
We propose a novel Hybrid Attention Transformer (HAT) to activate more input pixels for better reconstruction.
Our overall method significantly outperforms state-of-the-art methods by more than 1 dB.
arXiv Detail & Related papers (2022-05-09T17:36:58Z)
- Deep invariant networks with differentiable augmentation layers [87.22033101185201]
Methods for learning data augmentation policies require held-out data and are based on bilevel optimization problems.
We show that our approach is easier and faster to train than modern automatic data augmentation techniques.
arXiv Detail & Related papers (2022-02-04T14:12:31Z)
- Image Enhancement via Bilateral Learning [1.4213973379473654]
This paper presents an image enhancement system based on convolutional neural networks.
Our goal is to make effective use of two approaches: convolutional neural networks and the bilateral grid.
The enhancement results produced by our proposed method, which incorporates five different experts, show both quantitative and qualitative improvements.
arXiv Detail & Related papers (2021-12-07T18:30:15Z)
- Flexible Example-based Image Enhancement with Task Adaptive Global Feature Self-Guided Network [162.14579019053804]
We show that our model outperforms the current state of the art in learning a single enhancement mapping.
The model achieves even higher performance on learning multiple mappings simultaneously.
arXiv Detail & Related papers (2020-05-13T22:45:07Z)
- Learning Deformable Image Registration from Optimization: Perspective, Modules, Bilevel Training and Beyond [62.730497582218284]
We develop a new deep learning based framework to optimize a diffeomorphic model via multi-scale propagation.
We conduct two groups of image registration experiments on 3D volume datasets including image-to-atlas registration on brain MRI data and image-to-image registration on liver CT data.
arXiv Detail & Related papers (2020-04-30T03:23:45Z)
- Diversity Helps: Unsupervised Few-shot Learning via Distribution Shift-based Data Augmentation [21.16237189370515]
Few-shot learning aims to learn a new concept when only a few training examples are available.
In this paper, we develop a novel framework, Unsupervised Few-shot Learning via Distribution Shift-based Data Augmentation (ULDA).
In experiments, few-shot models learned with ULDA achieve superior generalization performance.
arXiv Detail & Related papers (2020-04-13T07:41:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.