Unsupervised Embedding Adaptation via Early-Stage Feature Reconstruction for Few-Shot Classification
- URL: http://arxiv.org/abs/2106.11486v1
- Date: Tue, 22 Jun 2021 02:25:01 GMT
- Title: Unsupervised Embedding Adaptation via Early-Stage Feature Reconstruction for Few-Shot Classification
- Authors: Dong Hoon Lee, Sae-Young Chung
- Abstract summary: We propose unsupervised embedding adaptation for the downstream few-shot classification task.
Based on findings that deep neural networks learn to generalize before memorizing, we develop Early-Stage Feature Reconstruction.
- Score: 26.147365494401406
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose unsupervised embedding adaptation for the downstream few-shot
classification task. Based on findings that deep neural networks learn to
generalize before memorizing, we develop Early-Stage Feature Reconstruction
(ESFR) -- a novel adaptation scheme with feature reconstruction and
dimensionality-driven early stopping that finds generalizable features.
Incorporating ESFR consistently improves the performance of baseline methods on
all standard settings, including the recently proposed transductive method.
ESFR used in conjunction with the transductive method further achieves
state-of-the-art performance on mini-ImageNet, tiered-ImageNet, and CUB;
especially with 1.2%~2.0% improvements in accuracy over the previous
best-performing method in the 1-shot setting.
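The abstract describes the mechanism only at a high level: fit a lightweight feature-reconstruction network on the embedded task data and stop early, before memorization sets in, using a dimensionality-driven signal. Below is a minimal sketch in PyTorch, assuming a simple variance-based intrinsic-dimensionality proxy and hypothetical names (ReconNet, estimate_id, adapt); the paper's actual architecture and stopping criterion may differ.

```python
# Hypothetical sketch of ESFR-style adaptation; names and the stopping
# criterion are illustrative, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

def estimate_id(z: torch.Tensor) -> int:
    """Crude intrinsic-dimensionality proxy: number of singular values
    needed to capture 90% of the variance (an assumption; the paper's
    dimensionality-driven criterion may differ)."""
    s = torch.linalg.svdvals(z - z.mean(dim=0))
    ratios = torch.cumsum(s ** 2, dim=0) / torch.sum(s ** 2)
    return int(torch.searchsorted(ratios, torch.tensor(0.9))) + 1

class ReconNet(nn.Module):
    """Small bottleneck network that reconstructs frozen backbone features."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Linear(dim, hidden)
        self.decoder = nn.Linear(hidden, dim)

    def forward(self, x):
        h = torch.relu(self.encoder(x))
        return self.decoder(h), h

def adapt(features: torch.Tensor, steps: int = 200, lr: float = 1e-3):
    """Fit the reconstruction net on the task's unlabeled features and
    stop early once the hidden representation's dimensionality grows,
    taking that as the onset of memorization."""
    net = ReconNet(features.shape[1])
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    best_id, adapted = float("inf"), features
    for _ in range(steps):
        recon, hidden = net(features)
        loss = F.mse_loss(recon, features)
        opt.zero_grad()
        loss.backward()
        opt.step()
        cur_id = estimate_id(hidden.detach())
        if cur_id > best_id:   # dimensionality rising: stop adapting
            break
        best_id, adapted = cur_id, recon.detach()
    return adapted             # adapted embeddings for the classifier
```

A downstream few-shot classifier (prototype-based or transductive) would then operate on the adapted embeddings returned by adapt.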
Related papers
- Enhancing Robustness of Vision-Language Models through Orthogonality Learning and Self-Regularization [77.62516752323207]
We introduce an orthogonal fine-tuning method for efficiently fine-tuning pretrained weights and enabling enhanced robustness and generalization.
A self-regularization strategy is further exploited to maintain stability in the zero-shot generalization of VLMs; the method is dubbed OrthSR.
For the first time, we revisit CLIP and CoOp with our method to effectively improve the model in few-shot image classification scenarios.
arXiv Detail & Related papers (2024-07-11T10:35:53Z)
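As a rough illustration of the orthogonal fine-tuning idea in the entry above, one can freeze the pretrained weights and learn only an orthogonal rotation of them, for example through a Cayley parameterization. This is a hedged toy sketch; OrthogonalAdapter is our name, and OrthSR's exact parameterization and self-regularization term are not reproduced.

```python
# Toy orthogonal fine-tuning: the pretrained weight stays frozen and
# only an orthogonal rotation of it is learned (Cayley parameterization).
import torch
import torch.nn as nn

class OrthogonalAdapter(nn.Module):
    def __init__(self, pretrained_weight: torch.Tensor):
        super().__init__()
        d_out = pretrained_weight.shape[0]
        self.register_buffer("w", pretrained_weight)         # frozen
        self.skew = nn.Parameter(torch.zeros(d_out, d_out))  # trainable

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = self.skew - self.skew.T                 # skew-symmetric part
        eye = torch.eye(a.shape[0], device=a.device)
        q = torch.linalg.solve(eye + a, eye - a)    # orthogonal (Cayley)
        return x @ (q @ self.w).T                   # rotate, then apply
```

Because skew starts at zero, the rotation is the identity at initialization, so training departs smoothly from the pretrained model, which is one common motivation for orthogonal schemes.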
- Adaptive Guidance: Training-free Acceleration of Conditional Diffusion Models [44.58960475893552]
"Adaptive Guidance" (AG) is an efficient variant of computation-Free Guidance (CFG)
AG preserves CFG's image quality while reducing by 25%.
" LinearAG" offers even cheaper inference at the cost of deviating from the baseline model.
arXiv Detail & Related papers (2023-12-19T17:08:48Z) - Differentially private training of residual networks with scale
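For context on the entry above: standard classifier-free guidance runs two denoiser passes per step, and an adaptive variant can skip the unconditional pass whenever guidance is judged unnecessary, which is where the compute saving comes from. A hedged sketch, with should_guide standing in for whatever gating rule the paper actually uses:

```python
# Hedged sketch of adaptive classifier-free guidance; `should_guide`
# and all names are assumptions, not the paper's gating criterion.
def cfg_step(denoiser, x, t, cond, scale: float, should_guide: bool):
    """One denoising step. Plain CFG needs two forward passes
    (conditional + unconditional); an adaptive variant skips the
    unconditional pass on steps where guidance is judged unnecessary."""
    eps_cond = denoiser(x, t, cond)
    if not should_guide:
        return eps_cond                      # single pass: compute saved
    eps_uncond = denoiser(x, t, None)
    return eps_uncond + scale * (eps_cond - eps_uncond)
```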
- Differentially private training of residual networks with scale normalisation [64.60453677988517]
We investigate the optimal choice of replacement layer for Batch Normalisation (BN) in residual networks (ResNets).
We study the phenomenon of scale mixing in residual blocks, whereby the activations on the two branches are scaled differently.
arXiv Detail & Related papers (2022-03-01T09:56:55Z)
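To make the scale-mixing phenomenon above concrete: in a residual block y = x + f(x), the skip and residual branches can carry very different activation scales. The sketch below measures this and shows one possible normalised sum; both functions are our assumptions, not necessarily the paper's scheme.

```python
# Toy illustration of "scale mixing" in y = x + f(x); names and the
# exact normalisation are illustrative assumptions.
import torch

def rms(t: torch.Tensor) -> torch.Tensor:
    return t.pow(2).mean().sqrt()

def branch_scales(x: torch.Tensor, fx: torch.Tensor) -> dict:
    """Measure how differently the skip and residual branches are scaled."""
    return {"skip": rms(x).item(), "residual": rms(fx).item()}

def scale_normalised_sum(x: torch.Tensor, fx: torch.Tensor,
                         eps: float = 1e-6) -> torch.Tensor:
    """Rescale each branch to unit RMS before summation so neither
    branch dominates the merged activation scale."""
    return x / rms(x).clamp(min=eps) + fx / rms(fx).clamp(min=eps)
```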
- Low-light Image Enhancement by Retinex Based Algorithm Unrolling and Adjustment [50.13230641857892]
We propose a new deep learning framework for the low-light image enhancement (LIE) problem.
The proposed framework contains a decomposition network inspired by algorithm unrolling, and adjustment networks considering both global brightness and local brightness sensitivity.
Experiments on a series of typical LIE datasets demonstrate the effectiveness of the proposed method, both quantitatively and visually, as compared with existing methods.
arXiv Detail & Related papers (2022-02-12T03:59:38Z)
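For the Retinex-based entry above, the classical decomposition behind the unrolled networks factors an image into reflectance and illumination, I = R * L, and brightens by adjusting illumination only. A minimal closed-form sketch (the paper's decomposition and adjustment are learned networks, not this formula):

```python
# Classical Retinex-style decomposition and brightness adjustment;
# illustrative only -- the paper's networks are learned via unrolling.
import torch

def retinex_decompose(img: torch.Tensor, eps: float = 1e-4):
    """Split a (C, H, W) image into reflectance and illumination using
    the per-pixel channel maximum as the illumination estimate."""
    illum = img.max(dim=0, keepdim=True).values.clamp(min=eps)
    reflect = img / illum
    return reflect, illum

def adjust_brightness(reflect, illum, gamma: float = 0.4):
    """Brighten by gamma-correcting only the illumination map, leaving
    reflectance (scene content) untouched."""
    return (reflect * illum.pow(gamma)).clamp(0.0, 1.0)
```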
- NerfingMVS: Guided Optimization of Neural Radiance Fields for Indoor Multi-view Stereo [97.07453889070574]
We present a new multi-view depth estimation method that utilizes both conventional SfM reconstruction and learning-based priors.
We show that our proposed framework significantly outperforms state-of-the-art methods on indoor scenes.
arXiv Detail & Related papers (2021-09-02T17:54:31Z)
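One plausible reading of the guided optimization above is a standard photometric NeRF loss augmented with a confidence-weighted depth prior from SfM; the form below is an assumption, not the paper's exact objective.

```python
# Assumed form of a guided-optimization objective: photometric loss
# plus a term pulling rendered depth toward SfM depth priors.
import torch

def guided_nerf_loss(pred_rgb, gt_rgb, pred_depth, sfm_depth,
                     confidence, lam: float = 0.1):
    color_loss = torch.mean((pred_rgb - gt_rgb) ** 2)
    depth_prior = torch.mean(confidence * (pred_depth - sfm_depth) ** 2)
    return color_loss + lam * depth_prior
```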
- Semantic Perturbations with Normalizing Flows for Improved Generalization [62.998818375912506]
We show that perturbations in the latent space can be used to define fully unsupervised data augmentations.
We find that latent adversarial perturbations that adapt to the classifier throughout its training are most effective.
arXiv Detail & Related papers (2021-08-18T03:20:00Z)
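The idea above can be sketched concretely: encode a sample with an invertible model, perturb it in latent space, and decode the perturbation back as an augmented sample. AffineFlow below is a toy stand-in for a real normalizing flow (e.g., RealNVP or Glow), and the paper's classifier-adaptive adversarial perturbations are richer than plain Gaussian noise.

```python
# Conceptual sketch of latent-space augmentation with an invertible model.
import torch
import torch.nn as nn

class AffineFlow(nn.Module):
    """Trivially invertible per-dimension affine map (toy flow)."""
    def __init__(self, dim: int):
        super().__init__()
        self.log_scale = nn.Parameter(torch.zeros(dim))
        self.shift = nn.Parameter(torch.zeros(dim))

    def forward(self, x):                        # data -> latent
        return (x - self.shift) * torch.exp(-self.log_scale)

    def inverse(self, z):                        # latent -> data
        return z * torch.exp(self.log_scale) + self.shift

def latent_augment(flow: AffineFlow, x: torch.Tensor, sigma: float = 0.1):
    z = flow(x)                                  # encode
    z = z + sigma * torch.randn_like(z)          # perturb in latent space
    return flow.inverse(z)                       # decode augmented sample
```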
- Initialization and Regularization of Factorized Neural Layers [23.875225732697142]
We show how to initialize and regularize factorized layers in deep nets.
We show how these schemes lead to improved performance on both translation and unsupervised pre-training.
arXiv Detail & Related papers (2021-05-03T17:28:07Z)
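A common initialization scheme for factorized layers, in the spirit of the entry above, takes the factors from the SVD of a standard dense initialization so that the product starts as the best rank-r approximation of it. A hedged sketch (the paper's complete recipe, including its regularizer, is not reproduced here):

```python
# Hedged sketch of spectral initialization for a rank-r factorized
# linear layer; the split of singular values across factors is ours.
import torch
import torch.nn as nn

class FactorizedLinear(nn.Module):
    def __init__(self, d_in: int, d_out: int, rank: int):
        super().__init__()
        dense = torch.empty(d_out, d_in)
        nn.init.kaiming_uniform_(dense)          # usual dense init
        u, s, vh = torch.linalg.svd(dense, full_matrices=False)
        root_s = s[:rank].sqrt()                 # split sqrt(s) per factor
        self.u = nn.Parameter(u[:, :rank] * root_s)                # (d_out, r)
        self.v = nn.Parameter(vh[:rank, :] * root_s.unsqueeze(1))  # (r, d_in)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ (self.u @ self.v).T
```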
- Iterative Network for Image Super-Resolution [69.07361550998318]
Single image super-resolution (SISR) has been greatly revitalized by the recent development of convolutional neural networks (CNNs).
This paper provides new insight into the conventional SISR algorithm and proposes a substantially different approach relying on iterative optimization.
A novel iterative super-resolution network (ISRN) is proposed on top of the iterative optimization.
arXiv Detail & Related papers (2020-05-20T11:11:47Z)
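The iterative-optimization view above resembles classical back-projection: alternately measure the consistency error in low-resolution space and correct the high-resolution estimate. A schematic sketch with assumed components (upscale, downscale, refiner), not ISRN's actual modules:

```python
# Illustrative iterative-refinement loop in the spirit of
# optimization-based super-resolution (back-projection flavored).
def iterative_sr(lr_img, refiner, upscale, downscale, steps: int = 5):
    sr = upscale(lr_img)                      # initial HR estimate
    for _ in range(steps):
        residual = lr_img - downscale(sr)     # consistency error in LR space
        sr = sr + refiner(upscale(residual))  # correct the HR estimate
    return sr
```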
- Weighted Aggregating Stochastic Gradient Descent for Parallel Deep Learning [8.366415386275557]
The solution involves a reformulation of the objective function for optimization in neural network models.
We introduce a decentralized weighted aggregating scheme based on the performance of local workers.
To validate the new method, we benchmark our schemes against several popular algorithms.
arXiv Detail & Related papers (2020-04-07T23:38:29Z)
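A minimal sketch of the decentralized weighted aggregation idea above, assuming a softmax-over-negative-loss weighting rule (our assumption; the paper's scheme may weight workers differently):

```python
# Hedged sketch of performance-weighted parameter aggregation for
# decentralized parallel training.
import torch

def weighted_aggregate(worker_params, worker_losses):
    """Combine workers' parameter tensors, giving lower-loss (better
    performing) workers larger weights via a softmax over -loss.

    worker_params: list (one per worker) of lists of tensors.
    worker_losses: list of floats, one local loss per worker.
    """
    weights = torch.softmax(-torch.tensor(worker_losses), dim=0)
    n_tensors = len(worker_params[0])
    return [
        sum(w * params[i] for w, params in zip(weights, worker_params))
        for i in range(n_tensors)
    ]
```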
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.