Task Adaptive Feature Transformation for One-Shot Learning
- URL: http://arxiv.org/abs/2304.06832v1
- Date: Thu, 13 Apr 2023 21:52:51 GMT
- Title: Task Adaptive Feature Transformation for One-Shot Learning
- Authors: Imtiaz Masud Ziko, Freddy Lecue and Ismail Ben Ayed
- Abstract summary: We introduce a simple non-linear embedding adaptation layer, which is fine-tuned on top of fixed pre-trained features for one-shot tasks.
We show consistent improvements over a variety of one-shot benchmarks, outperforming recent state-of-the-art methods.
- Score: 21.20683465652298
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a simple non-linear embedding adaptation layer, which is
fine-tuned on top of fixed pre-trained features for one-shot tasks, significantly
improving transductive entropy-based inference in low-shot regimes. Our
norm-induced transformation can be understood as a re-parametrization of the
feature space that disentangles the representations of different classes in a
task-specific manner. It focuses on the relevant feature dimensions while
suppressing the effects of irrelevant dimensions that may cause overfitting in
a one-shot setting. We also provide an interpretation of our proposed feature
transformation in the basic case of few-shot inference with K-means clustering.
Furthermore, we give an interesting bound-optimization link between K-means and
entropy minimization. This emphasizes why our feature transformation is useful
in the context of entropy minimization. We report comprehensive experiments,
which show consistent improvements over a variety of one-shot benchmarks,
outperforming recent state-of-the-art methods.
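The paper's implementation is not reproduced here, but the core recipe (a small non-linear transform fine-tuned per task on frozen features, with a transductive entropy term over the query set) lends itself to a short sketch. The following is a minimal, hypothetical PyTorch version under stated assumptions: a learned per-dimension scaling followed by a ReLU stands in for the norm-induced transformation, class prototypes play the role of K-means centroids, and all function names and hyperparameters (adapt_and_infer, scale, lam) are illustrative rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def adapt_and_infer(support_feats, support_labels, query_feats,
                    num_classes, steps=50, lr=0.1, lam=0.1):
    """Fine-tune a tiny per-task transform on frozen features, then
    classify queries via entropy-regularized transductive inference."""
    dim = support_feats.shape[1]
    # A per-dimension scaling followed by a ReLU stands in for the
    # non-linear, norm-induced re-parametrization (an assumption).
    scale = torch.ones(dim, requires_grad=True)
    opt = torch.optim.Adam([scale], lr=lr)

    for _ in range(steps):
        z_s = F.relu(support_feats * scale)
        z_q = F.relu(query_feats * scale)
        # Class prototypes from the labeled one-shot support set.
        protos = torch.stack([z_s[support_labels == c].mean(0)
                              for c in range(num_classes)])
        # Soft assignments from negative squared distances to prototypes.
        logits_s = -torch.cdist(z_s, protos) ** 2
        logits_q = -torch.cdist(z_q, protos) ** 2
        ce = F.cross_entropy(logits_s, support_labels)
        # Transductive term: Shannon entropy of the query soft assignments.
        p_q = logits_q.softmax(dim=1)
        ent = -(p_q * p_q.clamp_min(1e-12).log()).sum(dim=1).mean()
        loss = ce + lam * ent
        opt.zero_grad()
        loss.backward()
        opt.step()

    with torch.no_grad():
        z_s = F.relu(support_feats * scale)
        z_q = F.relu(query_feats * scale)
        protos = torch.stack([z_s[support_labels == c].mean(0)
                              for c in range(num_classes)])
        return (-torch.cdist(z_q, protos) ** 2).argmax(dim=1)
```

As the entropy term drives the query soft assignments toward hard ones, the objective approaches a prototype-based (K-means-like) clustering of the task's features, which is the intuition behind the bound-optimization link mentioned in the abstract.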
Related papers
- PseudoNeg-MAE: Self-Supervised Point Cloud Learning using Conditional Pseudo-Negative Embeddings [55.55445978692678]
PseudoNeg-MAE is a self-supervised learning framework that enhances global feature representation of point cloud mask autoencoders.
We show that PseudoNeg-MAE achieves state-of-the-art performance on the ModelNet40 and ScanObjectNN datasets.
arXiv Detail & Related papers (2024-09-24T07:57:21Z)
- Enabling Tensor Decomposition for Time-Series Classification via A Simple Pseudo-Laplacian Contrast [26.28414569796961]
We propose a novel Pseudo Laplacian Contrast (PLC) tensor decomposition framework.
It integrates the data augmentation and cross-view Laplacian to enable the extraction of class-aware representations.
Experiments on various datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-09-23T16:48:13Z)
- Enhancing Robustness of Vision-Language Models through Orthogonality Learning and Self-Regularization [77.62516752323207]
We introduce an orthogonal fine-tuning method for efficiently fine-tuning pretrained weights while enabling enhanced robustness and generalization (a generic sketch of orthogonal fine-tuning is given after this list).
A self-regularization strategy is further exploited to maintain stability in terms of the zero-shot generalization of VLMs; the method is dubbed OrthSR.
For the first time, we revisit CLIP and CoOp with our method to effectively improve the model in the few-shot image classification scenario.
arXiv Detail & Related papers (2024-07-11T10:35:53Z)
- Variable Substitution and Bilinear Programming for Aligning Partially Overlapping Point Sets [48.1015832267945]
This research presents a method for aligning partially overlapping point sets by minimizing the objective function of the RPM (robust point matching) algorithm.
A branch-and-bound (BnB) algorithm is devised that branches only over the parameters, thereby boosting the convergence rate.
Empirical evaluations demonstrate better robustness of the proposed methodology against non-rigid deformation, positional noise, and outliers, compared with prevailing state-of-the-art approaches.
arXiv Detail & Related papers (2024-05-14T13:28:57Z)
- Fine-grained Retrieval Prompt Tuning [149.9071858259279]
Fine-grained Retrieval Prompt Tuning steers a frozen pre-trained model to perform the fine-grained retrieval task from the perspectives of sample prompt and feature adaptation.
Our FRPT with fewer learnable parameters achieves the state-of-the-art performance on three widely-used fine-grained datasets.
arXiv Detail & Related papers (2022-07-29T04:10:04Z)
- Deterministic Decoupling of Global Features and its Application to Data Analysis [0.0]
We propose a new formalism based on defining transformations on submanifolds.
Through these transformations, we define a normalization that, as we demonstrate, allows for the decoupling of differentiable features.
We apply this method in the original data domain and at the output of a filter bank to regression and classification problems based on global descriptors.
arXiv Detail & Related papers (2022-07-05T15:54:39Z)
- Neural TMDlayer: Modeling Instantaneous flow of features via SDE Generators [37.92379202320938]
We study how stochastic differential equation (SDE) based ideas can inspire new modifications to existing algorithms for a set of problems in computer vision.
We show promising experiments on a number of vision tasks, including few-shot learning, point cloud transformers, and deep variational segmentation.
arXiv Detail & Related papers (2021-08-19T19:54:04Z)
- Dynamic Feature Regularized Loss for Weakly Supervised Semantic Segmentation [37.43674181562307]
We propose a new regularized loss which utilizes both shallow and deep features that are dynamically updated.
Our approach achieves new state-of-the-art performance, outperforming other approaches by a significant margin with a more than 6% mIoU increase.
arXiv Detail & Related papers (2021-08-03T05:11:00Z)
- Exploring Complementary Strengths of Invariant and Equivariant Representations for Few-Shot Learning [96.75889543560497]
In many real-world problems, collecting a large number of labeled samples is infeasible.
Few-shot learning is the dominant approach to address this issue, where the objective is to quickly adapt to novel categories in the presence of a limited number of samples.
We propose a novel training mechanism that simultaneously enforces equivariance and invariance to a general set of geometric transformations.
arXiv Detail & Related papers (2021-03-01T21:14:33Z)
- Invariant Integration in Deep Convolutional Feature Space [77.99182201815763]
We show how to incorporate prior knowledge into a deep neural network architecture in a principled manner.
We report state-of-the-art performance on the Rotated-MNIST dataset.
arXiv Detail & Related papers (2020-04-20T09:45:43Z)
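As referenced in the OrthSR entry above, the following is a generic, hypothetical sketch of orthogonal fine-tuning. It is not that paper's code: the Cayley-transform parametrization is a standard construction for learning an orthogonal rotation of frozen pretrained weights, and the class name and shapes below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class OrthogonalAdapter(nn.Module):
    """Rotates a frozen weight matrix by a learned orthogonal matrix R.

    Hypothetical sketch: R is parametrized via the Cayley transform of a
    skew-symmetric matrix, so it is orthogonal by construction.
    """
    def __init__(self, frozen_weight):  # frozen_weight: (d_out, d_in)
        super().__init__()
        d = frozen_weight.shape[0]
        self.register_buffer("w", frozen_weight)   # pretrained, stays fixed
        self.s = nn.Parameter(torch.zeros(d, d))   # unconstrained parameters

    def forward(self, x):
        # Skew-symmetric S guarantees (I + S)^{-1}(I - S) is orthogonal.
        s = self.s - self.s.t()
        eye = torch.eye(s.shape[0], device=s.device, dtype=s.dtype)
        r = torch.linalg.solve(eye + s, eye - s)    # Cayley transform
        return x @ (r @ self.w).t()
```

Because R is orthogonal, the rotated weights R @ w preserve the norms of and angles between the frozen weight rows, which is the usual motivation for this family of fine-tuning methods.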
This list is automatically generated from the titles and abstracts of the papers on this site.