Test-time Training for Data-efficient UCDR
- URL: http://arxiv.org/abs/2208.09198v3
- Date: Tue, 11 Apr 2023 06:17:04 GMT
- Title: Test-time Training for Data-efficient UCDR
- Authors: Soumava Paul, Titir Dutta, Aheli Saha, Abhishek Samanta, Soma Biswas
- Abstract summary: The Universal Cross-domain Retrieval (UCDR) protocol is a pioneer in this field.
In this work, we explore the generalized retrieval problem in a data-efficient manner.
- Score: 22.400837122986175
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image retrieval under generalized test scenarios has gained significant
momentum in literature, and the recently proposed protocol of Universal
Cross-domain Retrieval is a pioneer in this direction. A common practice in any
such generalized classification or retrieval algorithm is to exploit samples
from many domains during training to learn a domain-invariant representation of
data. Such a criterion is often restrictive, and thus in this work, for the first
time, we explore the generalized retrieval problem in a data-efficient manner.
Specifically, we aim to generalize any pre-trained cross-domain retrieval
network towards any unknown query domain/category, by means of adapting the
model on the test data leveraging self-supervised learning techniques. Toward
that goal, we explore different self-supervised loss functions (for example,
RotNet, JigSaw, and Barlow Twins) and analyze their effectiveness for this
task. Extensive experiments demonstrate that the proposed approach is simple, easy
to implement, and effective in handling data-efficient UCDR.
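Of the pretext losses the abstract names, Barlow Twins is the easiest to show in isolation. Below is a minimal NumPy sketch of that loss; it is an illustrative simplification, not the authors' implementation, and the batch size, embedding dimension, and `lam` weight are arbitrary choices.

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lam=5e-3):
    """Simplified Barlow Twins objective on two (N, D) batches of embeddings
    produced from two augmented views of the same images."""
    n = z_a.shape[0]
    # Standardize each embedding dimension across the batch.
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-8)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-8)
    c = z_a.T @ z_b / n  # cross-correlation matrix, shape (D, D)
    on_diag = ((np.diagonal(c) - 1.0) ** 2).sum()            # pull diagonal to 1
    off_diag = (c ** 2).sum() - (np.diagonal(c) ** 2).sum()  # decorrelate the rest
    return on_diag + lam * off_diag

rng = np.random.default_rng(0)
z = rng.normal(size=(64, 8))
# Identical views are already perfectly correlated, so the loss is near zero;
# embeddings of unrelated inputs are not, so the loss is large.
loss_same = barlow_twins_loss(z, z)
loss_diff = barlow_twins_loss(z, rng.normal(size=(64, 8)))
```

At test time, minimizing such a label-free loss on the unlabeled query batch, with the retrieval backbone producing `z_a` and `z_b` from two augmentations, is the kind of adaptation step the abstract describes.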
Related papers
- Generalization Capabilities of Neural Cellular Automata for Medical Image Segmentation: A Robust and Lightweight Approach [6.537479355990391]
U-Nets exhibit a significant decline in performance when tested on data that deviates from the training distribution.
This paper investigates the implications of utilizing models that are smaller by three orders of magnitude (i.e., x1000) compared to a conventional U-Net.
arXiv Detail & Related papers (2024-08-28T06:18:55Z) - NormAUG: Normalization-guided Augmentation for Domain Generalization [60.159546669021346]
We propose a simple yet effective method called NormAUG (Normalization-guided Augmentation) for deep learning.
Our method introduces diverse information at the feature level and improves the generalization of the main path.
In the test stage, we leverage an ensemble strategy to combine the predictions from the auxiliary path of our model, further boosting performance.
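The summary does not specify how NormAUG's predictions are combined; averaging the softmax probabilities of the main and auxiliary paths is one plausible reading, sketched below with hypothetical names (`ensemble_predict`, the path count, and the class count are all illustrative assumptions).

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(main_logits, aux_logits_list):
    """Average class probabilities over the main path and auxiliary paths
    (one plausible ensemble strategy, not necessarily the paper's)."""
    probs = softmax(main_logits)
    for aux in aux_logits_list:
        probs += softmax(aux)
    return probs / (1 + len(aux_logits_list))

rng = np.random.default_rng(1)
main = rng.normal(size=(4, 10))                       # 4 samples, 10 classes
auxs = [rng.normal(size=(4, 10)) for _ in range(2)]   # two auxiliary paths
probs = ensemble_predict(main, auxs)
```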
arXiv Detail & Related papers (2023-07-25T13:35:45Z) - Single Domain Generalization via Normalised Cross-correlation Based
Convolutions [14.306250516592304]
Single Domain Generalization aims to train robust models using data from a single source.
We propose a novel operator called XCNorm that computes the normalized cross-correlation between weights and an input feature patch.
We show that deep neural networks composed of this operator are robust to common semantic distribution shifts.
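The core XCNorm operation can be illustrated on a single flattened patch: mean-subtract the patch and the filter, then take the cosine of the centered vectors. This is a one-patch sketch under our own reading of the summary (the paper applies the operator per sliding window inside a convolution, and the `eps` handling here is an assumption).

```python
import numpy as np

def xcnorm(patch, weight, eps=1e-5):
    """Normalized cross-correlation between a flattened feature patch and a
    filter: center both, then divide the inner product by the norms."""
    p = patch - patch.mean()
    w = weight - weight.mean()
    return float(p @ w / (np.linalg.norm(p) * np.linalg.norm(w) + eps))

patch = np.array([1.0, 2.0, 3.0, 4.0])
weight = np.array([0.5, -1.0, 2.0, 0.0])
# The response is (almost exactly) invariant to additive shifts and positive
# rescaling of the patch -- the kind of photometric change a domain shift causes.
r_plain = xcnorm(patch, weight)
r_shifted = xcnorm(3.0 * patch + 7.0, weight)
```

A convolution built from this operator slides the filter over every patch of the feature map, which is what makes the resulting network less sensitive to such distribution shifts.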
arXiv Detail & Related papers (2023-07-12T04:15:36Z) - On the Universal Adversarial Perturbations for Efficient Data-free
Adversarial Detection [55.73320979733527]
We propose a data-agnostic adversarial detection framework, which induces different responses between normal and adversarial samples to UAPs.
Experimental results show that our method achieves competitive detection performance on various text classification tasks.
arXiv Detail & Related papers (2023-06-27T02:54:07Z) - Generalizable Metric Network for Cross-domain Person Re-identification [55.71632958027289]
The cross-domain (i.e., domain generalization) scenario presents a challenge in Re-ID tasks.
Most existing methods aim to learn domain-invariant or robust features for all domains.
We propose a Generalizable Metric Network (GMN) to explore sample similarity in the sample-pair space.
arXiv Detail & Related papers (2023-06-21T03:05:25Z) - To Adapt or to Annotate: Challenges and Interventions for Domain
Adaptation in Open-Domain Question Answering [46.403929561360485]
We study the end-to-end model performance of open-domain question answering (ODQA).
We find that not only do models fail to generalize, but high retrieval scores often still yield poor answer prediction accuracy.
We propose and evaluate several intervention methods which improve end-to-end answer F1 score by up to 24 points.
arXiv Detail & Related papers (2022-12-20T16:06:09Z) - One-Shot Domain Adaptive and Generalizable Semantic Segmentation with
Class-Aware Cross-Domain Transformers [96.51828911883456]
Unsupervised sim-to-real domain adaptation (UDA) for semantic segmentation aims to improve the real-world test performance of a model trained on simulated data.
Traditional UDA often assumes that there are abundant unlabeled real-world data samples available during training for the adaptation.
We explore the one-shot unsupervised sim-to-real domain adaptation (OSUDA) and generalization problem, where only one real-world data sample is available.
arXiv Detail & Related papers (2022-12-14T15:54:15Z) - Semi-Supervised Domain Generalizable Person Re-Identification [74.75528879336576]
Existing person re-identification (re-id) methods struggle when deployed to a new unseen scenario.
Recent efforts have been devoted to domain adaptive person re-id where extensive unlabeled data in the new scenario are utilized in a transductive learning manner.
We aim to explore multiple labeled datasets to learn generalized domain-invariant representations for person re-id.
arXiv Detail & Related papers (2021-08-11T06:08:25Z) - Multi-Domain Adversarial Feature Generalization for Person
Re-Identification [52.835955258959785]
We propose a multi-dataset feature generalization network (MMFA-AAE).
It is capable of learning a universal domain-invariant feature representation from multiple labeled datasets and generalizing it to unseen camera systems.
It also surpasses many state-of-the-art supervised methods and unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2020-11-25T08:03:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.