Black-box Unsupervised Domain Adaptation with Bi-directional
Atkinson-Shiffrin Memory
- URL: http://arxiv.org/abs/2308.13236v1
- Date: Fri, 25 Aug 2023 08:06:48 GMT
- Title: Black-box Unsupervised Domain Adaptation with Bi-directional
Atkinson-Shiffrin Memory
- Authors: Jingyi Zhang, Jiaxing Huang, Xueying Jiang, Shijian Lu
- Abstract summary: Black-box unsupervised domain adaptation (UDA) learns with source predictions of target data without accessing either source data or source models during training.
We propose BiMem, a bi-directional memorization mechanism that learns to remember useful and representative information to correct noisy pseudo labels on the fly.
BiMem achieves superior domain adaptation performance consistently across various visual recognition tasks such as image classification, semantic segmentation and object detection.
- Score: 59.51934126717572
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Black-box unsupervised domain adaptation (UDA) learns with source predictions
of target data without accessing either source data or source models during
training, and it has clear superiority in data privacy and flexibility in
target network selection. However, the source predictions of target data are
often noisy, and training with them is prone to learning collapse. We propose
BiMem, a bi-directional memorization mechanism that learns to remember useful
and representative information to correct noisy pseudo labels on the fly,
leading to robust black-box UDA that can generalize across different visual
recognition tasks. BiMem constructs three types of memory, including sensory
memory, short-term memory, and long-term memory, which interact in a
bi-directional manner for comprehensive and robust memorization of learnt
features. It includes a forward memorization flow that identifies and stores
useful features and a backward calibration flow that rectifies features' pseudo
labels progressively. Extensive experiments show that BiMem achieves superior
domain adaptation performance consistently across various visual recognition
tasks such as image classification, semantic segmentation and object detection.
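The memory design described above can be pictured with a minimal NumPy sketch. Everything below is an illustrative assumption rather than the authors' implementation: confident batch features (the sensory input) pass into a FIFO short-term buffer and are consolidated into per-class long-term centroids (the forward memorization flow), and noisy pseudo labels are then rectified by nearest-centroid lookup (the backward calibration flow).

import numpy as np

class BiMemSketch:
    """Illustrative three-level memory; not the official BiMem code."""

    def __init__(self, num_classes, feat_dim, short_capacity=256):
        self.short = []                                # short-term: recent confident (feature, label) pairs
        self.short_capacity = short_capacity
        self.long = np.zeros((num_classes, feat_dim))  # long-term: one centroid per class
        self.counts = np.zeros(num_classes)

    def forward_memorize(self, feats, pseudo_labels, confidences, tau=0.9):
        """Forward flow: current batch (sensory) -> short-term -> long-term."""
        for f, y, c in zip(feats, pseudo_labels, confidences):
            if c < tau:                                # keep only confident features
                continue
            self.short.append((f, y))
            if len(self.short) > self.short_capacity:  # FIFO eviction
                self.short.pop(0)
            self.counts[y] += 1                        # consolidate as a running class mean
            self.long[y] += (f - self.long[y]) / self.counts[y]

    def backward_calibrate(self, feats, pseudo_labels):
        """Backward flow: rectify noisy pseudo labels with long-term memory."""
        seen = self.counts > 0
        if not seen.any():
            return pseudo_labels                       # nothing memorized yet
        dists = np.linalg.norm(feats[:, None, :] - self.long[None, :, :], axis=2)
        dists[:, ~seen] = np.inf                       # ignore empty class slots
        return dists.argmin(axis=1)                    # nearest-centroid relabeling

The running mean keeps the long-term memory stable while the short-term buffer tracks recent confident evidence, which is one plausible reading of "comprehensive and robust memorization".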
Related papers
- Saliency-Augmented Memory Completion for Continual Learning [8.243137410556495]
How to forget is a problem that continual learning must address.
Our paper proposes a new saliency-augmented memory completion framework for continual learning.
arXiv Detail & Related papers (2022-12-26T18:06:39Z)
- Pin the Memory: Learning to Generalize Semantic Segmentation [68.367763672095]
We present a novel memory-guided domain generalization method for semantic segmentation based on meta-learning framework.
Our method abstracts the conceptual knowledge of semantic classes into categorical memory which is constant beyond the domains.
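A categorical memory of the kind this summary describes can be sketched as one prototype per semantic class, updated slowly so it stays stable across domains. The exponential-moving-average rule and momentum value below are illustrative assumptions, not the paper's exact update.

import numpy as np

def update_categorical_memory(memory, feats, labels, momentum=0.99):
    """EMA update of one prototype per semantic class (illustrative sketch).

    memory: (C, D) per-class prototypes; feats: (N, D) features;
    labels: (N,) class ids. A high momentum keeps the memory nearly
    constant, so it can act as domain-invariant conceptual knowledge.
    """
    for c in np.unique(labels):
        class_mean = feats[labels == c].mean(axis=0)
        memory[c] = momentum * memory[c] + (1.0 - momentum) * class_mean
    return memory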
arXiv Detail & Related papers (2022-04-07T17:34:01Z)
- Memory-Guided Semantic Learning Network for Temporal Sentence Grounding [55.31041933103645]
We propose a memory-augmented network that learns and memorizes the rarely appearing content in temporal sentence grounding (TSG) tasks.
MGSL-Net consists of three main parts: a cross-modal interaction module, a memory augmentation module, and a heterogeneous attention module.
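The memory augmentation idea, storing rarely appearing content, can be hedged into a generic write rule: memorize a feature only when it is dissimilar to everything already stored. The cosine test and threshold below are illustrative assumptions, not the paper's module.

import numpy as np

def write_if_rare(memory, feat, sim_threshold=0.7):
    """Store a feature only if no memorized feature is similar to it
    (a sketch of memorizing rarely appearing content)."""
    if not memory:
        return memory + [feat]
    sims = [m @ feat / (np.linalg.norm(m) * np.linalg.norm(feat) + 1e-8)
            for m in memory]
    if max(sims) < sim_threshold:  # unlike anything stored, i.e. rare
        memory.append(feat)
    return memory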
arXiv Detail & Related papers (2022-01-03T02:32:06Z)
- Memory Wrap: a Data-Efficient and Interpretable Extension to Image Classification Models [9.848884631714451]
Memory Wrap is a plug-and-play extension to any image classification model.
It adopts a content-attention mechanism to improve both data efficiency and model interpretability.
We show that Memory Wrap outperforms standard classifiers when it learns from a limited set of data.
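A minimal content-attention read in the spirit of this summary follows; plain softmax attention over stored training samples stands in here, and the paper's exact attention and output-combination details may differ.

import numpy as np

def memory_wrap_read(enc_input, memory_feats, memory_onehots):
    """Content attention over stored samples (illustrative sketch).

    enc_input: (D,) encoding of the current image;
    memory_feats: (M, D) encodings of stored training samples;
    memory_onehots: (M, C) their one-hot labels.
    """
    scores = memory_feats @ enc_input          # content similarity
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax attention
    class_vote = weights @ memory_onehots      # which memories drove the output
    return weights, class_vote

The attention weights are what makes the extension interpretable: they name the memorized samples responsible for each prediction.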
arXiv Detail & Related papers (2021-06-01T07:24:19Z)
- Memory-Associated Differential Learning [10.332918082271153]
We propose a novel learning paradigm called Memory-Associated Differential (MAD) Learning.
We first introduce an additional component called Memory to memorize all the training data. Then we learn the differences of labels, as well as the associations of features, through a combination of a differential equation and some sampling methods.
In the evaluation phase, we predict unknown labels by inferring from the memorized facts plus the learnt differences and associations in a geometrically meaningful manner.
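A rough sketch of that prediction rule: each memorized fact votes with its stored label plus a learned label difference, weighted by how strongly its features associate with the query. The distance-based weights and the toy linear difference model are illustrative assumptions.

import numpy as np

def mad_predict(x, mem_x, mem_y, diff_fn, temperature=1.0):
    """Predict from memorized facts plus learnt differences (sketch).

    mem_x: (M, D) memorized inputs; mem_y: (M,) their labels;
    diff_fn: a learned model of the label difference between two inputs.
    """
    assoc = -np.linalg.norm(mem_x - x, axis=1) / temperature
    w = np.exp(assoc - assoc.max())
    w /= w.sum()                               # association weights
    votes = np.array([y_i + diff_fn(x, x_i)    # memorized label + difference
                      for x_i, y_i in zip(mem_x, mem_y)])
    return w @ votes

# toy usage with a hypothetical linear difference model
rng = np.random.default_rng(0)
mem_x, mem_y = rng.normal(size=(50, 4)), rng.normal(size=50)
pred = mad_predict(rng.normal(size=4), mem_x, mem_y,
                   diff_fn=lambda a, b: 0.1 * (a - b).sum())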
arXiv Detail & Related papers (2021-02-10T03:48:12Z)
- HM4: Hidden Markov Model with Memory Management for Visual Place Recognition [54.051025148533554]
We develop a Hidden Markov Model approach for visual place recognition in autonomous driving.
Our algorithm, dubbed HM4, exploits temporal look-ahead to transfer promising candidate images between passive storage and active memory.
We show that this allows constant time and space inference for a fixed coverage area.
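The memory-management step can be sketched as follows: propagate the HMM belief a few steps ahead, then keep in active memory only the images of the most promising places, leaving the rest in passive storage. The look-ahead depth and top-k selection are illustrative assumptions.

import numpy as np

def select_active_memory(belief, transition, active_size, lookahead=3):
    """Choose which stored images to hold in active memory (sketch).

    belief: (S,) current HMM belief over S places;
    transition: (S, S) row-stochastic place transition matrix.
    """
    future = belief.copy()
    for _ in range(lookahead):                 # temporal look-ahead
        future = future @ transition
    return np.argsort(future)[-active_size:]   # promising candidates go active

Because the active set has a fixed size and the belief covers a fixed area, each update touches a bounded amount of memory, which matches the constant time and space claim.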
arXiv Detail & Related papers (2020-11-01T08:49:24Z)
- Learning to Learn Variational Semantic Memory [132.39737669936125]
We introduce variational semantic memory into meta-learning to acquire long-term knowledge for few-shot learning.
The semantic memory is grown from scratch and gradually consolidated by absorbing information from tasks it experiences.
We formulate memory recall as the variational inference of a latent memory variable from addressed contents.
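Recall as variational inference can be sketched as soft addressing followed by sampling a latent memory variable with the reparameterization trick; the linear posterior parameterization below is an illustrative assumption.

import numpy as np

def recall_memory(query, mem_keys, mem_vals, W_mu, W_logvar, rng):
    """Memory recall as sampling a latent memory variable (sketch).

    Addressing weights select relevant contents; a Gaussian over the
    latent memory is parameterized from the addressed summary.
    """
    scores = mem_keys @ query
    w = np.exp(scores - scores.max())
    w /= w.sum()                                   # soft addressing
    summary = w @ mem_vals                         # addressed contents
    mu, logvar = W_mu @ summary, W_logvar @ summary
    return mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)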
arXiv Detail & Related papers (2020-10-20T15:05:26Z)
- Memory-based Jitter: Improving Visual Recognition on Long-tailed Data with Diversity In Memory [39.56214005885884]
We introduce a simple and reliable method named Memory-based Jitter (MBJ) to augment the tail classes with higher diversity.
MBJ is applicable to two fundamental visual recognition tasks, i.e., deep image classification and deep metric learning.
Experiments on five long-tailed classification benchmarks and two deep metric learning benchmarks demonstrate significant improvement.
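One plausible reading of the mechanism: per-class queues of embeddings accumulated across training iterations, replayed to add diversity to tail classes. The capacity, tail threshold, and replay count below are illustrative assumptions.

from collections import defaultdict, deque

class MemoryBasedJitterSketch:
    """Augment tail classes with stored historical embeddings (sketch)."""

    def __init__(self, capacity=128, tail_threshold=50):
        self.queues = defaultdict(lambda: deque(maxlen=capacity))
        self.tail_threshold = tail_threshold   # classes with few images

    def store(self, embeddings, labels):
        for e, y in zip(embeddings, labels):
            self.queues[int(y)].append(e)      # drift across iterations acts as jitter

    def augment(self, labels, class_counts, extra_per_class=4):
        """Return extra (embedding, label) pairs for tail classes."""
        extras = []
        for y in set(int(l) for l in labels):
            if class_counts[y] < self.tail_threshold:
                for e in list(self.queues[y])[:extra_per_class]:
                    extras.append((e, y))
        return extras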
arXiv Detail & Related papers (2020-08-22T11:01:46Z)
- Learning and Memorizing Representative Prototypes for 3D Point Cloud Semantic and Instance Segmentation [117.29799759864127]
3D point cloud semantic and instance segmentation is crucial and fundamental for 3D scene understanding.
Deep networks can easily forget the non-dominant cases during the learning process, resulting in unsatisfactory performance.
We propose a memory-augmented network that learns and memorizes representative prototypes that broadly cover diverse samples.
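A generic prototype-memory read in the spirit of this summary: each point attends over the memorized prototypes and mixes the retrieved prototype back into its feature. The residual combination is an illustrative assumption.

import numpy as np

def prototype_read(point_feats, prototypes):
    """Enhance per-point features with memorized prototypes (sketch).

    point_feats: (N, D) features; prototypes: (K, D) memory slots.
    """
    scores = point_feats @ prototypes.T                    # (N, K) similarities
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                      # attention weights
    return point_feats + w @ prototypes                    # residual enhancement

Reading from a fixed prototype set is one way to keep non-dominant cases from being forgotten: their prototypes persist even when recent batches rarely contain them.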
arXiv Detail & Related papers (2020-01-06T01:07:46Z)