Hierarchical Variational Memory for Few-shot Learning Across Domains
- URL: http://arxiv.org/abs/2112.08181v1
- Date: Wed, 15 Dec 2021 15:01:29 GMT
- Title: Hierarchical Variational Memory for Few-shot Learning Across Domains
- Authors: Yingjun Du, Xiantong Zhen, Ling Shao, Cees G. M. Snoek
- Abstract summary: We introduce a hierarchical prototype model, where each level of the prototype fetches corresponding information from the hierarchical memory.
The model is endowed with the ability to flexibly rely on features at different semantic levels if the domain shift circumstances so demand.
We conduct thorough ablation studies to demonstrate the effectiveness of each component in our model.
- Score: 120.87679627651153
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural memory enables fast adaptation to new tasks with just a few training
samples. Existing memory models store features only from the single last layer,
which does not generalize well in the presence of a domain shift between training
and test distributions. Rather than relying on a flat memory, we propose a
hierarchical alternative that stores features at different semantic levels. We
introduce a hierarchical prototype model, where each level of the prototype
fetches corresponding information from the hierarchical memory. The model is
endowed with the ability to flexibly rely on features at different semantic
levels if the domain shift circumstances so demand. We meta-learn the model by
a newly derived hierarchical variational inference framework, where
hierarchical memory and prototypes are jointly optimized. To explore and
exploit the importance of different semantic levels, we further propose to
learn the weights associated with the prototype at each level in a data-driven
way, which enables the model to adaptively choose the most generalizable
features. We conduct thorough ablation studies to demonstrate the effectiveness
of each component in our model. The new state-of-the-art performance on
cross-domain and competitive performance on traditional few-shot classification
further substantiates the benefit of hierarchical variational memory.
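As a rough illustration of the hierarchical-prototype idea, the sketch below (plain NumPy, not the authors' code) builds one prototype per class at several network levels and combines per-level distances with softmax-normalized, learnable level weights; the memory read step is omitted and all names are illustrative.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def hierarchical_prototype_classify(support, support_labels, query, level_logits, n_classes):
    """support/query: lists over semantic levels of features from different network layers.

    level_logits are learnable scores turned into data-driven level weights,
    so the classifier can lean on whichever level generalizes best under domain shift.
    """
    weights = softmax(np.asarray(level_logits, dtype=float))
    scores = np.zeros(n_classes)
    for w, feats_l, q_l in zip(weights, support, query):
        # one prototype per class at this level: mean of its support features
        protos = np.stack([feats_l[support_labels == c].mean(0) for c in range(n_classes)])
        dists = ((protos - q_l) ** 2).sum(1)   # squared Euclidean distance to each prototype
        scores += w * (-dists)                 # closer prototype -> higher class score
    return int(scores.argmax())
```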
Related papers
- Benchmarking Hebbian learning rules for associative memory [0.0]
Associative memory is a key concept in cognitive and computational brain science.
We benchmark six different learning rules on storage capacity and prototype extraction.
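For orientation, a minimal Hopfield-style associative memory with the classical Hebbian outer-product rule is sketched below; it is only an illustrative baseline, not one of the six benchmarked rules.

```python
import numpy as np

def hebbian_store(patterns):
    """Build a Hopfield weight matrix from ±1 patterns with the outer-product rule."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)              # no self-connections
    return W

def hebbian_recall(W, probe, steps=10):
    """Iteratively clean up a noisy probe by thresholded updates."""
    s = probe.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s
```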
arXiv Detail & Related papers (2023-12-30T21:49:47Z) - Submodel Partitioning in Hierarchical Federated Learning: Algorithm Design and Convergence Analysis [15.311309249848739]
Hierarchical federated learning (HFL) has demonstrated promising scalability advantages over the traditional "star-topology" architecture-based federated learning (FL).
In this paper, we propose hierarchical independent submodel training (HIST) over resource-constrained Internet of Things (IoT) networks.
The key idea behind HIST is to partition the global model into disjoint submodels in each round and distribute them across different cells.
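A hedged sketch of the partitioning step described above, assuming the global model is treated as a flat parameter vector that is split into disjoint index sets each round, with one submodel per cell; the real HIST aggregation and convergence machinery is in the cited paper.

```python
import numpy as np

def partition_submodels(n_params, n_cells, rng):
    """Split parameter indices into disjoint submodels, one per cell, for this round."""
    return np.array_split(rng.permutation(n_params), n_cells)

def hist_round(global_model, cell_updates, rng):
    """One simplified round: every cell trains only the parameters it was assigned."""
    parts = partition_submodels(global_model.size, len(cell_updates), rng)
    new_model = global_model.copy()
    for update_fn, idx in zip(cell_updates, parts):
        new_model[idx] = update_fn(global_model[idx])   # cell returns its updated slice
    return new_model
```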
arXiv Detail & Related papers (2023-10-27T04:42:59Z) - A Model or 603 Exemplars: Towards Memory-Efficient Class-Incremental Learning [56.450090618578]
Class-Incremental Learning (CIL) aims to train a model that continually learns new classes within a limited memory budget.
We show that when the model size is counted into the total budget and methods are compared at aligned memory cost, saving models does not consistently work.
We propose a simple yet effective baseline, denoted as MEMO for Memory-efficient Expandable MOdel.
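A small, purely illustrative budget calculation for the aligned-memory comparison mentioned above: once saved model copies are charged to the same budget as exemplars, extra models leave room for fewer exemplars. The helper and all numbers are hypothetical, not from the paper.

```python
def exemplar_capacity(total_budget_mb, n_saved_models, model_size_mb, exemplar_size_mb):
    """Approximate number of exemplars that fit once saved model copies share the budget."""
    remaining_mb = total_budget_mb - n_saved_models * model_size_mb
    return max(0, round(remaining_mb / exemplar_size_mb))

# With a 1024 MB budget, 100 MB models, and 0.1 MB exemplars: one saved model
# leaves room for ~9240 exemplars, three saved models for only ~7240.
print(exemplar_capacity(1024, 1, 100, 0.1), exemplar_capacity(1024, 3, 100, 0.1))
```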
arXiv Detail & Related papers (2022-05-26T08:24:01Z) - Pin the Memory: Learning to Generalize Semantic Segmentation [68.367763672095]
We present a novel memory-guided domain generalization method for semantic segmentation based on a meta-learning framework.
Our method abstracts the conceptual knowledge of semantic classes into a categorical memory that stays constant across domains.
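A minimal sketch of what a categorical memory of this kind could look like, assuming one slot per semantic class that is updated from every training domain and read by cosine similarity; the slot layout, momentum, and method names are assumptions.

```python
import numpy as np

class CategoricalMemory:
    """One memory slot per semantic class, shared by all training domains."""
    def __init__(self, n_classes, dim, momentum=0.99):
        self.slots = np.zeros((n_classes, dim))
        self.momentum = momentum

    def update(self, feats, labels):
        # slowly absorb class features regardless of which domain the batch came from
        for c in np.unique(labels):
            class_mean = feats[labels == c].mean(0)
            self.slots[c] = self.momentum * self.slots[c] + (1 - self.momentum) * class_mean

    def read(self, feats):
        # cosine similarity of (pixel or region) features to every class slot
        f = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
        m = self.slots / (np.linalg.norm(self.slots, axis=1, keepdims=True) + 1e-8)
        return f @ m.T
```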
arXiv Detail & Related papers (2022-04-07T17:34:01Z) - Multi-dataset Pretraining: A Unified Model for Semantic Segmentation [97.61605021985062]
We propose a unified framework, termed Multi-Dataset Pretraining, to take full advantage of the fragmented annotations of different datasets.
This is achieved by first pretraining the network via the proposed pixel-to-prototype contrastive loss over multiple datasets.
In order to better model the relationship among images and classes from different datasets, we extend the pixel-level embeddings via cross-dataset mixing.
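A hedged sketch of a pixel-to-prototype contrastive loss of the kind named above: each pixel embedding is attracted to the prototype of its class and repelled from the others, with prototypes shared across datasets. The paper's exact formulation may differ; the temperature and names are assumptions.

```python
import numpy as np

def pixel_to_prototype_loss(pixel_embs, pixel_labels, prototypes, temperature=0.1):
    """InfoNCE-style loss between L2-normalized pixel embeddings and class prototypes."""
    p = pixel_embs / np.linalg.norm(pixel_embs, axis=1, keepdims=True)
    c = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    logits = p @ c.T / temperature                       # [n_pixels, n_classes]
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(pixel_labels)), pixel_labels].mean()
```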
arXiv Detail & Related papers (2021-06-08T06:13:11Z) - Fine-grained Classification via Categorical Memory Networks [42.413523046712896]
We present a class-specific memory module for fine-grained feature learning.
The memory module stores the prototypical feature representation for each category as a moving average.
We integrate our class-specific memory module into a standard convolutional neural network, yielding a Categorical Memory Network.
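A minimal sketch of a moving-average class memory as summarized above, with the retrieved prototype concatenated to the input feature before the classifier; the momentum value and the concatenation choice are assumptions.

```python
import numpy as np

class ClassMemory:
    """Stores one prototypical feature per category as an exponential moving average."""
    def __init__(self, n_classes, feat_dim, momentum=0.9):
        self.protos = np.zeros((n_classes, feat_dim))
        self.momentum = momentum

    def update(self, features, labels):
        for c in np.unique(labels):
            batch_mean = features[labels == c].mean(0)
            self.protos[c] = self.momentum * self.protos[c] + (1 - self.momentum) * batch_mean

    def augment(self, features, labels):
        # pair each sample's feature with its class prototype for the downstream classifier
        return np.concatenate([features, self.protos[labels]], axis=1)
```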
arXiv Detail & Related papers (2020-12-12T11:50:13Z) - Learning to Learn Variational Semantic Memory [132.39737669936125]
We introduce variational semantic memory into meta-learning to acquire long-term knowledge for few-shot learning.
The semantic memory is grown from scratch and gradually consolidated by absorbing information from tasks it experiences.
We formulate memory recall as the variational inference of a latent memory variable from addressed contents.
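A toy sketch of memory recall as inference over a latent memory variable: the query soft-addresses the memory, the addressed content parameterizes a Gaussian, and the recalled memory is a reparameterized sample. It simplifies the paper's variational formulation, and the fixed variance is an assumption.

```python
import numpy as np

def recall_memory(memory, query, rng, temperature=1.0):
    """Soft-address the memory with a query, then sample a latent memory variable."""
    attn = np.exp(memory @ query / temperature)
    attn /= attn.sum()
    addressed = attn @ memory                               # content addressed by the query
    mu, log_var = addressed, np.full_like(addressed, -2.0)  # toy Gaussian posterior
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps                 # reparameterization trick
```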
arXiv Detail & Related papers (2020-10-20T15:05:26Z) - Selecting Relevant Features from a Multi-domain Representation for Few-shot Classification [91.67977602992657]
We propose a new strategy based on feature selection, which is both simpler and more effective than previous feature adaptation approaches.
We show that a simple non-parametric classifier built on top of such features produces high accuracy and generalizes to domains never seen during training.
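A minimal sketch of the strategy summarized above, under the assumption that feature selection keeps the most class-discriminative dimensions of a multi-domain representation on the support set, and a non-parametric nearest-centroid classifier labels the queries; the selection criterion is illustrative.

```python
import numpy as np

def select_features(support, labels, k):
    """Keep the k dimensions whose class means vary most on the support set."""
    class_means = np.stack([support[labels == c].mean(0) for c in np.unique(labels)])
    return np.argsort(class_means.var(0))[-k:]

def nearest_centroid_predict(support, labels, queries, selected):
    """Non-parametric classifier: assign each query to the closest class centroid."""
    s, q = support[:, selected], queries[:, selected]
    classes = np.unique(labels)
    centroids = np.stack([s[labels == c].mean(0) for c in classes])
    dists = ((q[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return classes[dists.argmin(1)]
```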
arXiv Detail & Related papers (2020-03-20T15:44:17Z)