On the Exploration of Incremental Learning for Fine-grained Image
Retrieval
- URL: http://arxiv.org/abs/2010.08020v1
- Date: Thu, 15 Oct 2020 21:07:44 GMT
- Title: On the Exploration of Incremental Learning for Fine-grained Image
Retrieval
- Authors: Wei Chen and Yu Liu and Weiping Wang and Tinne Tuytelaars and Erwin M.
Bakker and Michael Lew
- Abstract summary: We consider the problem of fine-grained image retrieval in an incremental setting, when new categories are added over time.
We propose an incremental learning method to mitigate retrieval performance degradation caused by the forgetting issue.
Our method effectively mitigates catastrophic forgetting on the original classes while achieving high performance on the new classes.
- Score: 45.48333682748607
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we consider the problem of fine-grained image retrieval in an
incremental setting, when new categories are added over time. On the one hand,
repeatedly training the representation on the extended dataset is
time-consuming. On the other hand, fine-tuning the learned representation only
with the new classes leads to catastrophic forgetting. To this end, we propose
an incremental learning method to mitigate retrieval performance degradation
caused by the forgetting issue. Without accessing any samples of the original
classes, the classifier of the original network provides soft "labels" to
transfer knowledge to train the adaptive network, so as to preserve the
previous classification capability. More importantly, a regularization
function based on Maximum Mean Discrepancy (MMD) is devised to minimize the
discrepancy between the new-class features produced by the original network
and those produced by the adaptive network. Extensive experiments on two
datasets show that our method effectively mitigates catastrophic forgetting
on the original classes while achieving high performance on the new classes.
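As a rough illustration of the two ingredients described above, a soft-label distillation term and an MMD-based regularizer could be sketched in PyTorch as follows. The function names, temperature, and Gaussian-kernel choice are assumptions for illustration, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def distillation_loss(orig_logits, adapt_logits, temperature=2.0):
    """Soft-label transfer: the frozen original classifier supplies targets
    for the adaptive network's old-class outputs."""
    soft_targets = F.softmax(orig_logits / temperature, dim=1)
    log_probs = F.log_softmax(adapt_logits / temperature, dim=1)
    return F.kl_div(log_probs, soft_targets,
                    reduction="batchmean") * temperature ** 2

def gaussian_kernel(x, y, sigma=1.0):
    """RBF kernel between all pairs of rows in x and y."""
    sq_dists = torch.cdist(x, y) ** 2
    return torch.exp(-sq_dists / (2 * sigma ** 2))

def mmd_loss(feats_orig, feats_adapt, sigma=1.0):
    """Squared MMD between the new-class features extracted by the frozen
    original network and by the adaptive network."""
    return (gaussian_kernel(feats_orig, feats_orig, sigma).mean()
            + gaussian_kernel(feats_adapt, feats_adapt, sigma).mean()
            - 2 * gaussian_kernel(feats_orig, feats_adapt, sigma).mean())
```

In training, these terms would be weighted and added to the classification loss on the new classes; the exact weighting scheme is left unspecified here.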
Related papers
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for
Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods proposes replaying data from previously experienced tasks when learning new ones.
However, storing raw data is often impractical in view of memory constraints or data privacy issues.
As a replacement, data-free replay methods synthesize samples by inverting the classification model, as sketched below.
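A generic, hypothetical recipe for such inversion is gradient ascent on a class logit starting from noise; the paper's actual procedure is likely more involved, so treat the following only as a sketch.

```python
import torch

def invert_samples(model, target_class, num_samples=8,
                   image_shape=(3, 32, 32), steps=200, lr=0.1):
    """Synthesize pseudo-samples of `target_class` from a frozen classifier
    by gradient ascent on the class logit, starting from random noise."""
    model.eval()
    x = torch.randn(num_samples, *image_shape, requires_grad=True)
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x)
        # Maximize the target-class logit; a small L2 prior keeps pixel
        # values from drifting to extremes.
        loss = -logits[:, target_class].mean() + 1e-4 * x.pow(2).mean()
        loss.backward()
        optimizer.step()
    return x.detach()
```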
arXiv Detail & Related papers (2024-01-12T12:51:12Z)
- Adaptive Cross Batch Normalization for Metric Learning [75.91093210956116]
Metric learning is a fundamental problem in computer vision.
We show that, beyond accumulating embeddings across batches, it is equally important to ensure that the accumulated embeddings are up to date.
In particular, it is necessary to circumvent the representational drift between the accumulated embeddings and the feature embeddings at the current training iteration.
arXiv Detail & Related papers (2023-03-30T03:22:52Z)
- Informative regularization for a multi-layer perceptron RR Lyrae classifier under data shift [3.303002683812084]
We propose a scalable and easily adaptable approach based on an informative regularization and an ad-hoc training procedure to mitigate the shift problem.
Our method provides a new path to incorporate knowledge from characteristic features into artificial neural networks to manage the underlying data shift problem.
arXiv Detail & Related papers (2023-03-12T02:49:19Z)
- Prototypical quadruplet for few-shot class incremental learning [24.814045065163135]
We propose a novel method that improves classification robustness by identifying a better embedding space using an improved contrastive loss.
Our approach retains previously acquired knowledge in the embedding space, even when trained with new classes.
We demonstrate the effectiveness of our method by showing that the embedding space remains intact after training with new classes, and that it outperforms existing state-of-the-art algorithms in accuracy across different sessions.
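For reference, a generic quadruplet margin loss looks roughly like the sketch below; the paper's prototypical variant and its margins are not reproduced here, so this is only an illustration of the quadruplet idea.

```python
import torch.nn.functional as F

def quadruplet_loss(anchor, positive, neg1, neg2, margin1=1.0, margin2=0.5):
    """anchor/positive share a class; neg1 and neg2 come from two other,
    mutually distinct classes. All inputs are (batch, dim) embeddings."""
    d_ap = F.pairwise_distance(anchor, positive)
    d_an = F.pairwise_distance(anchor, neg1)
    d_nn = F.pairwise_distance(neg1, neg2)
    # Triplet-style term plus a second term that also pushes apart the
    # two negative classes, tightening the embedding space.
    loss = F.relu(d_ap - d_an + margin1) + F.relu(d_ap - d_nn + margin2)
    return loss.mean()
```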
arXiv Detail & Related papers (2022-11-05T17:19:14Z)
- Improving Replay-Based Continual Semantic Segmentation with Smart Data Selection [0.0]
We investigate the influences of various replay strategies for semantic segmentation and evaluate them in class- and domain-incremental settings.
Our findings suggest that in a class-incremental setting, it is critical to achieve a uniform distribution for the different classes in the buffer.
In the domain-incremental setting, it is most effective to select buffer samples by uniformly sampling from the distribution of learned feature representations or by choosing samples with median entropy.
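The two selection heuristics could be sketched as follows; the function names and the NumPy formulation are assumptions, not the paper's code.

```python
import numpy as np

def class_uniform_buffer(samples, labels, buffer_size):
    """Class-incremental setting: fill the replay buffer with roughly
    equal counts per class."""
    classes = np.unique(labels)
    per_class = buffer_size // len(classes)
    chosen = []
    for c in classes:
        idx = np.where(labels == c)[0]
        take = min(per_class, len(idx))
        chosen.extend(np.random.choice(idx, take, replace=False))
    return [samples[i] for i in chosen]

def median_entropy_selection(samples, entropies, buffer_size):
    """Domain-incremental setting: keep the samples whose predictive
    entropy is closest to the median."""
    entropies = np.asarray(entropies)
    order = np.argsort(np.abs(entropies - np.median(entropies)))
    return [samples[i] for i in order[:buffer_size]]
```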
arXiv Detail & Related papers (2022-09-20T16:32:06Z)
- New Insights on Reducing Abrupt Representation Change in Online Continual Learning [69.05515249097208]
We focus on the change in representations of observed data that arises when previously unobserved classes appear in the incoming data stream.
We show that applying Experience Replay causes the newly added classes' representations to overlap significantly with the previous classes.
We propose a new method which mitigates this issue by shielding the learned representations from drastic adaptation to accommodate new classes.
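One common way to realize such shielding is an asymmetric cross-entropy that masks old-class logits for samples of new classes, so their gradients cannot drag the old-class representations around. The sketch below follows that spirit and is not the paper's exact rule.

```python
import torch
import torch.nn.functional as F

def shielded_cross_entropy(logits, targets, new_class_ids):
    """For new-class samples, restrict the softmax to the new classes;
    replayed old-class samples still use the full set of logits."""
    new_ids = torch.as_tensor(new_class_ids, device=logits.device)
    mask = torch.full_like(logits, float("-inf"))
    mask[:, new_ids] = 0.0
    is_new = torch.isin(targets, new_ids)
    masked_logits = torch.where(is_new.unsqueeze(1), logits + mask, logits)
    return F.cross_entropy(masked_logits, targets)
```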
arXiv Detail & Related papers (2022-03-08T01:37:00Z)
- Class-incremental Learning with Rectified Feature-Graph Preservation [24.098892115785066]
A central theme of this paper is to learn new classes that arrive in sequential phases over time.
We propose a weighted-Euclidean regularization for old knowledge preservation.
We show how it can work with binary cross-entropy to increase class separation for effective learning of new classes.
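A minimal sketch of a weighted-Euclidean preservation term is given below; the per-dimension weighting is an assumption, and in training this term would be combined with the binary cross-entropy objective on the class outputs.

```python
def weighted_euclidean_reg(old_feats, new_feats, weights):
    """old_feats/new_feats: (batch, dim) torch features from the frozen
    previous model and the current model; weights: (dim,) non-negative
    importance assigned to each feature dimension."""
    diff = new_feats - old_feats
    return (weights * diff.pow(2)).sum(dim=1).mean()
```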
arXiv Detail & Related papers (2020-12-15T07:26:04Z)
- Memory-Efficient Incremental Learning Through Feature Adaptation [71.1449769528535]
We introduce an approach for incremental learning that preserves feature descriptors of training images from previously learned classes.
Keeping the much lower-dimensional feature embeddings of images reduces the memory footprint significantly.
Experimental results show that our method achieves state-of-the-art classification accuracy in incremental learning benchmarks.
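One way to read this is: store compact embeddings instead of images, then learn a small adapter that maps stored old-space embeddings into the updated feature space, supervised by current images for which both old- and new-model features exist. The module and loss below are hypothetical illustrations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureAdapter(nn.Module):
    """Maps embeddings from the previous feature space into the current one."""
    def __init__(self, dim=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))

    def forward(self, old_embedding):
        return self.net(old_embedding)

def adapter_loss(adapter, feats_old_model, feats_new_model):
    """Fit the adapter by regression: current images give paired features
    under the old (frozen) and new models."""
    return F.mse_loss(adapter(feats_old_model), feats_new_model)
```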
arXiv Detail & Related papers (2020-04-01T21:16:05Z)
- Semantic Drift Compensation for Class-Incremental Learning [48.749630494026086]
Class-incremental learning of deep networks sequentially increases the number of classes to be classified.
We propose a new method to estimate the drift, called semantic drift, of features and compensate for it without the need of any exemplars.
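A minimal sketch of such exemplar-free compensation, assuming prototype-based classification and Gaussian distance weighting: shift each stored class prototype by a weighted average of the drift observed on current-task data.

```python
import torch

def compensate_prototype(prototype, feats_before, feats_after, sigma=1.0):
    """prototype: (dim,) old-class mean in the previous feature space.
    feats_before/feats_after: (n, dim) current-task features extracted
    with the previous and the updated model, respectively."""
    drift = feats_after - feats_before                    # per-sample drift
    sq_dist = (feats_before - prototype).pow(2).sum(dim=1)
    w = torch.exp(-sq_dist / (2 * sigma ** 2))            # closeness weights
    est_drift = (w.unsqueeze(1) * drift).sum(dim=0) / (w.sum() + 1e-8)
    return prototype + est_drift
```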
arXiv Detail & Related papers (2020-04-01T13:31:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.