What, How, and When Should Object Detectors Update in Continually
Changing Test Domains?
- URL: http://arxiv.org/abs/2312.08875v1
- Date: Tue, 12 Dec 2023 07:13:08 GMT
- Title: What, How, and When Should Object Detectors Update in Continually
Changing Test Domains?
- Authors: Jayeon Yoo, Dongkwan Lee, Inseop Chung, Donghyun Kim, Nojun Kwak
- Abstract summary: Test-time adaptation algorithms have been proposed to adapt the model online while inferring test data.
We propose a novel online adaptation approach for object detection in continually changing test domains.
Our approach surpasses baselines on widely used benchmarks, achieving improvements of up to 4.9%p and 7.9%p in mAP.
- Score: 34.13756022890991
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It is a well-known fact that the performance of deep learning models
deteriorates when they encounter a distribution shift at test time. Test-time
adaptation (TTA) algorithms have been proposed to adapt the model online while
inferring test data. However, existing research predominantly focuses on
classification tasks, optimizing batch normalization layers or classification
heads; this limits applicability to other model architectures, such as
Transformers, and makes these methods hard to extend to other tasks, such as
object detection. In this paper, we propose a novel online adaptation approach
for object detection in continually changing test domains,
considering which part of the model to update, how to update it, and when to
perform the update. By introducing architecture-agnostic and lightweight
adaptor modules and only updating these while leaving the pre-trained backbone
unchanged, we can rapidly adapt to new test domains in an efficient way and
prevent catastrophic forgetting. Furthermore, we present a practical and
straightforward class-wise feature aligning method for object detection to
resolve domain shifts. Additionally, we enhance efficiency by determining when
the model is sufficiently adapted or when additional adaptation is needed due
to changes in the test distribution. Our approach surpasses baselines on widely
used benchmarks, achieving improvements of up to 4.9\%p and 7.9\%p in mAP for
COCO $\rightarrow$ COCO-corrupted and SHIFT, respectively, while maintaining
about 20 FPS or higher.
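A minimal PyTorch-style sketch of the what/how/when recipe described in the abstract. The adaptor shape, the MSE alignment to stored per-class source means, and the fixed skip threshold below are illustrative assumptions, not the authors' exact formulation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Adaptor(nn.Module):
    """'What': a lightweight residual bottleneck inserted after a frozen
    feature stage; only these parameters are updated at test time."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.down = nn.Linear(dim, hidden)
        self.up = nn.Linear(hidden, dim)
        nn.init.zeros_(self.up.weight)  # identity at start: adaptation begins
        nn.init.zeros_(self.up.bias)    # from the unmodified source model
    def forward(self, x):
        return x + self.up(F.relu(self.down(x)))

def tta_step(feats, labels, src_means, adaptor, opt, skip_tau=0.05):
    """'How': align class-wise test features to stored source means.
    'When': skip the update if the features already look aligned.
    feats: (N, D) box features from the frozen detector; labels: (N,) classes
    taken from the detector's confident predictions; src_means: {class: (D,)}."""
    z = adaptor(feats)
    loss, n = feats.new_zeros(()), 0
    for c, mu_src in src_means.items():
        mask = labels == c
        if mask.any():
            loss = loss + F.mse_loss(z[mask].mean(0), mu_src)
            n += 1
    if n == 0:
        return None
    loss = loss / n
    if loss.item() < skip_tau:   # sufficiently adapted: save the backward pass
        return loss.item()
    opt.zero_grad()
    loss.backward()
    opt.step()                   # updates only the adaptor's parameters
    return loss.item()
```

An optimizer built over adaptor.parameters() only (e.g., torch.optim.SGD(adaptor.parameters(), lr=1e-3)) keeps the pre-trained backbone frozen, which is what the abstract credits for preventing catastrophic forgetting.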
Related papers
- Exploring Test-Time Adaptation for Object Detection in Continually Changing Environments [13.163784646113214]
Continual Test-Time Adaptation (CTTA) has recently emerged as a promising technique to gradually adapt a source-trained model to continually changing target domains.
We present CTAOD, featuring three core components. Firstly, the object-level contrastive learning module extracts object-level features for contrastive learning to refine the feature representation in the target domain.
Secondly, the adaptive monitoring module dynamically skips unnecessary adaptation and updates category-specific thresholds based on predicted confidence scores, improving efficiency and pseudo-label quality (a sketch of this skip logic follows the entry).
arXiv Detail & Related papers (2024-06-24T08:30:03Z)
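A rough sketch of the adaptive-monitoring idea, assuming an EMA rule for the category-specific thresholds and a majority-vote skip test; both are assumptions, not CTAOD's published rule:

```python
import torch

class AdaptiveMonitor:
    """Keeps one confidence threshold per category and skips adaptation when
    predictions look confident enough (illustrative reading of CTAOD)."""
    def __init__(self, num_classes, init_tau=0.5, momentum=0.9):
        self.tau = torch.full((num_classes,), init_tau)
        self.m = momentum
    def update(self, scores, labels):
        # EMA of the mean confidence per observed category
        for c in labels.unique():
            self.tau[c] = self.m * self.tau[c] + (1 - self.m) * scores[labels == c].mean()
    def should_adapt(self, scores, labels):
        # adapt only if most detections fall below their class-specific threshold
        return bool((scores < self.tau[labels]).float().mean() > 0.5)
```

- Test-Time Model Adaptation with Only Forward Passes [68.11784295706995]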
Test-time adaptation has proven effective in adapting a given trained model to unseen test samples with potential distribution shifts.
We propose a test-time Forward-Only Adaptation (FOA) method.
FOA, running on a quantized 8-bit ViT, outperforms gradient-based TENT on a full-precision 32-bit ViT and achieves up to a 24-fold memory reduction on ImageNet-C (a gradient-free sketch follows this entry).
arXiv Detail & Related papers (2024-04-02T05:34:33Z)
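A gradient-free sketch of the forward-only idea: perturb a small set of prompt parameters and score candidates by prediction entropy using forward passes alone. The prompt-conditioned call `model(x, prompts=p)` is hypothetical, and the simple random search below stands in for FOA's more sophisticated derivative-free optimizer:

```python
import torch

@torch.no_grad()
def forward_only_adapt(model, prompts, x, n_cand=8, sigma=0.01):
    """Score each perturbed prompt by the entropy of the model's predictions;
    keep the best. No backward pass is ever taken, so the model can even be
    an 8-bit quantized ViT."""
    def entropy(p):
        probs = model(x, prompts=p).softmax(dim=-1)  # hypothetical API
        return -(probs * probs.clamp_min(1e-8).log()).sum(-1).mean()
    best, best_e = prompts, entropy(prompts)
    for _ in range(n_cand):
        cand = prompts + sigma * torch.randn_like(prompts)
        e = entropy(cand)
        if e < best_e:
            best, best_e = cand, e
    return best
```

- Activate and Reject: Towards Safe Domain Generalization under Category Shift [71.95548187205736]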
We study a practical problem of Domain Generalization under Category Shift (DGCS).
It aims to simultaneously detect unknown-class samples and classify known-class samples in the target domains.
Compared to prior DG works, we face two new challenges: 1) how to learn the concept of "unknown" during training with only source known-class samples, and 2) how to adapt the source-trained model to unseen environments.
arXiv Detail & Related papers (2023-10-07T07:53:12Z)
- AR-TTA: A Simple Method for Real-World Continual Test-Time Adaptation [1.4530711901349282]
We propose to validate test-time adaptation methods using datasets for autonomous driving, namely CLAD-C and SHIFT.
We observe that current test-time adaptation methods struggle to effectively handle varying degrees of domain shift.
We enhance the well-established self-training framework by incorporating a small memory buffer to increase model stability (a buffer sketch follows this entry).
arXiv Detail & Related papers (2023-09-18T19:34:23Z)
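A minimal sketch of a small replay buffer for self-training stability; the reservoir policy is an assumption, and AR-TTA's exact buffer management may differ:

```python
import random

class MemoryBuffer:
    """Fixed-capacity buffer of past test samples; replaying a few of them next
    to the current batch damps abrupt domain swings during self-training."""
    def __init__(self, capacity=256):
        self.data, self.capacity, self.seen = [], capacity, 0
    def add(self, sample):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(sample)
        else:
            j = random.randrange(self.seen)  # reservoir sampling: uniform over
            if j < self.capacity:            # everything seen so far
                self.data[j] = sample
    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))
```

- Better Practices for Domain Adaptation [62.70267990659201]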
Domain adaptation (DA) aims to provide frameworks for adapting models to deployment data without using labels.
The absence of a clear validation protocol for DA has led to bad practices in the literature.
We show challenges across all three branches of domain adaptation methodology.
arXiv Detail & Related papers (2023-09-07T17:44:18Z)
- DomainAdaptor: A Novel Approach to Test-time Adaptation [33.770970763959355]
DomainAdaptor aims to adapt a trained CNN model to unseen domains at test time.
AdaMixBN addresses domain shift by adaptively fusing training and test statistics in the normalization layers (sketched after this entry).
Experiments show that DomainAdaptor consistently outperforms the state-of-the-art methods on four benchmarks.
arXiv Detail & Related papers (2023-08-20T15:37:01Z)
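The statistic-fusion idea is concrete enough for a short sketch. A fixed mixing weight `alpha` is assumed here; the paper derives it adaptively:

```python
import torch
import torch.nn as nn

def adamix_bn(x, bn: nn.BatchNorm2d, alpha=0.5):
    """Normalize with a convex mix of stored training statistics and the
    current test batch's statistics (simplified reading of AdaMixBN)."""
    mu_t = x.mean(dim=(0, 2, 3))
    var_t = x.var(dim=(0, 2, 3), unbiased=False)
    mu = alpha * bn.running_mean + (1 - alpha) * mu_t
    var = alpha * bn.running_var + (1 - alpha) * var_t
    x_hat = (x - mu[None, :, None, None]) / torch.sqrt(var[None, :, None, None] + bn.eps)
    return x_hat * bn.weight[None, :, None, None] + bn.bias[None, :, None, None]
```

With alpha = 1 this reduces to standard inference-mode BN; with alpha = 0 it is pure test-batch normalization, as in TENT-style methods.

- TeST: Test-time Self-Training under Distribution Shift [99.68465267994783]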
Test-Time Self-Training (TeST) is a technique that takes as input a model trained on some source data and a novel data distribution at test time.
We find that models adapted using TeST significantly improve over baseline test-time adaptation algorithms.
arXiv Detail & Related papers (2022-09-23T07:47:33Z)
- CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address distribution shift by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed Class-Aware Feature Alignment (CAFA), which encourages a model to learn target representations in a class-discriminative manner (a simplified sketch follows this entry).
arXiv Detail & Related papers (2022-06-01T03:02:07Z)
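A simplified sketch of a class-aware alignment loss, assuming stored per-class source means, a single shared precision matrix, and pseudo-labels from the model's own predictions:

```python
import torch

def cafa_style_loss(feats, pseudo_labels, src_means, src_prec):
    """Mahalanobis-style pull of each test feature toward the source statistics
    of its (pseudo-)class. feats: (N, D); src_means: (C, D); src_prec: (D, D)."""
    diff = feats - src_means[pseudo_labels]
    maha = torch.einsum('nd,de,ne->n', diff, src_prec, diff)
    return maha.mean()
```

- Adaptive Risk Minimization: Learning to Adapt to Domain Shift [109.87561509436016]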
A fundamental assumption of most machine learning algorithms is that the training and test data are drawn from the same underlying distribution.
In this work, we consider the problem setting of domain generalization, where the training data are structured into domains and there may be multiple test time shifts.
We introduce the framework of adaptive risk minimization (ARM), in which models are directly optimized for effective adaptation to shift by learning to adapt on the training domains (a minimal sketch follows).
arXiv Detail & Related papers (2020-07-06T17:59:30Z)
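The "learning to adapt" idea can be made concrete with ARM's contextual variant, simplified here: a context network summarizes the unlabeled batch and the classifier conditions on that summary, so adapting to a new domain is a single forward pass over its batch:

```python
import torch
import torch.nn as nn

class ARMCML(nn.Module):
    """Simplified contextual ARM: a per-batch domain summary as extra input."""
    def __init__(self, in_dim, ctx_dim, n_classes):
        super().__init__()
        self.context_net = nn.Linear(in_dim, ctx_dim)
        self.classifier = nn.Linear(in_dim + ctx_dim, n_classes)
    def forward(self, x):                                # x: (B, in_dim), one domain per batch
        ctx = self.context_net(x).mean(0, keepdim=True)  # batch-level context
        ctx = ctx.expand(x.size(0), -1)
        return self.classifier(torch.cat([x, ctx], dim=-1))
```

Meta-training minimizes the post-context classification loss on batches drawn domain by domain, which is what "directly optimized for effective adaptation" means above.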
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.