Weakly Supervised Test-Time Domain Adaptation for Object Detection
- URL: http://arxiv.org/abs/2407.05607v1
- Date: Mon, 8 Jul 2024 04:44:42 GMT
- Title: Weakly Supervised Test-Time Domain Adaptation for Object Detection
- Authors: Anh-Dzung Doan, Bach Long Nguyen, Terry Lim, Madhuka Jayawardhana, Surabhi Gupta, Christophe Guettier, Ian Reid, Markus Wagner, Tat-Jun Chin
- Abstract summary: In some applications such as surveillance, there is usually a human operator overseeing the system's operation.
We propose to involve the operator in test-time domain adaptation to raise the performance of object detection beyond what is achievable by fully automated adaptation.
We show that the proposed method outperforms existing works, demonstrating a great benefit of human-in-the-loop test-time domain adaptation.
- Score: 23.89166024655107
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Prior to deployment, an object detector is trained on a dataset compiled from a previous data collection campaign. However, the environment in which the object detector is deployed will invariably evolve, particularly in outdoor settings where changes in lighting, weather and seasons will significantly affect the appearance of the scene and target objects. It is almost impossible for all potential scenarios that the object detector may come across to be present in a finite training dataset. This necessitates continuous updates to the object detector to maintain satisfactory performance. Test-time domain adaptation techniques enable machine learning models to self-adapt based on the distributions of the testing data. However, existing methods mainly focus on fully automated adaptation, which makes sense for applications such as self-driving cars. Despite the prevalence of fully automated approaches, in some applications such as surveillance, there is usually a human operator overseeing the system's operation. We propose to involve the operator in test-time domain adaptation to raise the performance of object detection beyond what is achievable by fully automated adaptation. To reduce manual effort, the proposed method only requires the operator to provide weak labels, which are then used to guide the adaptation process. Furthermore, the proposed method can be performed in a streaming setting, where each online sample is observed only once. We show that the proposed method outperforms existing works, demonstrating a great benefit of human-in-the-loop test-time domain adaptation. Our code is publicly available at https://github.com/dzungdoan6/WSTTA
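The abstract describes a streaming loop in which each online sample is seen once and an operator's weak labels guide adaptation. The sketch below illustrates that idea under strong simplifying assumptions: weak labels are taken to be image-level class-presence sets (one plausible form of weak supervision), and "adaptation" is reduced to nudging per-class detection thresholds online. The detector stub and update rule are illustrative stand-ins, not the authors' implementation.

```python
def adapt_stream(frames, weak_labels, thresholds, lr=0.1):
    """Human-in-the-loop streaming adaptation sketch (hypothetical).

    frames      : iterable of {class_name: detector_score} dicts, one per image
    weak_labels : iterable of sets naming the classes the operator says are present
    thresholds  : mutable {class_name: detection_threshold} adapted online
    Each online sample is processed exactly once, mirroring the streaming setting.
    """
    for scores, present in zip(frames, weak_labels):
        for cls, score in scores.items():
            predicted = score >= thresholds[cls]
            actually_present = cls in present
            if predicted and not actually_present:
                # False alarm per the weak label: raise the bar toward the score.
                thresholds[cls] += lr * (score - thresholds[cls] + 1e-3)
            elif actually_present and not predicted:
                # Miss per the weak label: lower the bar toward the score.
                thresholds[cls] -= lr * (thresholds[cls] - score + 1e-3)
    return thresholds
```

The point of the sketch is the control flow, not the update rule: the operator supplies only cheap image-level feedback, and the model state is revised immediately as each frame streams past, never revisiting earlier samples.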
Related papers
- DARTH: Holistic Test-time Adaptation for Multiple Object Tracking [87.72019733473562]
Multiple object tracking (MOT) is a fundamental component of perception systems for autonomous driving.
Despite the urge of safety in driving systems, no solution to the MOT adaptation problem to domain shift in test-time conditions has ever been proposed.
We introduce DARTH, a holistic test-time adaptation framework for MOT.
arXiv Detail & Related papers (2023-10-03T10:10:42Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- STFAR: Improving Object Detection Robustness at Test-Time by Self-Training with Feature Alignment Regularization [35.16122933158808]
Domain adaptation helps generalize object detection models to target domain data with distribution shift.
We explore adapting an object detection model at test time, a.k.a. test-time adaptive object detection (TTAOD).
Our proposed method sets the state-of-the-art on test-time adaptive object detection task.
arXiv Detail & Related papers (2023-03-31T10:04:44Z)
- Unsupervised Adaptation from Repeated Traversals for Autonomous Driving [54.59577283226982]
Self-driving cars must generalize to the end-user's environment to operate reliably.
One potential solution is to leverage unlabeled data collected from the end-users' environments.
There is no reliable signal in the target domain to supervise the adaptation process.
We show that this simple additional assumption is sufficient to obtain a potent signal that allows us to perform iterative self-training of 3D object detectors on the target domain.
arXiv Detail & Related papers (2023-03-27T15:07:55Z)
- Domain Adaptive Object Detection for Autonomous Driving under Foggy Weather [25.964194141706923]
This paper proposes a novel domain adaptive object detection framework for autonomous driving under foggy weather.
Our method leverages both image-level and object-level adaptation to diminish the domain discrepancy in image style and object appearance.
Experimental results on public benchmarks show the effectiveness and accuracy of the proposed method.
arXiv Detail & Related papers (2022-10-27T05:09:10Z)
- Interactron: Embodied Adaptive Object Detection [18.644357684104662]
We propose Interactron, a method for adaptive object detection in an interactive setting.
Our idea is to continue training during inference and adapt the model at test time without any explicit supervision via interacting with the environment.
arXiv Detail & Related papers (2022-02-01T18:56:14Z)
- Self-Supervision & Meta-Learning for One-Shot Unsupervised Cross-Domain Detection [0.0]
We present an object detection algorithm able to perform unsupervised adaptation across domains by using only one target sample, seen at test time.
We exploit meta-learning to simulate single-sample cross-domain learning episodes and better align to the test condition.
arXiv Detail & Related papers (2021-06-07T10:33:04Z)
- Cycle and Semantic Consistent Adversarial Domain Adaptation for Reducing Simulation-to-Real Domain Shift in LiDAR Bird's Eye View [110.83289076967895]
We present a BEV domain adaptation method based on CycleGAN that uses prior semantic classification in order to preserve the information of small objects of interest during the domain adaptation process.
The quality of the generated BEVs has been evaluated using a state-of-the-art 3D object detection framework on the KITTI 3D Object Detection Benchmark.
arXiv Detail & Related papers (2021-04-22T12:47:37Z)
- Robust Object Detection via Instance-Level Temporal Cycle Confusion [89.1027433760578]
We study the effectiveness of auxiliary self-supervised tasks to improve the out-of-distribution generalization of object detectors.
Inspired by the principle of maximum entropy, we introduce a novel self-supervised task, instance-level temporal cycle confusion (CycConf).
For each object, the task is to find the most different object proposals in the adjacent frame in a video and then cycle back to itself for self-supervision.
arXiv Detail & Related papers (2021-04-16T21:35:08Z)
- Multi-Target Domain Adaptation via Unsupervised Domain Classification for Weather Invariant Object Detection [1.773576418078547]
The performance of an object detector significantly degrades if the weather of the training images is different from that of test images.
We propose a novel unsupervised domain classification method which can be used to generalize single-target domain adaptation methods to multi-target domains.
We conduct experiments on the Cityscapes dataset and its synthetic variants, i.e., foggy, rainy, and night.
arXiv Detail & Related papers (2021-03-25T16:59:35Z)
- Unsupervised Domain Adaptation for Spatio-Temporal Action Localization [69.12982544509427]
Spatio-temporal action localization is an important problem in computer vision.
We propose an end-to-end unsupervised domain adaptation algorithm.
We show that significant performance gain can be achieved when spatial and temporal features are adapted separately or jointly.
arXiv Detail & Related papers (2020-10-19T04:25:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.