TTA-COPE: Test-Time Adaptation for Category-Level Object Pose Estimation
- URL: http://arxiv.org/abs/2303.16730v1
- Date: Wed, 29 Mar 2023 14:34:54 GMT
- Title: TTA-COPE: Test-Time Adaptation for Category-Level Object Pose Estimation
- Authors: Taeyeop Lee, Jonathan Tremblay, Valts Blukis, Bowen Wen, Byeong-Uk
Lee, Inkyu Shin, Stan Birchfield, In So Kweon, Kuk-Jin Yoon
- Abstract summary: We propose a method of test-time adaptation for category-level object pose estimation called TTA-COPE.
We design a pose ensemble approach with a self-training loss using pose-aware confidence.
Our approach processes the test data in a sequential, online manner, and it does not require access to the source domain at runtime.
- Score: 86.80589902825196
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Test-time adaptation methods have been gaining attention recently as a
practical solution for addressing source-to-target domain gaps by gradually
updating the model without requiring labels on the target data. In this paper,
we propose a method of test-time adaptation for category-level object pose
estimation called TTA-COPE. We design a pose ensemble approach with a
self-training loss using pose-aware confidence. Unlike previous unsupervised
domain adaptation methods for category-level object pose estimation, our
approach processes the test data in a sequential, online manner, and it does
not require access to the source domain at runtime. Extensive experimental
results demonstrate that the proposed pose ensemble and the self-training loss
improve category-level object pose performance during test time under both
semi-supervised and unsupervised settings. Project page:
https://taeyeop.com/ttacope
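The abstract describes an online, source-free self-training loop built around a pose ensemble and pose-aware confidence. The sketch below illustrates that general recipe under simplifying assumptions: `PoseNet`, the 7-D pose output (quaternion + translation), the confidence heads, and the ensemble rule are hypothetical stand-ins, not the authors' exact design.

```python
# Minimal sketch of an online, source-free self-training loop in the spirit
# of the abstract. All architectural details here are hypothetical stand-ins.
import copy
import torch
import torch.nn as nn

class PoseNet(nn.Module):
    """Toy stand-in mapping a point cloud (B, N, 3) to a pose and a confidence."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 64))
        self.pose_head = nn.Linear(64, 7)   # quaternion + translation (assumed)
        self.conf_head = nn.Linear(64, 1)

    def forward(self, pts):
        feat = self.backbone(pts).mean(dim=1)          # global feature, (B, 64)
        return self.pose_head(feat), torch.sigmoid(self.conf_head(feat))

student = PoseNet()                                    # adapted online
teacher = copy.deepcopy(student)                       # slow copy; no source data at runtime
for p in teacher.parameters():
    p.requires_grad_(False)
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

def tta_step(pts, ema=0.999, conf_thresh=0.5):
    """One sequential test-time step: ensemble pseudo-label, then self-train."""
    with torch.no_grad():
        t_pose, t_conf = teacher(pts)
    s_pose, s_conf = student(pts)
    with torch.no_grad():
        # Pose "ensemble": per sample, keep the more confident branch's pose.
        use_teacher = (t_conf > s_conf).float()
        pseudo = use_teacher * t_pose + (1 - use_teacher) * s_pose
        weight = torch.maximum(t_conf, s_conf)
        weight = weight * (weight > conf_thresh).float()   # pose-aware gating
    loss = (weight * (s_pose - pseudo).pow(2).mean(dim=1, keepdim=True)).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():                              # EMA keeps the teacher stable
        for tp, sp in zip(teacher.parameters(), student.parameters()):
            tp.mul_(ema).add_(sp, alpha=1 - ema)
    return pseudo, loss.item()

# Usage on a stream of unlabeled test point clouds:
for _ in range(3):
    pose, loss = tta_step(torch.randn(4, 256, 3))      # stand-in test batch
```

The EMA teacher keeps pseudo-labels stable while the student follows the stream, which is the usual motivation for ensembling a slowly updated copy with the adapting model.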
Related papers
- Adaptive Cascading Network for Continual Test-Time Adaptation [12.718826132518577]
We study the problem of continual test-time adaptation, where the goal is to adapt a source pre-trained model to a sequence of unlabelled target domains at test time.
Existing test-time training methods suffer from several limitations.
arXiv Detail & Related papers (2024-07-17T01:12:57Z)
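The entry above adapts a source pre-trained model to a stream of unlabeled target domains. A common baseline in this setting is TENT-style entropy minimization over normalization parameters; the sketch below shows that baseline (not the paper's cascading network) applied across a sequence of shifted domains.

```python
# TENT-style continual test-time adaptation baseline: minimize prediction
# entropy, updating only BatchNorm affine parameters across a stream of
# unlabeled target domains. Illustrative, not the paper's cascading method.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.BatchNorm1d(64), nn.ReLU(),
                      nn.Linear(64, 10))

# Freeze everything, then re-enable only normalization affine parameters.
for p in model.parameters():
    p.requires_grad_(False)
adapt_params = []
for m in model.modules():
    if isinstance(m, nn.BatchNorm1d):
        m.train()                                 # normalize with current batch
        m.weight.requires_grad_(True)
        m.bias.requires_grad_(True)
        adapt_params += [m.weight, m.bias]
opt = torch.optim.SGD(adapt_params, lr=1e-3)

def adapt_batch(x):
    logits = model(x)
    probs = logits.softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    opt.zero_grad(); entropy.backward(); opt.step()
    return logits.detach()

# The target domains arrive sequentially, without labels.
for shift in (0.0, 0.5, 1.0):                     # stand-in for changing domains
    for _ in range(2):
        preds = adapt_batch(torch.randn(16, 32) + shift)
```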
- What, How, and When Should Object Detectors Update in Continually Changing Test Domains? [34.13756022890991]
Test-time adaptation algorithms have been proposed to adapt the model online while running inference on test data.
We propose a novel online adaptation approach for object detection in continually changing test domains.
Our approach surpasses baselines on widely used benchmarks, achieving improvements of up to 4.9%p and 7.9%p in mAP.
arXiv Detail & Related papers (2023-12-12T07:13:08Z)
- GenPose: Generative Category-level Object Pose Estimation via Diffusion Models [5.1998359768382905]
We propose a novel solution by reframing category-level object pose estimation as conditional generative modeling.
Our approach achieves state-of-the-art performance on the REAL275 dataset, surpassing 50% and 60% on the strict 5°2cm and 5°5cm metrics.
arXiv Detail & Related papers (2023-06-18T11:45:42Z)
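GenPose's summary frames pose estimation as conditional generative modeling with diffusion models. As a rough illustration of that framing, the sketch below draws pose candidates from a conditional score network via Langevin updates and aggregates them; the untrained network, the sampler, the 7-D pose parameterization, and the naive aggregation are all simplified assumptions, not GenPose's actual pipeline.

```python
# Sketch of "pose estimation as conditional generation": sample pose candidates
# from a conditional score model via Langevin dynamics, then aggregate.
import torch
import torch.nn as nn

class ScoreNet(nn.Module):
    """Predicts the score (gradient of log-density) of a 7-D pose given an
    observation embedding; a real model would be trained via score matching."""
    def __init__(self, obs_dim=64, pose_dim=7):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim + pose_dim, 128), nn.ReLU(),
                                 nn.Linear(128, pose_dim))
    def forward(self, pose, obs):
        return self.net(torch.cat([pose, obs], dim=1))

@torch.no_grad()
def sample_poses(score_net, obs, n_candidates=32, steps=50, step_size=1e-2):
    obs = obs.expand(n_candidates, -1)
    pose = torch.randn(n_candidates, 7)           # start from noise
    for _ in range(steps):                        # unadjusted Langevin updates
        noise = torch.randn_like(pose)
        pose = pose + step_size * score_net(pose, obs) \
                    + (2 * step_size) ** 0.5 * noise
    return pose

score_net = ScoreNet()                            # untrained, illustrative only
obs_embedding = torch.randn(1, 64)                # stand-in for point-cloud features
candidates = sample_poses(score_net, obs_embedding)
# Naive aggregation: mean translation; a real system would average rotations
# properly (e.g., on SO(3)) and can rank candidates with a learned energy.
translation = candidates[:, 4:].mean(dim=0)
```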
- PoseMatcher: One-shot 6D Object Pose Estimation by Deep Feature Matching [51.142988196855484]
We propose PoseMatcher, an accurate, model-free, one-shot object pose estimator.
We create a new training pipeline for object-to-image matching based on a three-view system.
To enable PoseMatcher to attend to distinct input modalities, an image and a point cloud, we introduce IO-Layer.
arXiv Detail & Related papers (2023-04-03T21:14:59Z)
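The PoseMatcher summary mentions an IO-Layer that lets the model attend across an image and a point cloud, without detailing its design. One plausible reading is a bidirectional cross-attention block between the two token sets; the sketch below shows that generic construction, purely as an assumption about what such a layer could look like.

```python
# Generic bidirectional cross-attention between image tokens and point tokens.
# This is one plausible reading of an "IO-Layer", not its documented design.
import torch
import torch.nn as nn

class CrossModalLayer(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.img_to_pts = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.pts_to_img = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_img = nn.LayerNorm(dim)
        self.norm_pts = nn.LayerNorm(dim)

    def forward(self, img_tokens, pts_tokens):
        # Image tokens query the point cloud, and vice versa.
        img_attn, _ = self.pts_to_img(img_tokens, pts_tokens, pts_tokens)
        pts_attn, _ = self.img_to_pts(pts_tokens, img_tokens, img_tokens)
        return (self.norm_img(img_tokens + img_attn),
                self.norm_pts(pts_tokens + pts_attn))

layer = CrossModalLayer()
img_tokens = torch.randn(1, 196, 128)   # e.g., 14x14 patch features
pts_tokens = torch.randn(1, 512, 128)   # per-point features
img_out, pts_out = layer(img_tokens, pts_tokens)
```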
- STFAR: Improving Object Detection Robustness at Test-Time by Self-Training with Feature Alignment Regularization [35.16122933158808]
Domain adaptation helps generalize object detection models to target-domain data with distribution shift.
We explore adapting an object detection model at test time, a.k.a. test-time adaptive object detection (TTAOD).
Our proposed method sets the state of the art for test-time adaptive object detection.
arXiv Detail & Related papers (2023-03-31T10:04:44Z)
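STFAR's summary pairs self-training with feature alignment regularization. A minimal version of the alignment idea, keeping target feature statistics close to source statistics collected offline, is sketched below; which layers and which statistics to align are assumptions here, and the self-training branch is omitted.

```python
# Sketch of feature alignment regularization: keep target backbone features
# statistically close to source-domain statistics collected before deployment.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))

# Source feature statistics, computed once before deployment.
with torch.no_grad():
    src_feats = backbone(torch.randn(1024, 32))   # stand-in for source data
    src_mean, src_var = src_feats.mean(0), src_feats.var(0)

opt = torch.optim.Adam(backbone.parameters(), lr=1e-4)

def alignment_loss(target_feats):
    """Penalize drift of target feature mean/variance from source statistics."""
    t_mean, t_var = target_feats.mean(0), target_feats.var(0)
    return (t_mean - src_mean).pow(2).mean() + (t_var - src_var).pow(2).mean()

# At test time, combine with a self-training loss on pseudo-labels (omitted).
x_target = torch.randn(16, 32) + 0.5              # shifted test batch
loss = alignment_loss(backbone(x_target))
opt.zero_grad(); loss.backward(); opt.step()
```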
- CATRE: Iterative Point Clouds Alignment for Category-level Object Pose Refinement [52.41884119329864]
CATRE, a category-level object pose and size refiner, iteratively enhances pose estimates from point clouds to produce accurate results.
Our approach remarkably outperforms state-of-the-art methods on the REAL275, CAMERA25, and LM benchmarks while running at up to 85.32 Hz.
arXiv Detail & Related papers (2022-07-17T05:55:00Z)
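CATRE's summary describes iterative refinement of a pose estimate from point clouds. The sketch below shows the generic iterate-and-correct loop: a network predicts a small pose correction from the observed points and the currently transformed model points. The delta network, the axis-angle parameterization, and the additive update are illustrative simplifications, not CATRE's architecture.

```python
# Sketch of iterative pose refinement from point clouds: predict a small pose
# correction each step and compose it with the current estimate.
import torch
import torch.nn as nn

class DeltaPoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 64))
        self.head = nn.Linear(64, 6)   # 3 rotation (axis-angle) + 3 translation

    def forward(self, obs_pts, model_pts):
        f = self.net(obs_pts).mean(1) - self.net(model_pts).mean(1)
        return self.head(f)

def refine(net, obs_pts, model_pts, pose, iters=4):
    """pose: (B, 6) axis-angle + translation, composed additively for brevity."""
    for _ in range(iters):
        # Transform model points by the current estimate (translation only here
        # to keep the sketch short; a real refiner applies the rotation too).
        transformed = model_pts + pose[:, 3:].unsqueeze(1)
        delta = net(obs_pts, transformed)
        pose = pose + delta                       # apply the predicted correction
    return pose

net = DeltaPoseNet()
obs = torch.randn(2, 256, 3)                      # observed point cloud
cad = torch.randn(2, 256, 3)                      # category shape prior points
init = torch.zeros(2, 6)                          # coarse pose from an estimator
refined = refine(net, obs, cad, init)
```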
- CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address distribution shift by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed Class-Aware Feature Alignment (CAFA), which encourages a model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z)
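CAFA's summary describes aligning features in a class-discriminative manner. A minimal version pulls each test feature toward a per-class source prototype selected by its pseudo-label, rather than matching one global distribution; the sketch below simplifies the distance measure to squared Euclidean distance and leaves how prototypes are estimated as an assumption.

```python
# Sketch of class-aware feature alignment: move each test feature toward the
# source prototype of its pseudo-label class, keeping alignment discriminative.
import torch
import torch.nn as nn

num_classes, dim = 10, 64
encoder = nn.Sequential(nn.Linear(32, dim), nn.ReLU(), nn.Linear(dim, dim))
classifier = nn.Linear(dim, num_classes)

# Per-class source feature means, computed offline with source labels.
prototypes = torch.randn(num_classes, dim)        # stand-in values

opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)

def class_aware_alignment_loss(x):
    feats = encoder(x)                            # (B, dim)
    pseudo = classifier(feats).argmax(dim=1)      # pseudo-labels at test time
    target_proto = prototypes[pseudo]             # (B, dim)
    return (feats - target_proto).pow(2).sum(dim=1).mean()

x_test = torch.randn(16, 32)                      # unlabeled test batch
loss = class_aware_alignment_loss(x_test)
opt.zero_grad(); loss.backward(); opt.step()
```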
- Interactron: Embodied Adaptive Object Detection [18.644357684104662]
We propose Interactron, a method for adaptive object detection in an interactive setting.
Our idea is to continue training during inference and adapt the model at test time, without any explicit supervision, by interacting with the environment.
arXiv Detail & Related papers (2022-02-01T18:56:14Z)
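Interactron's summary describes continuing training during inference, without explicit supervision, by interacting with the environment. The sketch below caricatures that loop with a stub environment and a small auxiliary head standing in for a learned adaptation loss (its meta-training is omitted); none of these components are the paper's actual architecture.

```python
# Sketch of "keep training while acting": gradient steps on a self-supervised
# surrogate loss computed from observations gathered through interaction.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(128, 64), nn.ReLU())   # shared features
det_head = nn.Linear(64, 5 * 4)                  # toy box regressor (5 boxes)
loss_head = nn.Linear(64, 1)                     # stand-in for a learned loss
opt = torch.optim.SGD(backbone.parameters(), lr=1e-3)

def interact(obs):
    """Stub: an embodied agent would move and receive new observations."""
    return obs + 0.1 * torch.randn_like(obs)

obs = torch.randn(1, 128)                        # initial observation features
for _ in range(5):                               # adapt while acting, no labels
    obs = interact(obs)
    feats = backbone(obs)
    adapt_loss = loss_head(feats).pow(2).mean()  # self-supervised surrogate loss
    opt.zero_grad(); adapt_loss.backward(); opt.step()
boxes = det_head(backbone(obs))                  # detect with adapted features
```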
- UDA-COPE: Unsupervised Domain Adaptation for Category-level Object Pose Estimation [84.16372642822495]
We propose an unsupervised domain adaptation (UDA) method for category-level object pose estimation, called UDA-COPE.
Inspired by recent multi-modal UDA techniques, the proposed method exploits a teacher-student self-supervised learning scheme to train a pose estimation network without using target domain labels.
arXiv Detail & Related papers (2021-11-24T16:00:48Z)
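UDA-COPE's summary describes a teacher-student self-supervised scheme that trains without target labels. A minimal single-modality version is sketched below: the teacher pseudo-labels a weakly augmented view, the student learns from a strongly augmented one, and the teacher tracks the student by EMA. The RGB-D specifics and pseudo-label filtering of UDA-COPE are omitted.

```python
# Sketch of teacher-student pseudo-labeling on unlabeled target data.
import copy
import torch
import torch.nn as nn

student = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 7))
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

def ema_update(ema=0.99):
    with torch.no_grad():
        for tp, sp in zip(teacher.parameters(), student.parameters()):
            tp.mul_(ema).add_(sp, alpha=1 - ema)

for _ in range(10):                               # unlabeled target batches
    pts = torch.randn(16, 3)
    weak = pts + 0.01 * torch.randn_like(pts)     # weak augmentation
    strong = pts + 0.1 * torch.randn_like(pts)    # strong augmentation
    with torch.no_grad():
        pseudo_pose = teacher(weak)               # pseudo pose labels
    loss = (student(strong) - pseudo_pose).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    ema_update()                                  # teacher slowly follows student
```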
- Unsupervised Domain Adaptation for Spatio-Temporal Action Localization [69.12982544509427]
Spatio-temporal action localization is an important problem in computer vision.
We propose an end-to-end unsupervised domain adaptation algorithm.
We show that significant performance gains can be achieved when spatial and temporal features are adapted separately or jointly.
arXiv Detail & Related papers (2020-10-19T04:25:10Z)
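The last entry adapts spatial and temporal features separately or jointly. A standard construction for such unsupervised alignment is a domain classifier trained through a gradient-reversal layer, applied per feature stream; the sketch below shows that generic recipe as an assumption, not necessarily the paper's exact losses.

```python
# Sketch of adversarial feature alignment with gradient reversal, applied to
# spatial and temporal feature streams separately.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x
    @staticmethod
    def backward(ctx, g):
        return -g                                 # flip gradients into the encoder

spatial_enc = nn.Linear(32, 64)
temporal_enc = nn.Linear(32, 64)
domain_clf = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam([*spatial_enc.parameters(), *temporal_enc.parameters(),
                        *domain_clf.parameters()], lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def align(src, tgt, enc):
    feats = torch.cat([enc(src), enc(tgt)])
    labels = torch.cat([torch.zeros(len(src), 1), torch.ones(len(tgt), 1)])
    # The classifier learns to tell domains apart; the reversed gradient pushes
    # the encoder to make them indistinguishable.
    return bce(domain_clf(GradReverse.apply(feats)), labels)

src_sp, tgt_sp = torch.randn(8, 32), torch.randn(8, 32) + 0.5  # spatial features
src_tm, tgt_tm = torch.randn(8, 32), torch.randn(8, 32) + 0.5  # temporal features
loss = align(src_sp, tgt_sp, spatial_enc) + align(src_tm, tgt_tm, temporal_enc)
opt.zero_grad(); loss.backward(); opt.step()
```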