A denoised Mean Teacher for domain adaptive point cloud registration
- URL: http://arxiv.org/abs/2306.14749v2
- Date: Tue, 4 Jul 2023 10:18:23 GMT
- Title: A denoised Mean Teacher for domain adaptive point cloud registration
- Authors: Alexander Bigalke, Mattias P. Heinrich
- Abstract summary: Point cloud-based medical registration promises increased computational efficiency, robustness to intensity shifts, and anonymity preservation but is limited by the inefficacy of unsupervised learning with similarity metrics.
Supervised training on synthetic deformations is an alternative but, in turn, suffers from the domain gap to the real domain.
We present a denoised teacher-student paradigm for point cloud registration, comprising two complementary denoising strategies.
- Score: 81.43344461130474
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Point cloud-based medical registration promises increased computational
efficiency, robustness to intensity shifts, and anonymity preservation but is
limited by the inefficacy of unsupervised learning with similarity metrics.
Supervised training on synthetic deformations is an alternative but, in turn,
suffers from the domain gap to the real domain. In this work, we aim to tackle
this gap through domain adaptation. Self-training with the Mean Teacher is an
established approach to this problem but is impaired by the inherent noise of
the pseudo labels from the teacher. As a remedy, we present a denoised
teacher-student paradigm for point cloud registration, comprising two
complementary denoising strategies. First, we propose to filter pseudo labels
based on the Chamfer distances of teacher and student registrations, thus
preventing detrimental supervision by the teacher. Second, we make the teacher
dynamically synthesize novel training pairs with noise-free labels by warping
its moving inputs with the predicted deformations. Evaluation is performed for
inhale-to-exhale registration of lung vessel trees on the public PVT dataset
under two domain shifts. Our method surpasses the baseline Mean Teacher by
13.5% and 62.8% on the two shifts, consistently outperforms diverse
competitors, and sets a new state-of-the-art accuracy (TRE = 2.31 mm). Code is
available at
https://github.com/multimodallearning/denoised_mt_pcd_reg.
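The two denoising strategies lend themselves to a compact illustration. The following PyTorch sketch is not the authors' implementation (see the linked repository for that); it assumes registrations are represented as per-point displacement fields, and the function names, the filtering margin, and the exact acceptance criterion are illustrative assumptions.

```python
import torch

def chamfer_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between point clouds a (N, 3) and b (M, 3)."""
    d = torch.cdist(a, b)  # (N, M) pairwise Euclidean distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def accept_pseudo_label(moving, fixed, teacher_disp, student_disp, margin=0.0):
    """Strategy 1 (sketch): keep the teacher's pseudo label only when warping
    the moving cloud with the teacher's displacement aligns it to the fixed
    cloud better than the student's own registration does, so a worse teacher
    cannot supervise a better student."""
    cd_teacher = chamfer_distance(moving + teacher_disp, fixed)
    cd_student = chamfer_distance(moving + student_disp, fixed)
    return bool(cd_teacher + margin < cd_student)

def synthesize_training_pair(moving, teacher_disp):
    """Strategy 2 (sketch): warp the moving cloud with the teacher's predicted
    displacement; the displacement between the original and the warped cloud
    is then known exactly, yielding a training pair with a noise-free label."""
    synthetic_fixed = (moving + teacher_disp).detach()
    return moving, synthetic_fixed, teacher_disp.detach()
```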
Related papers
- Dynamic Retraining-Updating Mean Teacher for Source-Free Object Detection [8.334498654271371]
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain.
This study focuses on source-free object detection (SFOD), which adapts a source-trained detector to an unlabeled target domain without using labeled source data.
arXiv Detail & Related papers (2024-07-23T14:12:57Z)
- Improving the Robustness of Distantly-Supervised Named Entity Recognition via Uncertainty-Aware Teacher Learning and Student-Student Collaborative Learning [24.733773208117363]
We propose Uncertainty-Aware Teacher Learning to reduce the number of incorrect pseudo labels in the self-training stage.
We also propose Student-Student Collaborative Learning that allows the transfer of reliable labels between two student networks.
We evaluate our proposed method on five DS-NER datasets, demonstrating that our method is superior to the state-of-the-art DS-NER methods.
arXiv Detail & Related papers (2023-11-14T09:09:58Z)
- Contrastive Mean Teacher for Domain Adaptive Object Detectors [20.06919799819326]
Mean-teacher self-training is a powerful paradigm in unsupervised domain adaptation for object detection, but it struggles with low-quality pseudo-labels.
We propose Contrastive Mean Teacher (CMT), a unified, general-purpose framework that naturally integrates mean-teacher self-training and contrastive learning to maximize beneficial learning signals.
CMT leads to new state-of-the-art target-domain performance: 51.9% mAP on Foggy Cityscapes, outperforming the previously best by 2.1% mAP.
arXiv Detail & Related papers (2023-05-04T17:55:17Z)
- Distantly-Supervised Named Entity Recognition with Adaptive Teacher Learning and Fine-grained Student Ensemble [56.705249154629264]
Self-training teacher-student frameworks have been proposed to improve the robustness of NER models.
In this paper, we propose an adaptive teacher learning scheme composed of two teacher-student networks.
A fine-grained student ensemble updates each fragment of the teacher model with a temporal moving average of the corresponding student fragment, which keeps predictions on each fragment consistent under label noise (see the EMA sketch after this list).
arXiv Detail & Related papers (2022-12-13T12:14:09Z)
- ADPS: Asymmetric Distillation Post-Segmentation for Image Anomaly Detection [75.68023968735523]
Knowledge Distillation-based Anomaly Detection (KDAD) methods rely on the teacher-student paradigm to detect and segment anomalous regions.
We propose an innovative approach called Asymmetric Distillation Post-Segmentation (ADPS).
Our ADPS employs an asymmetric distillation paradigm that feeds distinct forms of the same image to the teacher and student networks.
We show that ADPS significantly improves the Average Precision (AP) metric by 9% and 20% on the MVTec AD and KolektorSDD2 datasets, respectively.
arXiv Detail & Related papers (2022-10-19T12:04:47Z)
- Adapting the Mean Teacher for keypoint-based lung registration under geometric domain shifts [75.51482952586773]
Deep neural networks generally require plenty of labeled training data and are vulnerable to domain shifts between training and test data.
We present a novel approach to geometric domain adaptation for image registration, adapting a model from a labeled source to an unlabeled target domain.
Our method consistently improves on the baseline model by 50%/47% while even matching the accuracy of models trained on target data.
arXiv Detail & Related papers (2022-07-01T12:16:42Z)
- Target and Task specific Source-Free Domain Adaptive Image Segmentation [73.78898054277538]
We propose a two-stage approach for source-free domain adaptive image segmentation.
In the first stage, we generate target-specific pseudo labels while suppressing high-entropy regions (see the entropy-gating sketch after this list).
In the second stage, we focus on adapting the network for task-specific representation.
arXiv Detail & Related papers (2022-03-29T17:50:22Z)
- Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
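Several entries above build on Mean Teacher-style weight averaging, including the fine-grained student ensemble referenced earlier. The following is a minimal, generic sketch of such an update, not taken from any of the listed papers; the momentum value and the idea of applying it per parameter group ("fragment") are illustrative assumptions.

```python
import torch

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module,
               momentum: float = 0.999) -> None:
    """Mean Teacher update: every teacher parameter becomes a temporal moving
    average of the corresponding student parameter. A fine-grained variant
    would apply this per model fragment (e.g. per layer), each with its own
    momentum schedule."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)
```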
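For the two-stage source-free segmentation entry, the idea of suppressing high-entropy regions when generating pseudo labels can be sketched as below; the same gating idea is close in spirit to the uncertainty-aware pseudo-label filtering in the DS-NER entries. The threshold, tensor shapes, and ignore index are assumptions, not details from the paper.

```python
import torch

def entropy_gated_pseudo_labels(logits: torch.Tensor,
                                max_entropy: float = 0.5) -> torch.Tensor:
    """Turn network logits (B, C, H, W) into pseudo labels (B, H, W), marking
    pixels whose predictive entropy exceeds a threshold with the ignore index
    -1 so they contribute no supervision to the student."""
    probs = torch.softmax(logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)  # (B, H, W)
    labels = probs.argmax(dim=1)
    labels[entropy > max_entropy] = -1  # suppress high-entropy regions
    return labels
```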