Unsupervised 3D registration through optimization-guided cyclical self-training
- URL: http://arxiv.org/abs/2306.16997v2
- Date: Thu, 20 Jul 2023 07:29:03 GMT
- Title: Unsupervised 3D registration through optimization-guided cyclical self-training
- Authors: Alexander Bigalke, Lasse Hansen, Tony C. W. Mok, Mattias P. Heinrich
- Abstract summary: State-of-the-art deep learning-based registration methods employ three different learning strategies.
We propose a novel self-supervised learning paradigm for unsupervised registration, relying on self-training.
We evaluate the method for abdomen and lung registration, consistently surpassing metric-based supervision and outperforming diverse state-of-the-art competitors.
- Score: 71.75057371518093
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: State-of-the-art deep learning-based registration methods employ three
different learning strategies: supervised learning, which requires costly manual
annotations; unsupervised learning, which heavily relies on hand-crafted similarity
metrics designed by domain experts; or learning from synthetic data, which
introduces a domain shift. To overcome the limitations of these
strategies, we propose a novel self-supervised learning paradigm for
unsupervised registration, relying on self-training. Our idea is based on two
key insights. Feature-based differentiable optimizers 1) perform reasonable
registration even from random features and 2) stabilize the training of the
preceding feature extraction network on noisy labels. Consequently, we propose
cyclical self-training, where pseudo labels are initialized as the displacement
fields inferred from random features and cyclically updated based on more and
more expressive features from the learning feature extractor, yielding a
self-reinforcement effect. We evaluate the method for abdomen and lung
registration, consistently surpassing metric-based supervision and
outperforming diverse state-of-the-art competitors. Source code is available at
https://github.com/multimodallearning/reg-cyclical-self-train.
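Since the abstract describes the training loop only in prose, the following is a minimal, self-contained sketch of the cyclical self-training idea in PyTorch. It is an illustration under simplifying assumptions: FeatureNet, the soft-argmin cost-volume "differentiable_optimizer", the MSE pseudo-label loss, and all hyperparameters are hypothetical stand-ins rather than the components used in the paper; the authors' actual code is in the repository linked above.
```python
# Hedged sketch of cyclical self-training for unsupervised 3D registration.
# All components below are simplified illustrations, not the paper's modules.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureNet(nn.Module):
    """Small 3D CNN producing dense features for fixed/moving volumes."""

    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


def differentiable_optimizer(feat_fixed, feat_moving, search=1, temp=10.0):
    """Toy feature-based registration step: build a local cost volume over
    integer displacements in [-search, search]^3 and take a soft-argmin,
    which keeps the result differentiable w.r.t. the features.
    Returns a displacement field of shape (B, 3, D, H, W) in voxels."""
    costs, offsets = [], []
    for dz in range(-search, search + 1):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                shifted = torch.roll(feat_moving, shifts=(dz, dy, dx), dims=(2, 3, 4))
                costs.append(((feat_fixed - shifted) ** 2).mean(dim=1))  # (B, D, H, W)
                offsets.append([dz, dy, dx])
    costs = torch.stack(costs, dim=1)                                    # (B, K, D, H, W)
    offsets = torch.tensor(offsets, dtype=feat_fixed.dtype,
                           device=feat_fixed.device)                     # (K, 3)
    weights = torch.softmax(-temp * costs, dim=1)                        # soft-argmin
    return torch.einsum('bkdhw,kc->bcdhw', weights, offsets)


def cyclical_self_training(pairs, cycles=3, epochs_per_cycle=10, lr=1e-3):
    """Pseudo-label displacement fields start from the *random* (untrained)
    feature extractor and are refreshed after every cycle from the improving
    features, giving the self-reinforcement effect described in the abstract."""
    net = FeatureNet()
    opt = torch.optim.Adam(net.parameters(), lr=lr)

    def predict(fixed, moving):
        return differentiable_optimizer(net(fixed), net(moving))

    # 1) initialise pseudo labels from random features
    with torch.no_grad():
        pseudo = [predict(f, m) for f, m in pairs]

    for _ in range(cycles):
        # 2) train the feature extractor against the current pseudo labels
        for _ in range(epochs_per_cycle):
            for (fixed, moving), target in zip(pairs, pseudo):
                loss = F.mse_loss(predict(fixed, moving), target)
                opt.zero_grad()
                loss.backward()
                opt.step()
        # 3) refresh the pseudo labels with the now more expressive features
        with torch.no_grad():
            pseudo = [predict(f, m) for f, m in pairs]
    return net


if __name__ == "__main__":
    # toy usage: two random 16^3 volume pairs, two short cycles
    pairs = [(torch.randn(1, 1, 16, 16, 16), torch.randn(1, 1, 16, 16, 16))
             for _ in range(2)]
    cyclical_self_training(pairs, cycles=2, epochs_per_cycle=1)
```
The point the sketch tries to capture is that the pseudo labels are produced by the same feature-plus-optimizer pipeline that is being trained: they start from random features and are refreshed each cycle as the features become more expressive.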
Related papers
- Unified Speech Recognition: A Single Model for Auditory, Visual, and Audiovisual Inputs [73.74375912785689]
This paper proposes unified training strategies for speech recognition systems.
We demonstrate that training a single model for all three tasks enhances VSR and AVSR performance.
We also introduce a greedy pseudo-labelling approach to more effectively leverage unlabelled samples.
arXiv Detail & Related papers (2024-11-04T16:46:53Z) - Incremental Self-training for Semi-supervised Learning [56.57057576885672]
IST is simple yet effective and integrates with existing self-training-based semi-supervised learning methods.
We verify the proposed IST on five datasets and two types of backbone, effectively improving the recognition accuracy and learning speed.
arXiv Detail & Related papers (2024-04-14T05:02:00Z) - A Study of Forward-Forward Algorithm for Self-Supervised Learning [65.268245109828]
We study the performance of forward-forward vs. backpropagation for self-supervised representation learning.
Our main finding is that while the forward-forward algorithm performs comparably to backpropagation during (self-supervised) training, its transfer performance lags significantly behind in all the studied settings.
arXiv Detail & Related papers (2023-09-21T10:14:53Z) - Self-supervised Auxiliary Loss for Metric Learning in Music Similarity-based Retrieval and Auto-tagging [0.0]
We propose a model that builds on the self-supervised learning approach to address the similarity-based retrieval challenge.
We also found that refraining from employing augmentation during the fine-tuning phase yields better results.
arXiv Detail & Related papers (2023-04-15T02:00:28Z) - Toward Open-domain Slot Filling via Self-supervised Co-training [2.7178968279054936]
Slot filling is one of the critical tasks in modern conversational systems.
We propose a Self-supervised Co-training framework, called SCot, that requires zero in-domain manually labeled training examples.
Our evaluations show that SCot outperforms state-of-the-art models by 45.57% and 37.56% on the SGD and MultiWoZ datasets, respectively.
arXiv Detail & Related papers (2023-03-24T04:51:22Z) - Active Self-Training for Weakly Supervised 3D Scene Semantic Segmentation [17.27850877649498]
We introduce a method for weakly supervised segmentation of 3D scenes that combines self-training and active learning.
We demonstrate that our approach improves scene segmentation over previous works and baselines.
arXiv Detail & Related papers (2022-09-15T06:00:25Z) - Distantly-Supervised Named Entity Recognition with Noise-Robust Learning and Language Model Augmented Self-Training [66.80558875393565]
We study the problem of training named entity recognition (NER) models using only distantly-labeled data.
We propose a noise-robust learning scheme comprised of a new loss function and a noisy label removal step.
Our method achieves superior performance, outperforming existing distantly-supervised NER models by significant margins.
arXiv Detail & Related papers (2021-09-10T17:19:56Z) - Few-Shot Named Entity Recognition: A Comprehensive Study [92.40991050806544]
We investigate three schemes to improve the model generalization ability for few-shot settings.
We perform empirical comparisons on 10 public NER datasets with various proportions of labeled data.
We achieve new state-of-the-art results in both few-shot and training-free settings.
arXiv Detail & Related papers (2020-12-29T23:43:16Z) - Episodic Self-Imitation Learning with Hindsight [7.743320290728377]
Episodic self-imitation learning is a novel self-imitation algorithm with a trajectory selection module and an adaptive loss function.
A selection module is introduced to filter out uninformative samples from each episode during the update.
Episodic self-imitation learning has the potential to be applied to real-world problems that have continuous action spaces.
arXiv Detail & Related papers (2020-11-26T20:36:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.