Double-Uncertainty Assisted Spatial and Temporal Regularization
Weighting for Learning-based Registration
- URL: http://arxiv.org/abs/2107.02433v1
- Date: Tue, 6 Jul 2021 07:19:49 GMT
- Title: Double-Uncertainty Assisted Spatial and Temporal Regularization
Weighting for Learning-based Registration
- Authors: Zhe Xu, Jie Luo, Donghuan Lu, Jiangpeng Yan, Jayender Jagadeesan,
William Wells III, Sarah Frisken, Kai Ma, Yefeng Zheng, Raymond Kai-yu Tong
- Abstract summary: We propose a mean-teacher based registration framework.
This framework incorporates an additional temporal regularization term by encouraging the teacher model's temporal ensemble prediction to be consistent with that of the student model.
At each training step, it also automatically adjusts the weights of the spatial regularization and the temporal regularization.
- Score: 24.845259459450666
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In order to tackle the difficulty associated with the ill-posed nature of the
image registration problem, researchers use regularization to constrain the
solution space. For most learning-based registration approaches, the
regularization usually has a fixed weight and only constrains the spatial
transformation. Such a convention has two limitations: (1) The regularization
strength for a specific image pair should be associated with the content of the
images, thus the "one value fits all" scheme is not ideal; (2) Only spatially
regularizing the transformation (but overlooking the temporal consistency of
different estimations) may not be the best strategy to cope with the
ill-posedness. In this study, we propose a mean-teacher based registration
framework. This framework incorporates an additional temporal regularization
term by encouraging the teacher model's temporal ensemble prediction to be
consistent with that of the student model. At each training step, it also
automatically adjusts the weights of the spatial regularization and the
temporal regularization by taking account of
the transformation uncertainty and appearance uncertainty derived from the
perturbed teacher model. We perform experiments on multi- and uni-modal
registration tasks, and the results show that our strategy outperforms the
traditional and learning-based benchmark methods.
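To make the mechanism concrete, the following is a minimal, self-contained sketch of one training step; it is our illustration, not the authors' released code. The toy network, the MSE similarity and diffusion smoothness losses, and the exp(-u) mapping from uncertainty to regularization weight are illustrative stand-ins (the paper uses its own similarity metrics and weighting scheme); the mean-teacher EMA update, the dropout-perturbed teacher sampling, and the two uncertainty types follow the abstract.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegNet(nn.Module):
    """Toy 2D registration network predicting a dense displacement field."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(0.2),  # the perturbation source for the teacher
            nn.Conv2d(16, 2, 3, padding=1),
        )

    def forward(self, moving, fixed):
        # Displacements are expressed in normalized [-1, 1] grid coordinates.
        return self.net(torch.cat([moving, fixed], dim=1))

def warp(image, flow):
    """Bilinearly warp `image` (B,1,H,W) by displacement `flow` (B,2,H,W)."""
    b, _, h, w = image.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    identity = torch.stack([xs, ys], dim=-1).expand(b, h, w, 2)
    return F.grid_sample(image, identity + flow.permute(0, 2, 3, 1),
                         align_corners=True)

def smoothness(flow):
    """Diffusion-like spatial regularizer: penalize flow gradients."""
    return ((flow[..., :, 1:] - flow[..., :, :-1]).pow(2).mean()
            + (flow[..., 1:, :] - flow[..., :-1, :]).pow(2).mean())

def train_step(student, teacher, opt, moving, fixed, n_samples=4):
    flow_s = student(moving, fixed)
    with torch.no_grad():
        teacher.train()  # keep dropout on: each pass is a perturbed teacher
        flows = torch.stack([teacher(moving, fixed) for _ in range(n_samples)])
        warps = torch.stack([warp(moving, f) for f in flows])
        u_trans = flows.var(dim=0).mean()  # transformation uncertainty
        u_app = warps.var(dim=0).mean()    # appearance uncertainty
        flow_t = flows.mean(dim=0)         # teacher's ensemble prediction

    sim = F.mse_loss(warp(moving, flow_s), fixed)  # stand-in for NCC/MI
    # One plausible uncertainty-to-weight mapping (the paper derives its
    # own scheme): trust a regularizer less when the teacher is uncertain
    # about the corresponding quantity.
    loss = (sim
            + torch.exp(-u_app) * smoothness(flow_s)             # spatial
            + torch.exp(-u_trans) * F.mse_loss(flow_s, flow_t))  # temporal
    opt.zero_grad(); loss.backward(); opt.step()

    # Mean teacher: exponential moving average (temporal ensemble) of student.
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(0.99).add_(s, alpha=0.01)
    return float(loss)

student = RegNet()
teacher = copy.deepcopy(student)
opt = torch.optim.Adam(student.parameters(), lr=1e-4)
moving, fixed = torch.rand(1, 1, 32, 32), torch.rand(1, 1, 32, 32)
print(train_step(student, teacher, opt, moving, fixed))
```

In this sketch, the variance of the teacher's dropout-perturbed forward passes serves as a cheap Monte Carlo uncertainty estimate, and each per-step weight is recomputed from it, which mirrors the abstract's "automatically adjusts the weights at each training step" without claiming the paper's exact formula.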
Related papers
- UniReg: Foundation Model for Controllable Medical Image Registration [16.19173225107947]
Learning-based registration approaches lack generalizability across diverse clinical scenarios.
We propose UniReg, the first interactive foundation model for medical image registration.
Our key innovation is a unified framework for diverse registration scenarios, achieved through a conditional deformation field estimation.
arXiv Detail & Related papers (2025-03-17T06:55:01Z)
- From Model Based to Learned Regularization in Medical Image Registration: A Comprehensive Review [10.985967613049269]
Regularization is a key component in driving the solution toward anatomically meaningful deformations.
Regularization is often overlooked or addressed with default approaches, assuming existing methods are sufficient.
This review introduces a novel taxonomy that systematically categorizes the diverse range of proposed regularization methods.
arXiv Detail & Related papers (2024-12-20T10:00:36Z)
- On the Geometry of Regularization in Adversarial Training: High-Dimensional Asymptotics and Generalization Bounds [11.30047438005394]
This work investigates how to choose the regularization norm $\lVert \cdot \rVert$ in the context of high-dimensional adversarial training for binary classification.
We quantitatively characterize the relationship between perturbation size and the optimal choice of $\lVert \cdot \rVert$, confirming the intuition that, in the data-scarce regime, the type of regularization becomes increasingly important for adversarial training as perturbations grow in size.
arXiv Detail & Related papers (2024-10-21T14:53:12Z)
- Regularized Neural Ensemblers [55.15643209328513]
In this study, we explore employing regularized neural networks as ensemble methods.
Motivated by the risk of learning low-diversity ensembles, we propose regularizing the ensembling model by randomly dropping base model predictions.
We demonstrate that this approach provides lower bounds for the diversity within the ensemble, reducing overfitting and improving generalization capabilities.
arXiv Detail & Related papers (2024-10-06T15:25:39Z)
- Regularization for Adversarial Robust Learning [18.46110328123008]
We develop a novel approach to adversarial training that integrates $\phi$-divergence regularization into the distributionally robust risk function.
This regularization yields a notable computational improvement over the original formulation.
We validate our proposed method in supervised learning, reinforcement learning, and contextual learning and showcase its state-of-the-art performance against various adversarial attacks.
arXiv Detail & Related papers (2024-08-19T03:15:41Z)
- Tendency-driven Mutual Exclusivity for Weakly Supervised Incremental Semantic Segmentation [56.1776710527814]
Weakly Incremental Learning for Semantic Segmentation (WILSS) leverages a pre-trained segmentation model to segment new classes using cost-effective and readily available image-level labels.
A prevailing way to solve WILSS is the generation of seed areas for each new class, serving as a form of pixel-level supervision.
We propose an innovative, tendency-driven relationship of mutual exclusivity, meticulously tailored to govern the behavior of the seed areas.
arXiv Detail & Related papers (2024-04-18T08:23:24Z)
- Deformable Image Registration with Stochastically Regularized Biomechanical Equilibrium [0.0]
This study introduces a regularization strategy that does not require discretization, making it compatible with current registration frameworks.
The proposed method performs favorably in both synthetic and real datasets, exhibiting an accuracy comparable to current state-of-the-art methods.
arXiv Detail & Related papers (2023-12-22T08:16:47Z)
- Semi-supervised Semantic Segmentation Meets Masked Modeling: Fine-grained Locality Learning Matters in Consistency Regularization [31.333862320143968]
Semi-supervised semantic segmentation aims to utilize limited labeled images and abundant unlabeled images to achieve label-efficient learning.
We propose a novel framework called MaskMatch, which enables fine-grained locality learning to achieve better dense segmentation.
arXiv Detail & Related papers (2023-12-14T03:28:53Z)
- Time-series Generation by Contrastive Imitation [87.51882102248395]
We study a generative framework that seeks to combine the strengths of both: Motivated by a moment-matching objective to mitigate compounding error, we optimize a local (but forward-looking) transition policy.
At inference, the learned policy serves as the generator for iterative sampling, and the learned energy serves as a trajectory-level measure for evaluating sample quality.
arXiv Detail & Related papers (2023-11-02T16:45:25Z)
- On Regularization and Inference with Label Constraints [62.60903248392479]
We compare two strategies for encoding label constraints in a machine learning pipeline, regularization with constraints and constrained inference.
For regularization, we show that it narrows the generalization gap by precluding models that are inconsistent with the constraints.
For constrained inference, we show that it reduces the population risk by correcting a model's violation, and hence turns the violation into an advantage.
arXiv Detail & Related papers (2023-07-08T03:39:22Z)
- Improving Adaptive Conformal Prediction Using Self-Supervised Learning [72.2614468437919]
We train an auxiliary model with a self-supervised pretext task on top of an existing predictive model and use the self-supervised error as an additional feature to estimate nonconformity scores.
We empirically demonstrate the benefit of the additional information using both synthetic and real data on the efficiency (width), deficit, and excess of conformal prediction intervals.
arXiv Detail & Related papers (2023-02-23T18:57:14Z)
- ConMatch: Semi-Supervised Learning with Confidence-Guided Consistency Regularization [26.542718087103665]
We present ConMatch, a novel semi-supervised learning framework that leverages consistency regularization between the model's predictions from two strongly augmented views of an image, weighted by the confidence of the pseudo-label.
We conduct experiments to demonstrate the effectiveness of our ConMatch over the latest methods and provide extensive ablation studies.
arXiv Detail & Related papers (2022-08-18T04:37:50Z)
- Self-supervised Augmentation Consistency for Adapting Semantic Segmentation [56.91850268635183]
We propose an approach to domain adaptation for semantic segmentation that is both practical and highly accurate.
We employ standard data augmentation techniques (photometric noise, flipping, and scaling) and ensure consistency of the semantic predictions.
We achieve significant improvements of the state-of-the-art segmentation accuracy after adaptation, consistent both across different choices of the backbone architecture and adaptation scenarios.
arXiv Detail & Related papers (2021-04-30T21:32:40Z)
- Squared $\ell_2$ Norm as Consistency Loss for Leveraging Augmented Data to Learn Robust and Invariant Representations [76.85274970052762]
Regularizing distance between embeddings/representations of original samples and augmented counterparts is a popular technique for improving robustness of neural networks.
In this paper, we explore these various regularization choices, seeking to provide a general understanding of how we should regularize the embeddings.
We show that the generic approach we identified (squared $\ell_2$ regularized augmentation) outperforms several recent methods, which are each specially designed for one task.
arXiv Detail & Related papers (2020-11-25T22:40:09Z)
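Since several of the entries above (ConMatch, the augmentation-consistency adaptation work, and the squared $\ell_2$ paper) revolve around the same consistency-loss idea, here is a tiny sketch of that loss; it is an assumed illustration, not code from any of the papers, and the linear embedding and noise "augmentation" are placeholders.

```python
import torch

def sq_l2_consistency(embed, x, x_aug):
    """Squared l2 distance between embeddings of samples and their augmentations."""
    return (embed(x) - embed(x_aug)).pow(2).sum(dim=-1).mean()

# Toy usage: a linear embedding and additive Gaussian-noise "augmentation".
embed = torch.nn.Linear(8, 4)
x = torch.randn(16, 8)
loss = sq_l2_consistency(embed, x, x + 0.1 * torch.randn_like(x))
loss.backward()  # in training, add this term to the supervised task loss
print(float(loss))
```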