Deep Learning in Medical Image Registration: Magic or Mirage?
- URL: http://arxiv.org/abs/2408.05839v2
- Date: Fri, 27 Sep 2024 20:33:10 GMT
- Title: Deep Learning in Medical Image Registration: Magic or Mirage?
- Authors: Rohit Jena, Deeksha Sethi, Pratik Chaudhari, James C. Gee
- Abstract summary: We make an explicit correspondence between the mutual information of the distribution of per-pixel intensity and labels, and the performance of classical registration methods.
We show that learning-based methods with weak supervision can perform high-fidelity intensity and label registration, which is not possible with classical methods.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Classical optimization and learning-based methods are the two reigning paradigms in deformable image registration. While optimization-based methods boast generalizability across modalities and robust performance, learning-based methods promise peak performance, incorporating weak supervision and amortized optimization. However, the exact conditions under which either paradigm outperforms the other are not explicitly outlined in the existing literature. In this paper, we make an explicit correspondence between the mutual information of the distribution of per-pixel intensity and labels, and the performance of classical registration methods. This strong correlation suggests that architectural design choices in learning-based methods are unlikely to affect this correlation, and therefore the performance of learning-based methods. We validate this hypothesis thoroughly with state-of-the-art classical and learning-based methods. However, learning-based methods with weak supervision can perform high-fidelity intensity and label registration, which is not possible with classical methods. Next, we show that this high-fidelity feature learning does not translate to invariance to domain shift: learning-based methods are sensitive to such changes in the data distribution. Finally, we propose a general recipe for choosing the best paradigm for a given registration problem, based on these observations.
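The paper's core quantity, the mutual information between per-pixel intensity and labels, can be estimated from a joint histogram. Below is a minimal numpy sketch; the binning scheme and bin count are illustrative choices, not the paper's exact estimator.

```python
import numpy as np

def mutual_information(intensity, labels, bins=32):
    """Estimate I(intensity; label) in nats from a joint histogram.

    intensity: 1-D array of per-pixel intensities
    labels:    1-D array of integer per-pixel labels
    """
    # Discretize intensities; labels are already discrete
    edges = np.linspace(intensity.min(), intensity.max(), bins)
    i_bins = np.digitize(intensity, edges)
    joint, _, _ = np.histogram2d(i_bins, labels,
                                 bins=[bins + 1, labels.max() + 1])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over labels
    py = pxy.sum(axis=0, keepdims=True)   # marginal over intensity bins
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# Toy check: labels determined by intensity give high MI,
# shuffled labels (same marginals, no correspondence) give MI near zero
rng = np.random.default_rng(0)
intensity = rng.normal(size=10_000)
aligned = (intensity > 0).astype(int)
shuffled = rng.permutation(aligned)
```

In the paper's framing, a high value of this quantity is what lets intensity-driven classical methods recover label alignment; a low value signals that only (weakly supervised) learning can bridge intensity and labels.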
Related papers
- Latent Anomaly Detection Through Density Matrices [3.843839245375552]
This paper introduces a novel anomaly detection framework that combines the robust statistical principles of density-estimation-based anomaly detection methods with the representation-learning capabilities of deep learning models.
The method originated from this framework is presented in two different versions: a shallow approach and a deep approach that integrates an autoencoder to learn a low-dimensional representation of the data.
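The shallow variant pairs density estimation with learned representations. As a rough illustration of the density side only (this sketch uses a plain Gaussian kernel density estimate, not the paper's density-matrix formulation, and omits the autoencoder):

```python
import numpy as np

def kde_anomaly_score(x, train, bandwidth=0.5):
    """Negative log-density under a Gaussian KDE fit to normal data.

    Higher score = more anomalous. Constant terms are dropped since
    only the ranking of scores matters for detection.
    """
    diffs = x[:, None, :] - train[None, :, :]
    sq = (diffs ** 2).sum(axis=2)
    log_k = -sq / (2 * bandwidth ** 2)
    # log-mean-exp over training points, computed stably
    m = log_k.max(axis=1)
    log_dens = m + np.log(np.exp(log_k - m[:, None]).mean(axis=1))
    return -log_dens

rng = np.random.default_rng(1)
normal = rng.normal(0, 1, size=(500, 2))      # "normal" training data
queries = np.array([[0.0, 0.0], [6.0, 6.0]])  # inlier vs. far outlier
scores = kde_anomaly_score(queries, normal)
```

In the deep variant, `train` would hold autoencoder latents rather than raw features, so the density estimate operates in a learned low-dimensional space.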
arXiv Detail & Related papers (2024-08-14T15:44:51Z) - From Pretext to Purpose: Batch-Adaptive Self-Supervised Learning [32.18543787821028]
This paper proposes an adaptive technique of batch fusion for self-supervised contrastive learning.
It achieves state-of-the-art performance under equitable comparisons.
We suggest that the proposed method may contribute to the advancement of data-driven self-supervised learning research.
arXiv Detail & Related papers (2023-11-16T15:47:49Z) - OTMatch: Improving Semi-Supervised Learning with Optimal Transport [2.4355694259330467]
We present a new approach called OTMatch, which leverages semantic relationships among classes by employing an optimal transport loss function to match distributions.
The empirical results show improvements over the baseline, demonstrating the effectiveness of harnessing semantic relationships to enhance learning performance in a semi-supervised setting.
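The distribution-matching machinery behind such optimal-transport losses can be sketched with Sinkhorn iterations. This is a generic entropic-OT solver in numpy for illustration, not OTMatch's actual loss or cost matrix.

```python
import numpy as np

def sinkhorn(cost, a, b, eps=0.1, n_iter=200):
    """Entropic-regularized optimal transport plan between histograms a, b."""
    K = np.exp(-cost / eps)      # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):      # alternate marginal projections
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]   # transport plan

# Matching two identical 3-class distributions with a 0/1 cost:
# mass should stay on the diagonal (class i maps to class i)
cost = 1.0 - np.eye(3)
a = np.array([0.5, 0.3, 0.2])
b = np.array([0.5, 0.3, 0.2])
plan = sinkhorn(cost, a, b)
```

The resulting plan's rows and columns sum to the two input distributions; in a semi-supervised loss the cost matrix would instead encode semantic distances between predicted and target class distributions.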
arXiv Detail & Related papers (2023-10-26T15:01:54Z) - Learning Representations for New Sound Classes With Continual Self-Supervised Learning [30.35061954854764]
We present a self-supervised learning framework for continually learning representations for new sound classes.
We show that representations learned with the proposed method generalize better and are less susceptible to catastrophic forgetting.
arXiv Detail & Related papers (2022-05-15T22:15:21Z) - On Modality Bias Recognition and Reduction [70.69194431713825]
We study the modality bias problem in the context of multi-modal classification.
We propose a plug-and-play loss function method, whereby the feature space for each label is adaptively learned.
Our method yields remarkable performance improvements compared with the baselines.
arXiv Detail & Related papers (2022-02-25T13:47:09Z) - Adaptive Hierarchical Similarity Metric Learning with Noisy Labels [138.41576366096137]
We propose an Adaptive Hierarchical Similarity Metric Learning method.
It considers two types of noise-insensitive information, i.e., class-wise divergence and sample-wise consistency.
Our method achieves state-of-the-art performance compared with current deep metric learning approaches.
arXiv Detail & Related papers (2021-10-29T02:12:18Z) - Effect of Parameter Optimization on Classical and Learning-based Image Matching Methods [10.014010310188821]
We compare classical and learning-based methods by employing mutual nearest neighbor search with ratio test and optimizing the ratio test threshold.
After a fair comparison, the experimental results on HPatches dataset reveal that the performance gap between classical and learning-based methods is not that significant.
A recent approach, DFM, which only uses pre-trained VGG features as descriptors and ratio test, is shown to outperform most of the well-trained learning-based methods.
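The matching protocol described above, mutual nearest neighbors filtered by Lowe's ratio test, can be sketched directly in numpy. The brute-force distance computation and the 0.8 default threshold are illustrative; the paper's point is that this threshold should itself be optimized.

```python
import numpy as np

def match_ratio_test(desc_a, desc_b, ratio=0.8):
    """Mutual nearest-neighbor matching with Lowe's ratio test.

    desc_a: (N, D) descriptors, desc_b: (M, D) descriptors.
    Returns a list of (i, j) index pairs.
    """
    # Pairwise Euclidean distances between all descriptors
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    nn_ab = d.argmin(axis=1)     # best match in B for each A
    nn_ba = d.argmin(axis=0)     # best match in A for each B
    matches = []
    for i, j in enumerate(nn_ab):
        if nn_ba[j] != i:        # keep mutual nearest neighbors only
            continue
        row = np.sort(d[i])
        if row.size > 1 and row[0] >= ratio * row[1]:
            continue             # ambiguous match: reject by ratio test
        matches.append((i, int(j)))
    return matches

# Three well-separated descriptors with slightly perturbed counterparts
desc_a = np.array([[0.0, 0.0], [10.0, 10.0], [5.0, 5.0]])
desc_b = np.array([[0.1, 0.0], [10.0, 10.1], [5.1, 5.0]])
matches = match_ratio_test(desc_a, desc_b)
```

Tuning `ratio` per method is exactly the parameter optimization the paper argues makes comparisons between classical and learned descriptors fair.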
arXiv Detail & Related papers (2021-08-18T14:45:32Z) - DEALIO: Data-Efficient Adversarial Learning for Imitation from Observation [57.358212277226315]
In imitation learning from observation (IfO), a learning agent seeks to imitate a demonstrating agent using only observations of the demonstrated behavior, without access to the control signals generated by the demonstrator.
Recent methods based on adversarial imitation learning have led to state-of-the-art performance on IfO problems, but they typically suffer from high sample complexity due to a reliance on data-inefficient, model-free reinforcement learning algorithms.
This issue makes them impractical to deploy in real-world settings, where gathering samples can incur high costs in terms of time, energy, and risk.
We propose a more data-efficient IfO algorithm.
arXiv Detail & Related papers (2021-03-31T23:46:32Z) - Partial Is Better Than All: Revisiting Fine-tuning Strategy for Few-shot Learning [76.98364915566292]
A common practice is to train a model on the base set first and then transfer to novel classes through fine-tuning.
We propose to transfer partial knowledge by freezing or fine-tuning particular layer(s) in the base model.
We conduct extensive experiments on CUB and mini-ImageNet to demonstrate the effectiveness of our proposed method.
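Freezing particular layers during fine-tuning amounts to skipping their gradient updates. A framework-agnostic sketch of that idea (the layer names and learning rate here are made up for illustration, not the paper's configuration):

```python
import numpy as np

def sgd_step(params, grads, frozen, lr=0.1):
    """One SGD update that leaves frozen layers untouched.

    params, grads: dicts mapping layer name -> weight array.
    frozen: set of layer names to exclude from the update.
    """
    return {
        name: w if name in frozen else w - lr * grads[name]
        for name, w in params.items()
    }

# Transfer partial knowledge: freeze the early layers trained on the
# base set, fine-tune only the head on the novel classes
params = {"conv1": np.ones(2), "conv2": np.ones(2), "fc": np.ones(2)}
grads = {name: np.full(2, 0.5) for name in params}
new = sgd_step(params, grads, frozen={"conv1", "conv2"})
```

In a real framework the same effect is typically achieved by disabling gradient tracking on the frozen parameters rather than filtering the update by hand.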
arXiv Detail & Related papers (2021-02-08T03:27:05Z) - Visualization of Supervised and Self-Supervised Neural Networks via Attribution Guided Factorization [87.96102461221415]
We develop an algorithm that provides per-class explainability.
In an extensive battery of experiments, we demonstrate the ability of our methods to produce class-specific visualizations.
arXiv Detail & Related papers (2020-12-03T18:48:39Z) - Robust Imitation Learning from Noisy Demonstrations [81.67837507534001]
We show that robust imitation learning can be achieved by optimizing a classification risk with a symmetric loss.
We propose a new imitation learning method that effectively combines pseudo-labeling with co-training.
Experimental results on continuous-control benchmarks show that our method is more robust compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-10-20T10:41:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.