POLAFFINI: Efficient feature-based polyaffine initialization for improved non-linear image registration
- URL: http://arxiv.org/abs/2407.03922v2
- Date: Tue, 9 Jul 2024 08:47:44 GMT
- Title: POLAFFINI: Efficient feature-based polyaffine initialization for improved non-linear image registration
- Authors: Antoine Legouhy, Ross Callaghan, Hojjat Azadbakht, Hui Zhang
- Abstract summary: This paper presents an efficient feature-based approach to initialize non-linear image registration.
A good estimate of the initial transformation is essential, both for traditional iterative algorithms and for recent one-shot deep learning (DL)-based alternatives.
- Score: 2.6821469866843435
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents an efficient feature-based approach to initialize non-linear image registration. Today, non-linear image registration is dominated by methods relying on intensity-based similarity measures. A good estimate of the initial transformation is essential, both for traditional iterative algorithms and for recent one-shot deep learning (DL)-based alternatives. The established approach to estimate this starting point is to perform affine registration, but this may be insufficient due to its parsimonious, global, and non-bending nature. We propose an improved initialization method that takes advantage of recent advances in DL-based segmentation techniques able to instantly estimate fine-grained regional delineations with state-of-the-art accuracies. Those segmentations are used to produce local, anatomically grounded, feature-based affine matchings using iteration-free closed-form expressions. Estimated local affine transformations are then fused, with the log-Euclidean polyaffine framework, into an overall dense diffeomorphic transformation. We show that, compared to its affine counterpart, the proposed initialization leads to significantly better alignment for both traditional and DL-based non-linear registration algorithms. The proposed approach is also more robust and significantly faster than commonly used affine registration algorithms such as FSL FLIRT.
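The two ingredients of the abstract's pipeline — an iteration-free closed-form affine fit between matched features, and log-Euclidean polyaffine fusion of the resulting local transformations — can be sketched in a few lines of NumPy/SciPy. This is an illustrative sketch only, not the authors' implementation: the function names `closed_form_affine` and `polyaffine_fuse` are hypothetical, and for brevity the fusion below produces a single fused matrix from scalar weights rather than the dense, spatially varying transformation field used in the paper.

```python
import numpy as np
from scipy.linalg import logm, expm


def closed_form_affine(src_pts, tgt_pts):
    """Least-squares affine transform mapping src_pts to tgt_pts.

    src_pts, tgt_pts: (n, d) arrays of matched feature coordinates
    (e.g. centroids of corresponding segmentation regions).
    Returns a (d+1, d+1) homogeneous affine matrix. Closed-form:
    one linear least-squares solve, no iterations.
    """
    n, d = src_pts.shape
    X = np.hstack([src_pts, np.ones((n, 1))])        # (n, d+1) homogeneous sources
    M, *_ = np.linalg.lstsq(X, tgt_pts, rcond=None)  # (d+1, d) solves X @ M ~= tgt
    A = np.eye(d + 1)
    A[:d, :d] = M[:d].T                              # linear part
    A[:d, d] = M[d]                                  # translation part
    return A


def polyaffine_fuse(affines, weights):
    """Log-Euclidean fusion of affine transforms.

    Weighted average in the matrix-logarithm domain, mapped back with
    the matrix exponential; this is what guarantees the fused result
    stays diffeomorphic (in the full method, weights vary per voxel).
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    L = sum(wi * logm(Ai) for wi, Ai in zip(w, affines))
    return expm(L.real)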
Related papers
- Faster WIND: Accelerating Iterative Best-of-$N$ Distillation for LLM Alignment [81.84950252537618]
This paper reveals a unified game-theoretic connection between iterative BOND and self-play alignment.
We establish a novel framework, WIN rate Dominance (WIND), with a series of efficient algorithms for regularized win rate dominance optimization.
arXiv Detail & Related papers (2024-10-28T04:47:39Z)
- Resource-Adaptive Newton's Method for Distributed Learning [16.588456212160928]
This paper introduces a novel and efficient algorithm called RANL, which overcomes the limitations of Newton's method.
Unlike traditional first-order methods, RANL exhibits remarkable independence from the condition number of the problem.
arXiv Detail & Related papers (2023-08-20T04:01:30Z)
- Fast Algorithms for Directed Graph Partitioning Using Flows and Reweighted Eigenvalues [6.094384342913063]
We derive almost linear-time algorithms to achieve $O(\sqrt{\log n})$-approximation and Cheeger-type guarantee for directed edge expansion.
This provides a primal-dual flow-based framework to obtain the best known algorithms for directed graph partitioning.
arXiv Detail & Related papers (2023-06-15T13:41:17Z)
- BO-ICP: Initialization of Iterative Closest Point Based on Bayesian Optimization [3.248584983235657]
We present a new method based on Bayesian optimization for finding the critical initial ICP transform.
We show that our approach outperforms state-of-the-art methods when given similar computation time.
It is compatible with other improvements to ICP, as it focuses solely on the selection of an initial transform.
arXiv Detail & Related papers (2023-04-25T19:38:53Z)
- The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, the Cascaded Forward (CaFo) algorithm, which, like the Forward-Forward (FF) algorithm, does not rely on backpropagation (BP) for optimization.
Unlike FF, our framework directly outputs label distributions at each cascaded block, which does not require generation of additional negative samples.
In our framework, each block can be trained independently, so it can be easily deployed into parallel acceleration systems.
arXiv Detail & Related papers (2023-03-17T02:01:11Z)
- Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation [88.14365009076907]
Iterative refinement is a useful paradigm for representation learning.
We develop an implicit differentiation approach that improves the stability and tractability of training.
arXiv Detail & Related papers (2022-07-02T10:00:35Z)
- $\texttt{GradICON}$: Approximate Diffeomorphisms via Gradient Inverse Consistency [16.72466200341455]
We use a neural network to predict a map between a source and a target image as well as the map when swapping the source and target images.
We achieve state-of-the-art registration performance on a variety of real-world medical image datasets.
arXiv Detail & Related papers (2022-06-13T04:03:49Z)
- Affine Medical Image Registration with Coarse-to-Fine Vision Transformer [11.4219428942199]
We present a learning-based algorithm, Coarse-to-Fine Vision Transformer (C2FViT), for 3D affine medical image registration.
Our method is superior to existing CNN-based affine registration methods in terms of registration accuracy, robustness, and generalizability.
arXiv Detail & Related papers (2022-03-29T03:18:43Z)
- Nesterov Accelerated ADMM for Fast Diffeomorphic Image Registration [63.15453821022452]
Recent developments in deep learning-based approaches have achieved sub-second runtimes for diffeomorphic image registration (DiffIR).
We propose a simple iterative scheme that functionally composes intermediate non-stationary velocity fields.
We then propose a convex optimisation model that uses a regularisation term of arbitrary order to impose smoothness on these velocity fields.
arXiv Detail & Related papers (2021-09-26T19:56:45Z)
- End-to-end Interpretable Learning of Non-blind Image Deblurring [102.75982704671029]
Non-blind image deblurring is typically formulated as a linear least-squares problem regularized by natural priors on the corresponding sharp picture's gradients.
We propose to precondition the Richardson solver using approximate inverse filters of the (known) blur and natural image prior kernels.
arXiv Detail & Related papers (2020-07-03T15:45:01Z)
- FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity to Non-IID Data [59.50904660420082]
Federated Learning (FL) has become a popular paradigm for learning from distributed data.
To effectively utilize data at different devices without moving them to the cloud, algorithms such as the Federated Averaging (FedAvg) have adopted a "computation then aggregation" (CTA) model.
arXiv Detail & Related papers (2020-05-22T23:07:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.