INSPIRE: Intensity and Spatial Information-Based Deformable Image
Registration
- URL: http://arxiv.org/abs/2012.07208v1
- Date: Mon, 14 Dec 2020 01:51:59 GMT
- Title: INSPIRE: Intensity and Spatial Information-Based Deformable Image
Registration
- Authors: Johan Öfverstedt, Joakim Lindblad, Nataša Sladoje
- Abstract summary: INSPIRE is a top-performing general-purpose method for deformable image registration.
We show that the proposed method delivers highly accurate as well as stable and robust registration results.
We also evaluate the method on four benchmark datasets of 3D images of brains, for a total of 2088 pairwise registrations.
- Score: 3.584984184069584
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present INSPIRE, a top-performing general-purpose method for deformable
image registration. INSPIRE extends our existing symmetric registration
framework based on distances combining intensity and spatial information to an
elastic B-splines based transformation model. We also present several
theoretical and algorithmic improvements which provide high computational
efficiency and thereby applicability of the framework in a wide range of real
scenarios. We show that the proposed method delivers highly accurate as well
as stable and robust registration results. We evaluate the method on a
synthetic dataset created from retinal images, consisting of thin networks of
vessels, where INSPIRE exhibits excellent performance, substantially
outperforming the reference methods. We also evaluate the method on four
benchmark datasets of 3D images of brains, for a total of 2088 pairwise
registrations; a comparison with 15 other state-of-the-art methods reveals that
INSPIRE provides the best overall performance. Code is available at
github.com/MIDA-group/inspire.
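As a rough illustration of the elastic B-spline transformation model mentioned in the abstract, the sketch below runs a generic multi-resolution B-spline deformable registration with SimpleITK. This is not INSPIRE itself: the reference implementation lives at github.com/MIDA-group/inspire, and INSPIRE uses its own symmetric distance combining intensity and spatial information rather than the mutual-information metric assumed here. The library choice, metric, and parameter values are illustrative assumptions only.

```python
import SimpleITK as sitk

# Load fixed and moving images (file paths are placeholders).
fixed = sitk.ReadImage("fixed.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("moving.nii.gz", sitk.sitkFloat32)

# Elastic B-spline transform: a coarse control-point grid over the fixed image.
mesh_size = [8] * fixed.GetDimension()
bspline = sitk.BSplineTransformInitializer(fixed, mesh_size)

reg = sitk.ImageRegistrationMethod()
reg.SetInitialTransform(bspline, inPlace=True)

# Stand-in similarity metric; INSPIRE instead optimizes a distance combining
# intensity and spatial information (see the paper and repository).
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.1)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5,
                         numberOfIterations=100)

# Coarse-to-fine multi-resolution schedule.
reg.SetShrinkFactorsPerLevel([4, 2, 1])
reg.SetSmoothingSigmasPerLevel([2, 1, 0])

transform = reg.Execute(fixed, moving)
warped = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0,
                       moving.GetPixelID())
sitk.WriteImage(warped, "moving_warped.nii.gz")
```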
Related papers
- OrientDream: Streamlining Text-to-3D Generation with Explicit Orientation Control [66.03885917320189]
OrientDream is a camera orientation conditioned framework for efficient and multi-view consistent 3D generation from textual prompts.
Our strategy emphasizes the implementation of an explicit camera orientation conditioned feature in the pre-training of a 2D text-to-image diffusion module.
Our experiments reveal that our method not only produces high-quality NeRF models with consistent multi-view properties but also optimizes significantly faster than existing methods.
arXiv Detail & Related papers (2024-06-14T13:16:18Z) - ConKeD++ -- Improving descriptor learning for retinal image registration: A comprehensive study of contrastive losses [6.618504904743609]
We propose to test and improve a state-of-the-art framework for color fundus image registration, ConKeD.
Using the ConKeD framework we test multiple loss functions, adapting them to the framework and the application domain.
Our work demonstrates state-of-the-art performance across all datasets and metrics.
arXiv Detail & Related papers (2024-04-25T17:24:35Z) - Layered Rendering Diffusion Model for Zero-Shot Guided Image Synthesis [60.260724486834164]
This paper introduces innovative solutions to enhance spatial controllability in diffusion models reliant on text queries.
We present two key innovations: Vision Guidance and the Layered Rendering Diffusion framework.
We apply our method to three practical applications: bounding box-to-image, semantic mask-to-image and image editing.
arXiv Detail & Related papers (2023-11-30T10:36:19Z) - Embedded Feature Similarity Optimization with Specific Parameter
Initialization for 2D/3D Medical Image Registration [4.533408985664949]
We present a novel deep learning-based framework for medical image registration.
The proposed framework extracts multi-scale features using a novel composite connection and dedicated training techniques.
Our experiments demonstrate that the method improves registration performance and outperforms existing methods in terms of accuracy and running time.
arXiv Detail & Related papers (2023-05-10T15:33:15Z) - Imposing Consistency for Optical Flow Estimation [73.53204596544472]
Imposing consistency through proxy tasks has been shown to enhance data-driven learning.
This paper introduces novel and effective consistency strategies for optical flow estimation.
arXiv Detail & Related papers (2022-04-14T22:58:30Z) - Deformable Image Registration using Neural ODEs [15.245085400790002]
We present a generic, fast, and accurate diffeomorphic image registration framework that leverages neural ordinary differential equations (NODEs).
Compared with traditional optimization-based methods, our framework reduces the running time from tens of minutes to tens of seconds.
Our experiments show that the registration results of our method outperform the state of the art under various metrics.
arXiv Detail & Related papers (2021-08-07T12:54:17Z) - An Adaptive Framework for Learning Unsupervised Depth Completion [59.17364202590475]
We present a method to infer a dense depth map from a color image and associated sparse depth measurements.
We show that regularization and co-visibility are related via the fitness of the model to data and can be unified into a single framework.
arXiv Detail & Related papers (2021-06-06T02:27:55Z) - Cascaded Feature Warping Network for Unsupervised Medical Image
Registration [11.052668687673998]
We present a cascaded feature warping network to perform coarse-to-fine registration.
A shared-weights encoder network is adopted to generate the feature pyramids for the unaligned images.
The results show that our method outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2021-03-15T08:50:06Z) - Learning Deformable Image Registration from Optimization: Perspective,
Modules, Bilevel Training and Beyond [62.730497582218284]
We develop a new deep learning based framework to optimize a diffeomorphic model via multi-scale propagation.
We conduct two groups of image registration experiments on 3D volume datasets including image-to-atlas registration on brain MRI data and image-to-image registration on liver CT data.
arXiv Detail & Related papers (2020-04-30T03:23:45Z) - Two-shot Spatially-varying BRDF and Shape Estimation [89.29020624201708]
We propose a novel deep learning architecture with a stage-wise estimation of shape and SVBRDF.
We create a large-scale synthetic training dataset with domain-randomized geometry and realistic materials.
Experiments on both synthetic and real-world datasets show that our network trained on a synthetic dataset can generalize well to real-world images.
arXiv Detail & Related papers (2020-04-01T12:56:13Z)