Deep learning based geometric registration for medical images: How
accurate can we get without visual features?
- URL: http://arxiv.org/abs/2103.00885v1
- Date: Mon, 1 Mar 2021 10:15:47 GMT
- Title: Deep learning based geometric registration for medical images: How
accurate can we get without visual features?
- Authors: Lasse Hansen and Mattias P. Heinrich
- Abstract summary: Deep learning is driving the development of new approaches for image registration.
In this work we take the opposite approach and investigate a deep learning framework for registration based solely on geometric features and optimisation.
In our experimental validation on complex key-point graphs of inner lung structures, the approach strongly outperforms dense encoder-decoder networks and other point set registration methods.
- Score: 5.05806585671215
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As in other areas of medical image analysis, e.g. semantic segmentation, deep
learning is currently driving the development of new approaches for image
registration. Multi-scale encoder-decoder network architectures achieve
state-of-the-art accuracy on tasks such as intra-patient alignment of abdominal
CT or brain MRI registration, especially when additional supervision, such as
anatomical labels, is available. The success of these methods relies to a large
extent on the outstanding ability of deep CNNs to extract descriptive visual
features from the input images. In contrast to conventional methods, the
explicit inclusion of geometric information plays only a minor role, if at all.
In this work we take the opposite approach and investigate a
deep learning framework for registration based solely on geometric features and
optimisation. We combine graph convolutions with loopy belief message passing
to enable highly accurate 3D point cloud registration. In our experimental
validation on complex key-point graphs of inner lung structures, the approach
strongly outperforms dense encoder-decoder networks and other point set
registration methods. Our code is publicly available at
https://github.com/multimodallearning/deep-geo-reg.
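To make the two ingredients of the method concrete, the snippet below is a minimal, self-contained sketch under simplifying assumptions: an unlearned neighbourhood-aggregation descriptor stands in for the learned graph CNN, Euclidean descriptor distance serves as the data cost, and each key point only considers a small set of candidate target points. Function names and hyper-parameters are illustrative; this is not the authors' implementation, which is available in the repository linked above.
```python
import numpy as np
from scipy.spatial import cKDTree

def knn(pts, k):
    # k nearest neighbours of every point (self excluded)
    _, idx = cKDTree(pts).query(pts, k=k + 1)
    return idx[:, 1:]

def geometric_descriptors(pts, k=5, rounds=2):
    # unlearned stand-in for a graph convolution: repeatedly append the mean
    # relative offset of each point's k-NN neighbourhood to its descriptor
    nbrs, feat = knn(pts, k), pts.copy()
    for _ in range(rounds):
        rel = feat[nbrs] - feat[:, None, :]              # edge features, shape (N, k, d)
        feat = np.concatenate([feat, rel.mean(axis=1)], axis=1)
    return feat

def loopy_bp(unary, disp, edges, alpha=1.0, iters=15):
    # min-sum loopy belief propagation over discrete candidate displacements
    # unary: (N, L) data costs, disp: (N, L, 3) candidate displacement vectors
    N, L = unary.shape
    msg = {e: np.zeros(L) for e in edges}
    incoming = {p: [e for e in edges if e[1] == p] for p in range(N)}
    for _ in range(iters):
        new = {}
        for q, p in edges:                               # message q -> p over p's labels
            agg = unary[q] + sum(msg[e] for e in incoming[q] if e[0] != p)
            pair = alpha * ((disp[p][:, None] - disp[q][None]) ** 2).sum(-1)
            m = (pair + agg[None, :]).min(axis=1)
            new[(q, p)] = m - m.min()                    # normalise for numerical stability
        msg = new
    belief = unary + np.array([sum(msg[e] for e in incoming[p]) for p in range(N)])
    return belief.argmin(axis=1)                         # best candidate per key point

def register(src, tgt, k=5, n_cand=8):
    # sparse-to-sparse registration: choose, for every source key point, the
    # candidate target point minimising descriptor cost plus displacement smoothness
    f_src, f_tgt = geometric_descriptors(src, k), geometric_descriptors(tgt, k)
    cost, cand = cKDTree(f_tgt).query(f_src, k=n_cand)   # (N, n_cand) each
    disp = tgt[cand] - src[:, None, :]                   # candidate displacements
    edges = {(int(i), int(j)) for i, row in enumerate(knn(src, k)) for j in row}
    edges |= {(j, i) for i, j in edges}                  # symmetrise the k-NN graph
    best = loopy_bp(cost, disp, sorted(edges))
    return disp[np.arange(len(src)), best]               # one displacement per key point

# toy check: a rigidly shifted copy of a random key-point cloud should be recovered
src = np.random.rand(200, 3)
shift = np.array([0.05, 0.0, -0.02])
print(np.abs(register(src, src + shift) - shift).max())  # close to zero
```
The real framework learns the descriptors end-to-end and uses a more carefully constructed candidate set and graph; the sketch only illustrates how a smoothness prior propagated over the key-point graph can correct ambiguous individual matches.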
Related papers
- Deep Homography Estimation for Visual Place Recognition [49.235432979736395]
We propose a transformer-based deep homography estimation (DHE) network.
It takes the dense feature map extracted by a backbone network as input and fits a homography for fast, learnable geometric verification.
Experiments on benchmark datasets show that our method can outperform several state-of-the-art methods.
arXiv Detail & Related papers (2024-02-25T13:22:17Z)
- Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing [72.45257414889478]
We aim to reduce human workload by predicting connectivity between over-segmented neuron pieces.
We first construct a dataset, named FlyTracing, that contains millions of pairwise connections between segments spanning the whole fly brain.
We propose a novel connectivity-aware contrastive learning method to generate dense volumetric EM image embedding.
arXiv Detail & Related papers (2024-01-05T19:45:12Z)
- Anatomy-aware and acquisition-agnostic joint registration with SynthMorph [6.017634371712142]
Affine image registration is a cornerstone of medical image analysis.
Deep-learning (DL) methods learn a function that maps an image pair to an output transform.
Most affine methods are agnostic to the anatomy the user wishes to align, meaning the registration will be inaccurate if algorithms consider all structures in the image.
We address these shortcomings with SynthMorph, a fast, symmetric, diffeomorphic, and easy-to-use DL tool for joint affine-deformable registration of any brain image.
arXiv Detail & Related papers (2023-01-26T18:59:33Z)
- Prediction of Geometric Transformation on Cardiac MRI via Convolutional Neural Network [13.01021780124613]
We propose to learn features in medical images by training ConvNets to recognize the geometric transformation applied to images.
We present a simple self-supervised task in which the network predicts the applied geometric transformation (a minimal sketch of such a pretext task is shown after this list).
arXiv Detail & Related papers (2022-11-12T11:29:14Z)
- Attentive Symmetric Autoencoder for Brain MRI Segmentation [56.02577247523737]
We propose a novel Attentive Symmetric Auto-encoder based on Vision Transformer (ViT) for 3D brain MRI segmentation tasks.
In the pre-training stage, the proposed auto-encoder focuses on reconstructing the informative patches according to the gradient metrics.
Experimental results show that our proposed attentive symmetric auto-encoder outperforms the state-of-the-art self-supervised learning methods and medical image segmentation models.
arXiv Detail & Related papers (2022-09-19T09:43:19Z)
- PointMCD: Boosting Deep Point Cloud Encoders via Multi-view Cross-modal Distillation for 3D Shape Recognition [55.38462937452363]
We propose a unified multi-view cross-modal distillation architecture, including a pretrained deep image encoder as the teacher and a deep point encoder as the student.
By pair-wise aligning multi-view visual and geometric descriptors, we can obtain more powerful deep point encoders without exhaustive and complicated network modification.
arXiv Detail & Related papers (2022-07-07T07:23:20Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- HistoTransfer: Understanding Transfer Learning for Histopathology [9.231495418218813]
We compare the performance of features extracted from networks trained on ImageNet and histopathology data.
We investigate whether features learned using more complex networks lead to performance gains.
arXiv Detail & Related papers (2021-06-13T18:55:23Z)
- Fed-Sim: Federated Simulation for Medical Imaging [131.56325440976207]
We introduce a physics-driven generative approach that consists of two learnable neural modules.
We show that our data synthesis framework improves the downstream segmentation performance on several datasets.
arXiv Detail & Related papers (2020-09-01T19:17:46Z)
- DeepFLASH: An Efficient Network for Learning-based Medical Image Registration [8.781861951759948]
DeepFLASH is a novel network with efficient training and inference for learning-based medical image registration.
We demonstrate our algorithm in two different applications of image registration: 2D synthetic data and 3D real brain magnetic resonance (MR) images.
arXiv Detail & Related papers (2020-04-05T05:17:07Z)
- SAUNet: Shape Attentive U-Net for Interpretable Medical Image Segmentation [2.6837973648527926]
We present a new architecture called Shape Attentive U-Net (SAUNet) which focuses on model interpretability and robustness.
Our method achieves state-of-the-art results on the two large public cardiac MRI image segmentation datasets of SUN09 and AC17.
arXiv Detail & Related papers (2020-01-21T16:48:54Z)
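As referenced in the "Prediction of Geometric Transformation on Cardiac MRI" entry above, a transformation-prediction pretext task can be pictured with a short sketch: each training image is modified by one of a small set of known geometric transformations (here simply 0/90/180/270-degree rotations) and a small CNN is trained to predict which one was applied. The architecture, the transformation set, and all names below are illustrative assumptions, not that paper's actual setup.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCNN(nn.Module):
    # small classifier predicting which of four rotations was applied to its input
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def pretext_batch(images):
    # rotate every image by a random multiple of 90 degrees; the multiple is the label
    labels = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                           for img, k in zip(images, labels)])
    return rotated, labels

model = TinyCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(10):                                  # toy training loop
    x, y = pretext_batch(torch.randn(8, 1, 64, 64))  # stand-in for cardiac MRI slices
    loss = F.cross_entropy(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
```
The labels come for free from the transformation itself, so the convolutional features can be pre-trained without any manual annotation and later reused for a downstream task.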
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.