Unsupervised MMRegNet based on Spatially Encoded Gradient Information
- URL: http://arxiv.org/abs/2105.07392v1
- Date: Sun, 16 May 2021 09:47:42 GMT
- Title: Unsupervised MMRegNet based on Spatially Encoded Gradient Information
- Authors: Wangbin Ding, Lei Li, Xiahai Zhuang, Liqin Huang
- Abstract summary: Multi-modality medical images can provide relevant and complementary anatomical information for a target (organ, tumor or tissue).
It is still challenging to develop a multi-modality registration network due to the lack of robust criteria for network training.
In this work, we propose a multi-modality registration network (MMRegNet), which can jointly register multiple images with different modalities to a target image.
- Score: 16.355832135847276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-modality medical images can provide relevant and complementary
anatomical information for a target (organ, tumor or tissue). Registering the
multi-modality images to a common space can fuse this comprehensive
information and facilitate clinical application. Recently, neural
networks have been widely investigated to boost registration methods. However,
it is still challenging to develop a multi-modality registration network due to
the lack of robust criteria for network training. Besides, most existing
registration networks mainly focus on pairwise registration and are hardly
applicable to multi-image scenarios. In this work, we propose a
multi-modality registration network (MMRegNet), which can jointly register
multiple images with different modalities to a target image. Meanwhile, we
present spatially encoded gradient information to train the MMRegNet in an
unsupervised manner. The proposed network was evaluated on two datasets, i.e.,
MM-WHS 2017 and CHAOS 2019. The results show that the proposed network can
achieve promising performance for cardiac left ventricle and liver registration
tasks. Source code is released publicly on GitHub.
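The abstract does not spell out how the spatially encoded gradient information is turned into a training signal. As a rough illustration of the general idea behind gradient-based multi-modality similarity, a normalized-gradient-field score can be sketched as follows; this is a generic, hypothetical stand-in, not MMRegNet's actual loss:

```python
import numpy as np

def unit_gradients(img, eps=1e-3):
    """Per-pixel image gradients normalized to (near) unit length.
    eps suppresses the influence of flat, noise-dominated regions."""
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.sqrt(gx**2 + gy**2 + eps**2)
    return gx / mag, gy / mag

def gradient_alignment(fixed, moving, eps=1e-3):
    """Squared inner product of unit gradients, averaged over pixels.
    High where the edges of the two images align, regardless of whether
    the two modalities agree on intensity polarity."""
    fx, fy = unit_gradients(fixed, eps)
    mx, my = unit_gradients(moving, eps)
    return float(np.mean((fx * mx + fy * my) ** 2))
```

Because the inner product is squared, inverting the contrast of one image leaves the score unchanged, which is what makes gradient-direction criteria attractive for registering images from different modalities.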
Related papers
- Large Language Models for Multimodal Deformable Image Registration [50.91473745610945]
We propose a novel coarse-to-fine MDIR framework, LLM-Morph, for aligning the deep features from different modal medical images.
Specifically, we first utilize a CNN encoder to extract deep visual features from cross-modal image pairs; we then use the first adapter to adjust these tokens and use LoRA in pre-trained LLMs to fine-tune their weights.
Finally, for the alignment of tokens, we utilize four further adapters to transform the LLM-encoded tokens into multi-scale visual features, generating multi-scale deformation fields and facilitating the coarse-to-fine MDIR task.
arXiv Detail & Related papers (2024-08-20T09:58:30Z)
- MrRegNet: Multi-resolution Mask Guided Convolutional Neural Network for Medical Image Registration with Large Deformations [6.919880141683284]
MrRegNet is a mask-guided encoder-decoder DCNN-based image registration method.
Image alignment accuracies are significantly improved at local regions guided by segmentation masks.
arXiv Detail & Related papers (2024-05-16T12:57:03Z)
- Pyramid Attention Network for Medical Image Registration [4.142556531859984]
We propose a pyramid attention network (PAN) for deformable medical image registration.
PAN incorporates a dual-stream pyramid encoder with channel-wise attention to boost the feature representation.
Our method achieves favorable registration performance, outperforming several CNN-based and Transformer-based registration networks.
arXiv Detail & Related papers (2024-02-14T08:46:18Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Recurrence With Correlation Network for Medical Image Registration [66.63200823918429]
We present Recurrence with Correlation Network (RWCNet), a medical image registration network with multi-scale features and a cost volume layer.
We demonstrate that these architectural features improve medical image registration accuracy in two image registration datasets.
arXiv Detail & Related papers (2023-02-05T02:41:46Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Studying Robustness of Semantic Segmentation under Domain Shift in cardiac MRI [0.8858288982748155]
We study challenges and opportunities of domain transfer across images from multiple clinical centres and scanner vendors.
In this work, we build upon a fixed U-Net architecture configured by the nnU-net framework to investigate various data augmentation techniques and batch normalization layers.
arXiv Detail & Related papers (2020-11-15T17:50:23Z)
- Domain Generalization for Medical Imaging Classification with Linear-Dependency Regularization [59.5104563755095]
We introduce a simple but effective approach to improve the generalization capability of deep neural networks in the field of medical imaging classification.
Motivated by the observation that the domain variability of the medical images is to some extent compact, we propose to learn a representative feature space through variational encoding.
arXiv Detail & Related papers (2020-09-27T12:30:30Z)
- Cross-Modality Multi-Atlas Segmentation Using Deep Neural Networks [20.87045880678701]
High-level structure information can provide reliable similarity measurement for cross-modality images.
This work presents a new MAS framework for cross-modality images, where both image registration and label fusion are achieved by deep neural networks (DNNs).
For image registration, we propose a consistent registration network, which can jointly estimate forward and backward dense displacement fields (DDFs).
For label fusion, we adapt a few-shot learning network to measure the similarity of atlas and target patches.
arXiv Detail & Related papers (2020-08-15T02:57:23Z)
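The forward/backward DDF idea in the entry above can be illustrated with an inverse-consistency check: compose the two displacement fields and penalize any deviation from the identity. The sketch below is a generic, hypothetical illustration (nearest-neighbour sampling, not that paper's network or loss):

```python
import numpy as np

def inverse_consistency_error(fwd, bwd):
    """Mean squared residual of composing a forward and a backward
    displacement field (each of shape (H, W, 2), in pixel units).
    Zero means the two fields invert each other exactly."""
    h, w, _ = fwd.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # where each pixel lands after applying the forward field
    py = np.clip(np.rint(ys + fwd[..., 0]).astype(int), 0, h - 1)
    px = np.clip(np.rint(xs + fwd[..., 1]).astype(int), 0, w - 1)
    # residual displacement after following the backward field from there
    res = fwd + bwd[py, px]
    return float(np.mean(np.sum(res**2, axis=-1)))
```

In a registration network this quantity would typically be added to the training loss, encouraging the jointly estimated forward and backward fields to be mutually consistent.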
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.