Patch-based field-of-view matching in multi-modal images for
electroporation-based ablations
- URL: http://arxiv.org/abs/2011.11759v1
- Date: Mon, 9 Nov 2020 11:27:45 GMT
- Title: Patch-based field-of-view matching in multi-modal images for
electroporation-based ablations
- Authors: Luc Lafitte, Rémi Giraud, Cornel Zachiu, Mario Ries, Olivier Sutter,
Antoine Petit, Olivier Seror, Clair Poignard, Baudouin Denis de Senneville
- Abstract summary: Multi-modal imaging sensors are currently involved at different steps of an interventional therapeutic work-flow.
Merging this information relies on a correct spatial alignment of the observed anatomy between the acquired images.
We show that a regional registration approach using voxel patches provides a good structural compromise between the voxel-wise and "global shifts" approaches.
- Score: 0.6285581681015912
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Various multi-modal imaging sensors are currently involved at different steps
of an interventional therapeutic work-flow. Cone beam computed tomography
(CBCT), computed tomography (CT) and magnetic resonance (MR) images thereby
provide complementary functional and/or structural information about the
targeted region and the organs at risk. Merging this information relies on a
correct spatial alignment of the observed anatomy between the acquired images.
This can be achieved by means of multi-modal deformable image registration
(DIR), demonstrated to be capable of estimating dense and elastic deformations
between images acquired by multiple imaging devices. However, due to the
typically different field-of-view (FOV) sampled across the various imaging
modalities, such algorithms may severely fail to find a satisfactory solution.
In the current study we propose a new, fast method to align the FOV of
multi-modal 3D medical images. To this end, a patch-based approach is
introduced and combined with a state-of-the-art multi-modal image similarity
metric in order to cope with multi-modal medical images. A histogram of the
estimated patch shifts is computed for each spatial direction, and the shift
value with the maximum occurrence (i.e., the mode) is selected and used to
adjust the image field-of-view.
We show that a regional registration approach using voxel patches provides a
good structural compromise between the voxel-wise and "global shifts"
approaches. The method thereby proved beneficial for CT to CBCT and MRI to
CBCT registration tasks, especially when highly different image FOVs are
involved. In addition, the benefit of the method for CT to CBCT and MRI to
CBCT image registration is analyzed, including the impact of artifacts
generated by percutaneous needle insertions. Finally, the computational needs
are shown to be compatible with clinical constraints in the practical case of
on-line procedures.
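For illustration, the shift-voting step described in the abstract can be sketched as follows. This is a minimal 2D sketch under stated assumptions: non-overlapping patches, an exhaustive translation search, and a simple sum-of-absolute-differences cost standing in for the multi-modal similarity metric used in the paper. All function and parameter names here are hypothetical, not the authors' implementation.

```python
import numpy as np

def estimate_fov_shift(fixed, moving, patch=16, max_shift=8):
    """Estimate a global FOV shift by taking, for each spatial axis,
    the mode of per-patch shift estimates.

    Sketch only: SAD is used as a toy cost; the paper relies on a
    state-of-the-art multi-modal similarity metric instead.
    """
    shifts = {0: [], 1: []}  # collected per-patch shifts for each axis
    H, W = fixed.shape
    for y in range(0, H - patch + 1, patch):
        for x in range(0, W - patch + 1, patch):
            ref = fixed[y:y + patch, x:x + patch]
            best, best_cost = (0, 0), np.inf
            # exhaustive local search over candidate translations
            for dy in range(-max_shift, max_shift + 1):
                for dx in range(-max_shift, max_shift + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + patch > H or xx + patch > W:
                        continue  # candidate patch falls outside the image
                    cand = moving[yy:yy + patch, xx:xx + patch]
                    cost = np.abs(ref - cand).sum()
                    if cost < best_cost:
                        best_cost, best = cost, (dy, dx)
            shifts[0].append(best[0])
            shifts[1].append(best[1])
    # per-axis mode of the patch-shift histogram drives the FOV adjustment
    mode = []
    for ax in (0, 1):
        vals, counts = np.unique(shifts[ax], return_counts=True)
        mode.append(int(vals[np.argmax(counts)]))
    return tuple(mode)
```

Voting on the mode rather than averaging is what makes the estimate robust: patches dominated by artifacts or missing anatomy produce outlier shifts, but they do not move the histogram peak.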
Related papers
- Unsupervised Multimodal 3D Medical Image Registration with Multilevel Correlation Balanced Optimization [22.633633605566214]
We propose an unsupervised multimodal medical image registration method based on multilevel correlation balanced optimization.
For preoperative medical images in different modalities, the alignment and stacking of valid information is achieved by the maximum fusion between deformation fields.
arXiv Detail & Related papers (2024-09-08T09:38:59Z)
- Unlocking the Power of Spatial and Temporal Information in Medical Multimodal Pre-training [99.2891802841936]
We introduce the Med-ST framework for fine-grained spatial and temporal modeling.
For spatial modeling, Med-ST employs the Mixture of View Expert (MoVE) architecture to integrate different visual features from both frontal and lateral views.
For temporal modeling, we propose a novel cross-modal bidirectional cycle consistency objective by forward mapping classification (FMC) and reverse mapping regression (RMR)
arXiv Detail & Related papers (2024-05-30T03:15:09Z)
- Weakly supervised alignment and registration of MR-CT for cervical cancer radiotherapy [9.060365057476133]
Cervical cancer is one of the leading causes of death in women.
We propose a preliminary spatial alignment algorithm and a weakly supervised multimodal registration network.
arXiv Detail & Related papers (2024-05-21T15:05:51Z)
- On Sensitivity and Robustness of Normalization Schemes to Input Distribution Shifts in Automatic MR Image Diagnosis [58.634791552376235]
Deep Learning (DL) models have achieved state-of-the-art performance in diagnosing multiple diseases using reconstructed images as input.
DL models are sensitive to varying artifacts, as these lead to changes in the input data distribution between the training and testing phases.
We propose to use other normalization techniques, such as Group Normalization and Layer Normalization, to inject robustness into model performance against varying image artifacts.
arXiv Detail & Related papers (2023-06-23T03:09:03Z)
- Affinity Feature Strengthening for Accurate, Complete and Robust Vessel Segmentation [48.638327652506284]
Vessel segmentation is crucial in many medical image applications, such as detecting coronary stenoses, retinal vessel diseases and brain aneurysms.
We present a novel approach, the affinity feature strengthening network (AFN), which jointly models geometry and refines pixel-wise segmentation features using a contrast-insensitive, multiscale affinity approach.
arXiv Detail & Related papers (2022-11-12T05:39:17Z)
- Unsupervised Image Registration Towards Enhancing Performance and Explainability in Cardiac And Brain Image Analysis [3.5718941645696485]
Inter- and intra-modality affine and non-rigid image registration is an essential medical image analysis process in clinical imaging.
We present an unsupervised deep learning registration methodology which can accurately model affine and non-rigid transformations.
Our methodology performs bi-directional cross-modality image synthesis to learn modality-invariant latent representations.
arXiv Detail & Related papers (2022-03-07T12:54:33Z)
- Multi-modal Aggregation Network for Fast MR Imaging [85.25000133194762]
We propose a novel Multi-modal Aggregation Network, named MANet, which is capable of discovering complementary representations from a fully sampled auxiliary modality.
In our MANet, the representations from the fully sampled auxiliary and undersampled target modalities are learned independently through a specific network.
Our MANet follows a hybrid domain learning framework, which allows it to simultaneously recover the frequency signal in the $k$-space domain.
arXiv Detail & Related papers (2021-10-15T13:16:59Z)
- LocalTrans: A Multiscale Local Transformer Network for Cross-Resolution Homography Estimation [52.63874513999119]
Cross-resolution image alignment is a key problem in multiscale giga photography.
Existing deep homography methods neglect the explicit formulation of correspondences between the inputs, which leads to degraded accuracy in cross-resolution challenges.
We propose a local transformer network embedded within a multiscale structure to explicitly learn correspondences between the multimodal inputs.
arXiv Detail & Related papers (2021-06-08T02:51:45Z)
- Self-Attentive Spatial Adaptive Normalization for Cross-Modality Domain Adaptation [9.659642285903418]
Cross-modality synthesis of medical images can reduce the costly annotation burden on radiologists.
We present a novel approach for image-to-image translation in medical images, capable of supervised or unsupervised (unpaired image data) setups.
arXiv Detail & Related papers (2021-03-05T16:22:31Z)
- Adversarial Uni- and Multi-modal Stream Networks for Multimodal Image Registration [20.637787406888478]
Deformable image registration between Computed Tomography (CT) images and Magnetic Resonance (MR) imaging is essential for many image-guided therapies.
In this paper, we propose a novel translation-based unsupervised deformable image registration method.
Our method has been evaluated on two clinical datasets and demonstrates promising results compared to state-of-the-art traditional and learning-based methods.
arXiv Detail & Related papers (2020-07-06T14:44:06Z)
- Pathological Retinal Region Segmentation From OCT Images Using Geometric Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset having images captured from different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.