Multi-modality imaging with structure-promoting regularisers
- URL: http://arxiv.org/abs/2007.11689v1
- Date: Wed, 22 Jul 2020 21:26:37 GMT
- Title: Multi-modality imaging with structure-promoting regularisers
- Authors: Matthias J. Ehrhardt
- Abstract summary: A key tool for understanding and early diagnosis of cancer and dementia is PET-MR, a combined positron emission tomography and magnetic resonance imaging scanner.
In this chapter we discuss mathematical approaches which allow us to combine information from several imaging modalities so that multi-modality imaging can be more than just the sum of its components.
- Score: 0.27074235008521236
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Imaging with multiple modalities or multiple channels is becoming
increasingly important for our modern society. A key tool for understanding and
early diagnosis of cancer and dementia is PET-MR, a combined positron emission
tomography and magnetic resonance imaging scanner which can simultaneously
acquire functional and anatomical data. Similarly in remote sensing, while
hyperspectral sensors can characterise and distinguish materials,
digital cameras offer high spatial resolution to delineate objects. In both of
these examples, the imaging modalities can be considered individually or
jointly. In this chapter we discuss mathematical approaches which allow us to
combine information from several imaging modalities so that multi-modality
imaging can be more than just the sum of its components.
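One widely used family of structure-promoting regularisers couples the modalities by penalising their image gradients jointly, so that edges in one modality are encouraged to align with edges in the other. The following is a minimal illustrative sketch of joint total variation in NumPy (one common choice; not necessarily the specific formulation of the chapter, and the function name `joint_tv` is hypothetical):

```python
import numpy as np

def joint_tv(u, v, eps=1e-8):
    """Joint total variation of two 2D images u and v.

    Penalises the pointwise Euclidean norm of the stacked gradients
    of both images, which favours reconstructions whose edges align
    across the two modalities.  eps smooths the norm at zero.
    """
    def grad(x):
        # forward differences, replicating the last row/column at the boundary
        gx = np.diff(x, axis=0, append=x[-1:, :])
        gy = np.diff(x, axis=1, append=x[:, -1:])
        return gx, gy

    ux, uy = grad(u)
    vx, vy = grad(v)
    # pointwise norm over all four gradient components, summed over pixels
    return np.sum(np.sqrt(ux**2 + uy**2 + vx**2 + vy**2 + eps))
```

Because the gradient components are combined inside a single pointwise norm, a shared edge is penalised less than two separate edges, which is what makes the regulariser "structure-promoting" rather than treating each channel independently.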
Related papers
- An Ensemble Approach for Brain Tumor Segmentation and Synthesis [0.12777007405746044]
The integration of machine learning in magnetic resonance imaging (MRI) is proving to be highly effective.
Deep learning models utilize multiple layers of processing to capture intricate details of complex data.
We propose a deep learning framework that ensembles state-of-the-art architectures to achieve accurate segmentation.
arXiv Detail & Related papers (2024-11-26T17:28:51Z)
- Multi-sensor Learning Enables Information Transfer across Different Sensory Data and Augments Multi-modality Imaging [21.769547352111957]
We investigate a data-driven multi-modality imaging (DMI) strategy for synergetic imaging of CT and MRI.
We reveal two distinct types of features in multi-modality imaging, namely intra- and inter-modality features, and present a multi-sensor learning (MSL) framework.
We showcase the effectiveness of our DMI strategy through synergetic CT-MRI brain imaging.
arXiv Detail & Related papers (2024-09-28T17:40:54Z)
- Towards General Text-guided Image Synthesis for Customized Multimodal Brain MRI Generation [51.28453192441364]
Multimodal brain magnetic resonance (MR) imaging is indispensable in neuroscience and neurology.
Current MR image synthesis approaches are typically trained on independent datasets for specific tasks.
We present TUMSyn, a Text-guided Universal MR image Synthesis model, which can flexibly generate brain MR images.
arXiv Detail & Related papers (2024-09-25T11:14:47Z)
- On Sensitivity and Robustness of Normalization Schemes to Input Distribution Shifts in Automatic MR Image Diagnosis [58.634791552376235]
Deep Learning (DL) models have achieved state-of-the-art performance in diagnosing multiple diseases using reconstructed images as input.
DL models are sensitive to varying artifacts, as these lead to shifts in the input data distribution between the training and testing phases.
We propose to use other normalization techniques, such as Group Normalization and Layer Normalization, to inject robustness into model performance against varying image artifacts.
arXiv Detail & Related papers (2023-06-23T03:09:03Z)
- Uncertainty-Aware Multi-Parametric Magnetic Resonance Image Information Fusion for 3D Object Segmentation [12.361668672097753]
We propose an uncertainty-aware multi-parametric MR image feature fusion method to fully exploit the information for enhanced 3D image segmentation.
Our proposed method achieves better segmentation performance when compared to existing models.
arXiv Detail & Related papers (2022-11-16T09:16:52Z)
- Multimodal Multi-Head Convolutional Attention with Various Kernel Sizes for Medical Image Super-Resolution [56.622832383316215]
We propose a novel multi-head convolutional attention module to super-resolve CT and MRI scans.
Our attention module uses the convolution operation to perform joint spatial-channel attention on multiple input tensors.
We introduce multiple attention heads, each head having a distinct receptive field size corresponding to a particular reduction rate for the spatial attention.
arXiv Detail & Related papers (2022-04-08T07:56:55Z)
- Transformer-empowered Multi-scale Contextual Matching and Aggregation for Multi-contrast MRI Super-resolution [55.52779466954026]
Multi-contrast super-resolution (SR) reconstruction is promising to yield SR images with higher quality.
Existing methods lack effective mechanisms to match and fuse these features for better reconstruction.
We propose a novel network to address these problems by developing a set of innovative Transformer-empowered multi-scale contextual matching and aggregation techniques.
arXiv Detail & Related papers (2022-03-26T01:42:59Z)
- Multi-modal Aggregation Network for Fast MR Imaging [85.25000133194762]
We propose a novel Multi-modal Aggregation Network, named MANet, which is capable of discovering complementary representations from a fully sampled auxiliary modality.
In our MANet, the representations from the fully sampled auxiliary and undersampled target modalities are learned independently through a specific network.
Our MANet follows a hybrid domain learning framework, which allows it to simultaneously recover the frequency signal in the $k$-space domain and restore image details in the image domain.
arXiv Detail & Related papers (2021-10-15T13:16:59Z)
- Semantic segmentation of multispectral photoacoustic images using deep learning [53.65837038435433]
Photoacoustic imaging has the potential to revolutionise healthcare.
Clinical translation of the technology requires conversion of the high-dimensional acquired data into clinically relevant and interpretable information.
We present a deep learning-based approach to semantic segmentation of multispectral photoacoustic images.
arXiv Detail & Related papers (2021-05-20T09:33:55Z)
- Robust Image Reconstruction with Misaligned Structural Information [0.27074235008521236]
We propose a variational framework which jointly performs reconstruction and registration.
Our approach is the first to achieve this for different modalities and outperforms established approaches in terms of accuracy of both reconstruction and registration.
arXiv Detail & Related papers (2020-04-01T17:21:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.