Implicit Neural Representation in Medical Imaging: A Comparative Survey
- URL: http://arxiv.org/abs/2307.16142v1
- Date: Sun, 30 Jul 2023 06:39:25 GMT
- Title: Implicit Neural Representation in Medical Imaging: A Comparative Survey
- Authors: Amirali Molaei and Amirhossein Aminimehr and Armin Tavakoli and
Amirhossein Kazerouni and Bobby Azad and Reza Azad and Dorit Merhof
- Abstract summary: Implicit neural representations (INRs) have gained prominence as a powerful paradigm in scene reconstruction and computer graphics.
This survey aims to provide a comprehensive overview of INR models in the field of medical imaging.
- Score: 3.478921293603811
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Implicit neural representations (INRs) have gained prominence as a powerful
paradigm in scene reconstruction and computer graphics, demonstrating
remarkable results. By utilizing neural networks to parameterize data through
implicit continuous functions, INRs offer several benefits. Recognizing the
potential of INRs beyond these domains, this survey aims to provide a
comprehensive overview of INR models in the field of medical imaging. In
medical settings, numerous challenging and ill-posed problems exist, making
INRs an attractive solution. The survey explores the application of INRs in
various medical imaging tasks, such as image reconstruction, segmentation,
registration, novel view synthesis, and compression. It discusses the
advantages and limitations of INRs, highlighting their resolution-agnostic
nature, memory efficiency, ability to avoid locality biases, and
differentiability, enabling adaptation to different tasks. Furthermore, the
survey addresses the challenges and considerations specific to medical imaging
data, such as data availability, computational complexity, and dynamic clinical
scene analysis. It also identifies future research directions and
opportunities, including integration with multi-modal imaging, real-time and
interactive systems, and domain adaptation for clinical decision support. To
facilitate further exploration and implementation of INRs in medical image
analysis, we have provided a compilation of cited studies along with their
available open-source implementations on
https://github.com/mindflow-institue/Awesome-Implicit-Neural-Representations-in-Medical-imaging.
Finally, we aim to regularly incorporate the most recent and relevant papers.
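To make the core idea of parameterizing an image as an implicit continuous function concrete, the sketch below fits a small coordinate MLP with random Fourier features to a single 2-D image. This is only a minimal illustration of the general INR setup; the encoding, layer sizes, and training loop are assumptions for this sketch, not a model prescribed by the survey.

```python
# Minimal INR sketch: a coordinate MLP with random Fourier features overfitted to
# one 2-D image. All architectural choices here are illustrative assumptions.
import math
import torch
import torch.nn as nn

class FourierFeatures(nn.Module):
    """Map (x, y) coordinates to random Fourier features so the MLP can fit high frequencies."""
    def __init__(self, in_dim=2, n_features=64, scale=10.0):
        super().__init__()
        self.register_buffer("B", torch.randn(in_dim, n_features) * scale)

    def forward(self, coords):
        proj = 2 * math.pi * coords @ self.B
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

class INR(nn.Module):
    """Implicit representation: continuous coordinates in [-1, 1]^2 -> intensity value."""
    def __init__(self, hidden=256, n_features=64):
        super().__init__()
        self.encode = FourierFeatures(n_features=n_features)
        self.net = nn.Sequential(
            nn.Linear(2 * n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, coords):
        return self.net(self.encode(coords))

def fit_inr(image, steps=2000, lr=1e-3):
    """Overfit one INR to a single image tensor of shape (H, W) with values in [0, 1]."""
    h, w = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    targets = image.reshape(-1, 1)
    model = INR()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(coords), targets)
        loss.backward()
        opt.step()
    return model  # query on a denser coordinate grid to decode at any resolution
```

Because the fitted network can be queried at arbitrary continuous coordinates, the same weights decode the image at any sampling density, which is the resolution-agnostic property mentioned in the abstract.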
Related papers
- Unifying Subsampling Pattern Variations for Compressed Sensing MRI with Neural Operators [72.79532467687427]
Compressed Sensing MRI reconstructs images of the body's internal anatomy from undersampled and compressed measurements.
Deep neural networks have shown great potential for reconstructing high-quality images from highly undersampled measurements.
We propose a unified model that is robust to different subsampling patterns and image resolutions in CS-MRI.
arXiv Detail & Related papers (2024-10-05T20:03:57Z)
- SeCo-INR: Semantically Conditioned Implicit Neural Representations for Improved Medical Image Super-Resolution [25.078280843551322]
Implicit Neural Representations (INRs) have recently advanced the field of deep learning due to their ability to learn continuous representations of signals.
We propose a novel framework, referred to as the Semantically Conditioned INR (SeCo-INR), that conditions an INR using local priors from a medical image.
Our framework learns a continuous representation of the semantic segmentation features of a medical image and utilizes it to derive the optimal INR for each semantic region of the image.
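As a rough illustration of this kind of semantic conditioning (not the authors' exact SeCo-INR architecture), the sketch below samples a soft segmentation prior at each query coordinate and concatenates it with the coordinates before the MLP; the prior format, sampling scheme, and layer sizes are assumptions.

```python
# Hypothetical sketch of conditioning a coordinate MLP on local semantic priors.
# The prior is assumed to be a soft label map of shape (C, H, W); grid_sample
# interpolates it at continuous coordinates so each point is decoded together
# with its local semantic context. All dimensions are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticallyConditionedINR(nn.Module):
    def __init__(self, n_classes=4, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 + n_classes, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, coords, seg_prior):
        # coords: (N, 2) in [-1, 1]; seg_prior: (C, H, W) soft segmentation map.
        grid = coords.view(1, -1, 1, 2)                       # (1, N, 1, 2) for grid_sample
        local = F.grid_sample(seg_prior.unsqueeze(0), grid, align_corners=True)
        local = local.squeeze(0).squeeze(-1).t()              # (N, C) per-point semantics
        return self.net(torch.cat([coords, local], dim=-1))   # per-point intensity
```

In practice the coordinates could additionally be Fourier-encoded as in the earlier sketch.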
arXiv Detail & Related papers (2024-09-02T07:45:06Z)
- Applying Conditional Generative Adversarial Networks for Imaging Diagnosis [3.881664394416534]
This study introduces an innovative application of Conditional Generative Adversarial Networks (C-GAN) integrated with Stacked Hourglass Networks (SHGN).
We address the problem of overfitting, common in deep learning models applied to complex imaging datasets, by augmenting data through rotation and scaling.
A hybrid loss function combining L1 and L2 reconstruction losses, enriched with adversarial training, is introduced to refine segmentation processes in intravascular ultrasound (IVUS) imaging.
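A minimal sketch of such a hybrid objective is shown below, assuming a predicted map, a reference map, and discriminator logits for the prediction; the weighting factors are placeholder assumptions, not the values used in the paper.

```python
# Hypothetical sketch of a hybrid loss: L1 + L2 reconstruction terms plus an
# adversarial term from a discriminator. The weights are placeholder assumptions.
import torch
import torch.nn.functional as F

def hybrid_generator_loss(pred, target, disc_logits_on_pred,
                          w_l1=1.0, w_l2=1.0, w_adv=0.01):
    """pred/target: predicted and reference maps; disc_logits_on_pred: raw
    discriminator logits for the prediction."""
    l1 = F.l1_loss(pred, target)                 # robust, edge-preserving term
    l2 = F.mse_loss(pred, target)                # smooth term penalizing large errors
    # Non-saturating adversarial term: push the discriminator to label pred as real.
    adv = F.binary_cross_entropy_with_logits(
        disc_logits_on_pred, torch.ones_like(disc_logits_on_pred)
    )
    return w_l1 * l1 + w_l2 * l2 + w_adv * adv
```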
arXiv Detail & Related papers (2024-07-17T23:23:09Z)
- Enhance the Image: Super Resolution using Artificial Intelligence in MRI [10.00462384555522]
This chapter provides an overview of deep learning techniques for improving the spatial resolution of MRI.
We discuss challenges and potential future directions regarding the feasibility and reliability of deep learning-based MRI super-resolution.
arXiv Detail & Related papers (2024-06-19T15:19:41Z)
- HyperFusion: A Hypernetwork Approach to Multimodal Integration of Tabular and Medical Imaging Data for Predictive Modeling [4.44283662576491]
We present a novel framework based on hypernetworks to fuse clinical imaging and tabular data by conditioning the image processing on the EHR's values and measurements.
We show that our framework outperforms both single-modality models and state-of-the-art MRI-tabular data fusion methods.
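A toy sketch of the hypernetwork idea (not the HyperFusion implementation): a small MLP maps tabular values to the weights and bias of one linear layer applied to pooled image features, so the image branch is conditioned per sample on the clinical record. All dimensions are illustrative assumptions.

```python
# Toy hypernetwork-fusion sketch: tabular data generates the parameters of a
# layer that processes image features. Dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class HyperFusionHead(nn.Module):
    def __init__(self, tab_dim=16, img_dim=128, out_dim=64):
        super().__init__()
        self.img_dim, self.out_dim = img_dim, out_dim
        # Hypernetwork: tabular features -> weights and bias of a linear layer.
        self.hyper = nn.Sequential(
            nn.Linear(tab_dim, 256), nn.ReLU(),
            nn.Linear(256, img_dim * out_dim + out_dim),
        )

    def forward(self, img_feat, tab):
        # img_feat: (B, img_dim) pooled image features; tab: (B, tab_dim) clinical values.
        params = self.hyper(tab)
        w = params[:, : self.img_dim * self.out_dim].view(-1, self.out_dim, self.img_dim)
        b = params[:, self.img_dim * self.out_dim:]
        # Per-sample linear layer whose parameters depend on the tabular record.
        return torch.bmm(w, img_feat.unsqueeze(-1)).squeeze(-1) + b
```

The output features would feed a downstream predictor; generating the layer parameters, rather than simply concatenating modalities, is what distinguishes the hypernetwork-style conditioning.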
arXiv Detail & Related papers (2024-03-20T05:50:04Z)
- Radiology Report Generation Using Transformers Conditioned with Non-imaging Data [55.17268696112258]
This paper proposes a novel multi-modal transformer network that integrates chest x-ray (CXR) images and associated patient demographic information.
The proposed network uses a convolutional neural network to extract visual features from CXRs and a transformer-based encoder-decoder network that combines the visual features with semantic text embeddings of patient demographic information.
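As a rough sketch of this kind of conditioning (not the paper's exact network), the code below pools CNN features from a chest X-ray, prepends an embedding of demographic fields, and feeds the combined sequence to a standard transformer decoder that predicts report tokens; the backbone, module sizes, and vocabulary are assumptions, and positional encodings are omitted for brevity.

```python
# Rough sketch (illustrative dimensions, not the paper's architecture): CNN visual
# tokens plus a demographic token form the memory of a transformer decoder that
# generates report tokens.
import torch
import torch.nn as nn

class ReportGenerator(nn.Module):
    def __init__(self, vocab_size=10000, d_model=256, demo_dim=8):
        super().__init__()
        # Small CNN standing in for the visual feature extractor.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, d_model, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.demo_proj = nn.Linear(demo_dim, d_model)     # demographics -> one extra token
        self.token_emb = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, image, demographics, report_tokens):
        # image: (B, 1, H, W); demographics: (B, demo_dim); report_tokens: (B, T) token ids.
        feat = self.cnn(image)                             # (B, d_model, H', W')
        visual = feat.flatten(2).transpose(1, 2)           # (B, H'*W', d_model) patch tokens
        demo = self.demo_proj(demographics).unsqueeze(1)   # (B, 1, d_model)
        memory = torch.cat([demo, visual], dim=1)          # visual + non-imaging conditioning
        tgt = self.token_emb(report_tokens)                # (B, T, d_model)
        t = report_tokens.size(1)
        causal = torch.triu(torch.full((t, t), float("-inf")), diagonal=1)
        return self.out(self.decoder(tgt, memory, tgt_mask=causal))  # (B, T, vocab) logits
```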
arXiv Detail & Related papers (2023-11-18T14:52:26Z)
- Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction. We show how to unfold an iterative MGDUN algorithm into a model-guided deep unfolding network by incorporating the MRI observation matrix.
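As a generic illustration of deep unfolding (not the MGDUN architecture itself), the sketch below unrolls a few gradient steps for an observation model y = A(x): each iteration applies a data-consistency update using the observation operator, followed by a small learned refinement network. The operator, network, and step count are placeholder assumptions.

```python
# Generic deep-unfolding sketch: unroll gradient steps for y = A(x), alternating a
# data-consistency update with a learned refinement CNN. All choices are
# illustrative assumptions, not the MGDUN design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnfoldingSR(nn.Module):
    def __init__(self, n_iters=5, scale=2):
        super().__init__()
        self.n_iters, self.scale = n_iters, scale
        self.step = nn.Parameter(torch.tensor(0.5))       # learned gradient step size
        self.refine = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 3, padding=1),
            )
            for _ in range(n_iters)
        ])

    def A(self, x):
        # Observation model: high-res estimate -> low-res measurement (simple downsampling).
        return F.avg_pool2d(x, self.scale)

    def A_t(self, y):
        # Rough adjoint: map the low-res residual back to the high-res grid.
        return F.interpolate(y, scale_factor=self.scale, mode="nearest")

    def forward(self, y):
        # y: (B, 1, h, w) low-res observation; x: current high-res estimate.
        x = F.interpolate(y, scale_factor=self.scale, mode="bilinear", align_corners=False)
        for k in range(self.n_iters):
            grad = self.A_t(self.A(x) - y)                # data-consistency gradient
            x = x - self.step * grad                      # gradient step
            x = x + self.refine[k](x)                     # learned refinement step
        return x
```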
arXiv Detail & Related papers (2022-09-15T03:58:30Z)
- Pathology-Aware Generative Adversarial Networks for Medical Image Augmentation [0.22843885788439805]
Generative Adversarial Networks (GANs) can generate realistic but novel samples, and thus effectively cover the real image distribution.
This thesis contains four GAN projects aiming to present such novel applications' clinical relevance in collaboration with physicians.
arXiv Detail & Related papers (2021-06-03T15:08:14Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Domain Shift in Computer Vision models for MRI data analysis: An Overview [64.69150970967524]
Machine learning and computer vision methods are showing good performance in medical imagery analysis.
Yet only a few applications are now in clinical use.
Poor transferability of the models to data from different sources or acquisition domains is one of the reasons for this.
arXiv Detail & Related papers (2020-10-14T16:34:21Z)
- Explaining Clinical Decision Support Systems in Medical Imaging using Cycle-Consistent Activation Maximization [112.2628296775395]
Clinical decision support using deep neural networks has become a topic of steadily growing interest.
However, clinicians are often hesitant to adopt the technology because its underlying decision-making process is considered opaque and difficult to comprehend.
We propose a novel decision explanation scheme based on CycleGAN activation maximization, which generates high-quality visualizations of classifier decisions even on smaller data sets.
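The paper constrains its visualizations with a CycleGAN; as a simpler, generic illustration of the underlying activation-maximization idea, the sketch below optimizes an input image so that a trained classifier's logit for a chosen class increases. The classifier, image size, and regularization are placeholder assumptions.

```python
# Generic activation-maximization sketch (the CycleGAN constraint used in the
# paper is omitted): optimize the input so that a trained classifier's logit for
# a target class increases. The classifier is a placeholder assumption.
import torch

def activation_maximization(classifier, target_class, shape=(1, 1, 128, 128),
                            steps=200, lr=0.05, l2_weight=1e-3):
    classifier.eval()
    x = torch.zeros(shape, requires_grad=True)     # start from a neutral image
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logit = classifier(x)[0, target_class]
        # Maximize the target logit; a small L2 penalty keeps intensities bounded.
        loss = -logit + l2_weight * x.pow(2).mean()
        loss.backward()
        opt.step()
    return x.detach()  # visualization of what drives the classifier's decision
```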
arXiv Detail & Related papers (2020-10-09T14:39:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.