Anatomical feature-prioritized loss for enhanced MR to CT translation
- URL: http://arxiv.org/abs/2410.10328v2
- Date: Thu, 24 Oct 2024 20:28:53 GMT
- Title: Anatomical feature-prioritized loss for enhanced MR to CT translation
- Authors: Arthur Longuefosse, Baudouin Denis de Senneville, Gael Dournes, Ilyes Benlala, Pascal Desbarats, Fabien Baldacci
- Abstract summary: Traditional methods for image translation and synthesis are generally optimized for global image reconstruction.
This study introduces a novel anatomical feature-prioritized (AFP) loss function into the synthesis process.
The AFP loss function can replace or complement global reconstruction methods, ensuring a balanced emphasis on both global image fidelity and local structural details.
- Score: 0.0479796063938004
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In medical image synthesis, the precision of localized structural details is crucial, particularly when addressing specific clinical requirements such as the identification and measurement of fine structures. Traditional methods for image translation and synthesis are generally optimized for global image reconstruction but often fall short in providing the finesse required for detailed local analysis. This study represents a step toward addressing this challenge by introducing a novel anatomical feature-prioritized (AFP) loss function into the synthesis process. This method enhances reconstruction by focusing on clinically significant structures, utilizing features from a pre-trained model designed for a specific downstream task, such as the segmentation of particular anatomical regions. The AFP loss function can replace or complement global reconstruction methods, ensuring a balanced emphasis on both global image fidelity and local structural details. Various implementations of this loss function are explored, including its integration into different synthesis networks such as GAN-based and CNN-based models. Our approach is applied and evaluated in two contexts: lung MR to CT translation, focusing on high-quality reconstruction of bronchial structures, using a private dataset; and pelvis MR to CT synthesis, targeting the accurate representation of organs and muscles, utilizing a public dataset from the Synthrad2023 challenge. We leverage embeddings from pre-trained segmentation models specific to these anatomical regions to demonstrate the capability of the AFP loss to prioritize and accurately reconstruct essential features. This tailored approach shows promising potential for enhancing the specificity and practicality of medical image synthesis in clinical applications.
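To make the idea above concrete, below is a minimal PyTorch sketch of an AFP-style loss, assuming a frozen pre-trained segmentation network wrapped as `seg_feature_extractor` that returns a list of intermediate feature maps; the wrapper interface, layer choice, and the weights `lambda_afp` and `lambda_global` are illustrative placeholders rather than the authors' actual configuration.

```python
import torch
import torch.nn as nn


class AFPLoss(nn.Module):
    """Illustrative anatomical feature-prioritized (AFP) style loss.

    Feature maps from a frozen, pre-trained segmentation network are compared
    between the synthesized CT and the reference CT, while a global voxel-wise
    L1 term can be kept alongside. The extractor interface, layer selection,
    and weights are assumptions for illustration, not the paper's exact setup.
    """

    def __init__(self, seg_feature_extractor: nn.Module,
                 lambda_afp: float = 1.0, lambda_global: float = 1.0):
        super().__init__()
        self.features = seg_feature_extractor.eval()
        for p in self.features.parameters():
            p.requires_grad_(False)  # keep the segmentation model frozen
        self.lambda_afp = lambda_afp
        self.lambda_global = lambda_global
        self.l1 = nn.L1Loss()

    def forward(self, synth_ct: torch.Tensor, real_ct: torch.Tensor) -> torch.Tensor:
        # Global reconstruction term (whole-image fidelity).
        loss = self.lambda_global * self.l1(synth_ct, real_ct)
        # Anatomical feature term: distance between segmentation-network
        # embeddings, emphasizing the structures that network was trained on.
        with torch.no_grad():
            target_feats = self.features(real_ct)   # assumed to return a list of feature maps
        synth_feats = self.features(synth_ct)       # gradients flow back to the generator
        for f_s, f_t in zip(synth_feats, target_feats):
            loss = loss + self.lambda_afp * self.l1(f_s, f_t)
        return loss
```

In the two settings described in the abstract, the frozen network would be a segmentation model trained on the relevant anatomy, e.g., bronchial structures for lung MR-to-CT translation or pelvic organs and muscles for the Synthrad2023 challenge data.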
Related papers
- GAN-Based Architecture for Low-dose Computed Tomography Imaging Denoising [1.0138723409205497]
Generative Adversarial Networks (GANs) have surfaced as a revolutionary element within the domain of low-dose computed tomography (LDCT) imaging.
This comprehensive review synthesizes the rapid advancements in GAN-based LDCT denoising techniques.
arXiv Detail & Related papers (2024-11-14T15:26:10Z)
- A Quantitative Evaluation of Dense 3D Reconstruction of Sinus Anatomy from Monocular Endoscopic Video [8.32570164101507]
We perform a quantitative analysis of a self-supervised approach for sinus reconstruction using endoscopic sequences and optical tracking.
Our results show that the generated reconstructions are in high agreement with the anatomy, yielding an average point-to-mesh error of 0.91 mm.
We identify that pose and depth estimation inaccuracies contribute equally to this error and that locally consistent sequences with shorter trajectories generate more accurate reconstructions.
arXiv Detail & Related papers (2023-10-22T17:11:40Z)
- K-Space-Aware Cross-Modality Score for Synthesized Neuroimage Quality Assessment [71.27193056354741]
The problem of how to assess cross-modality medical image synthesis has been largely unexplored.
We propose a new metric K-CROSS to spur progress on this challenging problem.
K-CROSS uses a pre-trained multi-modality segmentation network to predict the lesion location.
arXiv Detail & Related papers (2023-07-10T01:26:48Z)
- Abdominal organ segmentation via deep diffeomorphic mesh deformations [5.4173776411667935]
Abdominal organ segmentation from CT and MRI is an essential prerequisite for surgical planning and computer-aided navigation systems.
We employ template-based mesh reconstruction methods for joint liver, kidney, pancreas, and spleen segmentation.
The resulting method, UNetFlow, generalizes well to all four organs and can be easily fine-tuned on new data.
arXiv Detail & Related papers (2023-06-27T14:41:18Z)
- Region-based Contrastive Pretraining for Medical Image Retrieval with Anatomic Query [56.54255735943497]
We introduce RegionMIR, a novel region-based contrastive pretraining approach for medical image retrieval with anatomic queries.
arXiv Detail & Related papers (2023-05-09T16:46:33Z)
- A Hybrid Approach to Full-Scale Reconstruction of Renal Arterial Network [5.953404851562665]
We propose a hybrid framework to build subject-specific models of the renal vascular network.
We use semi-automated segmentation of large arteries and estimation of cortex area from a micro-CT scan as a starting point.
Our results show a statistical correspondence between the reconstructed data and existing anatomical data obtained from a rat kidney.
arXiv Detail & Related papers (2023-03-03T10:39:25Z)
- Orientation-Shared Convolution Representation for CT Metal Artifact Learning [63.67718355820655]
During X-ray computed tomography (CT) scanning, metallic implants carried by patients often lead to adverse artifacts.
Existing deep-learning-based methods have gained promising reconstruction performance.
We propose an orientation-shared convolution representation strategy to adapt to the physical prior structure of the artifacts.
arXiv Detail & Related papers (2022-12-26T13:56:12Z)
- InDuDoNet+: A Model-Driven Interpretable Dual Domain Network for Metal Artifact Reduction in CT Images [53.4351366246531]
We construct a novel interpretable dual domain network, termed InDuDoNet+, into which the CT imaging process is finely embedded.
We analyze the CT values among different tissues and merge these prior observations into a prior network for our InDuDoNet+, which significantly improves its generalization performance.
arXiv Detail & Related papers (2021-12-23T15:52:37Z)
- Specificity-Preserving Federated Learning for MR Image Reconstruction [94.58912814426122]
Federated learning can be used to improve data privacy and efficiency in magnetic resonance (MR) image reconstruction.
Recent FL techniques tend to address differences across clients by enhancing the generalization of the global model.
We propose a specificity-preserving FL algorithm for MR image reconstruction (FedMRI).
arXiv Detail & Related papers (2021-12-09T22:13:35Z)
- Data-driven generation of plausible tissue geometries for realistic photoacoustic image synthesis [53.65837038435433]
Photoacoustic tomography (PAT) has the potential to recover morphological and functional tissue properties.
We propose a novel approach to PAT data simulation, which we refer to as "learning to simulate".
We leverage the concept of Generative Adversarial Networks (GANs) trained on semantically annotated medical imaging data to generate plausible tissue geometries.
arXiv Detail & Related papers (2021-03-29T11:30:18Z)
- Learning to Segment Anatomical Structures Accurately from One Exemplar [34.287877547953194]
Methods that can produce accurate anatomical structure segmentations without requiring a large amount of fully annotated training images are highly desirable.
We propose the Contour Transformer Network (CTN), a one-shot anatomy segmentor with a naturally built-in human-in-the-loop mechanism.
We demonstrate that our one-shot learning method significantly outperforms non-learning-based methods and performs competitively to the state-of-the-art fully supervised deep learning approaches.
arXiv Detail & Related papers (2020-07-06T20:27:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.