Anatomy-constrained modelling of image-derived input functions in dynamic PET using multi-organ segmentation
- URL: http://arxiv.org/abs/2504.17114v1
- Date: Wed, 23 Apr 2025 21:47:05 GMT
- Title: Anatomy-constrained modelling of image-derived input functions in dynamic PET using multi-organ segmentation
- Authors: Valentin Langer, Kartikay Tehlan, Thomas Wendler
- Abstract summary: Accurate kinetic analysis of [$^{18}$F]FDG distribution in dynamic positron emission tomography (PET) requires anatomically constrained modelling of image-derived input functions (IDIFs). This study proposes a multi-organ segmentation-based approach that integrates IDIFs from the aorta, portal vein, pulmonary artery, and ureters.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate kinetic analysis of [$^{18}$F]FDG distribution in dynamic positron emission tomography (PET) requires anatomically constrained modelling of image-derived input functions (IDIFs). Traditionally, IDIFs are obtained from the aorta, neglecting anatomical variations and complex vascular contributions. This study proposes a multi-organ segmentation-based approach that integrates IDIFs from the aorta, portal vein, pulmonary artery, and ureters. Using high-resolution CT segmentations of the liver, lungs, kidneys, and bladder, we incorporate organ-specific blood supply sources to improve kinetic modelling. Our method was evaluated on dynamic [$^{18}$F]FDG PET data from nine patients, resulting in a mean squared error (MSE) reduction of $13.39\%$ for the liver and $10.42\%$ for the lungs. These initial results highlight the potential of multiple IDIFs in improving anatomical modelling and fully leveraging dynamic PET imaging. This approach could facilitate the integration of tracer kinetic modelling into clinical routine.
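The dual-input idea behind the liver result can be sketched in code: the liver IDIF is a weighted mix of aortic and portal-vein curves, fed into a one-tissue compartment model whose parameters are fitted by least squares. This is a minimal illustrative sketch, not the authors' implementation; the arterial fraction, synthetic curves, parameter grids, and function names are all assumptions.

```python
# Minimal sketch of dual-input kinetic modelling for the liver.
# Assumptions (not from the paper): one-tissue model, grid-search fit,
# a fixed arterial fraction, and synthetic noiseless IDIF shapes.
import math

def tissue_tac(c_in, dt, K1, k2):
    """One-tissue model: C_T(t) = K1 * integral of C_in(s) * exp(-k2*(t-s)) ds."""
    tac = []
    for i in range(len(c_in)):
        acc = 0.0
        for j in range(i + 1):
            acc += c_in[j] * math.exp(-k2 * (i - j) * dt) * dt
        tac.append(K1 * acc)
    return tac

def fit_one_tissue(c_in, c_meas, dt, K1_grid, k2_grid):
    """Grid-search least-squares fit of (K1, k2) to a measured tissue curve."""
    best = None
    for K1 in K1_grid:
        for k2 in k2_grid:
            model = tissue_tac(c_in, dt, K1, k2)
            sse = sum((m - y) ** 2 for m, y in zip(model, c_meas))
            if best is None or sse < best[0]:
                best = (sse, K1, k2)
    return best[1], best[2]

# Synthetic IDIFs (assumed shapes): an aortic bolus and a delayed,
# dispersed portal-vein curve; the liver is supplied mostly portally.
dt, n = 1.0, 40
aorta = [t * math.exp(-t / 4.0) for t in (i * dt for i in range(n))]
portal = [0.0] * 3 + aorta[: n - 3]
f_art = 0.25  # assumed arterial fraction of hepatic blood supply
liver_input = [f_art * a + (1 - f_art) * p for a, p in zip(aorta, portal)]

# Generate a noiseless liver TAC with known parameters, then refit.
true_K1, true_k2 = 0.6, 0.2
liver_tac = tissue_tac(liver_input, dt, true_K1, true_k2)
K1_hat, k2_hat = fit_one_tissue(liver_input, liver_tac, dt,
                                [0.2, 0.4, 0.6, 0.8], [0.1, 0.2, 0.3])
print(K1_hat, k2_hat)  # recovers 0.6 0.2 on this noiseless example
```

On real data the fit would use a continuous optimizer and noise-aware weighting; the grid search here only keeps the sketch dependency-free.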
Related papers
- A robust and versatile deep learning model for prediction of the arterial input function in dynamic small animal $\left[^{18}\text{F}\right]$FDG PET imaging
This work proposes a non-invasive, fully convolutional deep learning-based approach (FC-DLIF) to predict input functions directly from PET imaging.
FC-DLIF includes a spatial feature extractor acting on the volumetric time frames of the PET sequence, and a temporal feature extractor that predicts the arterial input function.
Our deep learning-based input function offers a non-invasive and reliable alternative to arterial blood sampling, proving robust and flexible to temporal shifts and different scan durations.
arXiv Detail & Related papers (2025-07-03T06:55:41Z) - Personalized MR-Informed Diffusion Models for 3D PET Image Reconstruction
We propose a simple method for generating subject-specific PET images from a dataset of PET-MR scans.
The images we synthesize retain information from the subject's MR scan, leading to higher resolution and the retention of anatomical features.
With simulated and real [$^{18}$F]FDG datasets, we show that pre-training a personalized diffusion model with subject-specific "pseudo-PET" images improves reconstruction accuracy with low-count data.
arXiv Detail & Related papers (2025-06-04T10:24:14Z) - Cascaded 3D Diffusion Models for Whole-body 3D 18-F FDG PET/CT synthesis from Demographics
We propose a cascaded 3D diffusion model framework to synthesize high-fidelity 3D PET/CT volumes directly from demographic variables.
An initial score-based diffusion model synthesizes low-resolution PET/CT volumes from demographic variables alone.
This is followed by a super-resolution residual diffusion model that refines spatial resolution.
arXiv Detail & Related papers (2025-05-28T15:38:33Z) - Physiological neural representation for personalised tracer kinetic parameter estimation from dynamic PET
We propose a physiological neural representation based on implicit neural representations (INRs) for personalized kinetic parameter estimation.
INRs, which learn continuous functions, allow for efficient, high-resolution parametric imaging with reduced data requirements.
Our findings highlight the potential of INRs for personalized, data-efficient tracer kinetic modelling, enabling applications in tumour characterization, segmentation, and prognostic assessment.
arXiv Detail & Related papers (2025-04-23T22:12:04Z) - Dynamic PET Image Prediction Using a Network Combining Reversible and Irreversible Modules
This study proposes a dynamic frame prediction method for dynamic PET imaging.
The network can predict kinetic parameter images based on the early frames of dynamic PET images.
arXiv Detail & Related papers (2024-10-30T03:52:21Z) - KaLDeX: Kalman Filter based Linear Deformable Cross Attention for Retina Vessel Segmentation
We propose a novel network (KaLDeX) for vascular segmentation leveraging a Kalman filter based linear deformable cross attention (LDCA) module.
Our approach is based on two key components: Kalman filter (KF) based linear deformable convolution (LD) and cross-attention (CA) modules.
The proposed method is evaluated on retinal fundus image datasets (DRIVE, CHASE_DB1, and STARE) as well as the 3mm and 6mm subsets of the OCTA-500 dataset.
arXiv Detail & Related papers (2024-10-28T16:00:42Z) - KLDD: Kalman Filter based Linear Deformable Diffusion Model in Retinal Image Segmentation
This paper proposes a novel Kalman filter based Linear Deformable Diffusion (KLDD) model for retinal vessel segmentation.
Our model employs a diffusion process that iteratively refines the segmentation, leveraging the flexible receptive fields of deformable convolutions.
The method is evaluated on retinal fundus image datasets (DRIVE, CHASE_DB1) and the 3mm and 6mm subsets of the OCTA-500 dataset.
arXiv Detail & Related papers (2024-09-19T14:21:38Z) - PEMMA: Parameter-Efficient Multi-Modal Adaptation for Medical Image Segmentation
When both CT and PET scans are available, it is common to combine them as two channels of the input to the segmentation model.
This method requires both scan types during training and inference, posing a challenge due to the limited availability of PET scans.
We propose a parameter-efficient multi-modal adaptation framework for lightweight upgrading of a transformer-based segmentation model.
arXiv Detail & Related papers (2024-04-21T16:29:49Z) - Revolutionizing Disease Diagnosis with simultaneous functional PET/MR and Deeply Integrated Brain Metabolic, Hemodynamic, and Perfusion Networks
We propose MX-ARM, a multimodal MiXture-of-experts Alignment Reconstruction and Model.
It is modality detachable and exchangeable, allocating different multi-layer perceptrons dynamically ("mixture of experts") through learnable weights to learn respective representations from different modalities.
arXiv Detail & Related papers (2024-03-29T08:47:49Z) - Contrastive Diffusion Model with Auxiliary Guidance for Coarse-to-Fine
PET Reconstruction
This paper presents a coarse-to-fine PET reconstruction framework that consists of a coarse prediction module (CPM) and an iterative refinement module (IRM).
By delegating most of the computational overhead to the CPM, the overall sampling speed of our method can be significantly improved.
Two additional strategies, i.e., an auxiliary guidance strategy and a contrastive diffusion strategy, are proposed and integrated into the reconstruction process.
arXiv Detail & Related papers (2023-08-20T04:10:36Z) - Diffusion Models for Counterfactual Generation and Anomaly Detection in Brain Images
We present a weakly supervised method to generate a healthy version of a diseased image and then use it to obtain a pixel-wise anomaly map.
We employ a diffusion model trained on healthy samples and combine the Denoising Diffusion Probabilistic Model (DDPM) and the Denoising Diffusion Implicit Model (DDIM) at each step of the sampling process.
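The final step of this pipeline, turning a diseased image and its healthy counterfactual into a pixel-wise anomaly map, can be sketched directly; the absolute-difference formulation, nested-list image representation, and function name are illustrative assumptions rather than the paper's exact computation:

```python
# Hedged sketch: once a generative model produces a healthy counterfactual
# of a diseased image, a pixel-wise anomaly map is simply the per-pixel
# absolute difference between the two images.
def pixelwise_anomaly_map(diseased, counterfactual):
    """Return |diseased - counterfactual| for 2D images given as nested lists."""
    return [[abs(d - c) for d, c in zip(d_row, c_row)]
            for d_row, c_row in zip(diseased, counterfactual)]

amap = pixelwise_anomaly_map([[1.0, 0.5]], [[0.25, 0.5]])
print(amap)  # [[0.75, 0.0]]
```

High values in the map flag pixels the model could not reproduce from its healthy training distribution, which is the weak supervision signal the abstract describes.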
arXiv Detail & Related papers (2023-08-03T21:56:50Z) - Whole-Body Lesion Segmentation in 18F-FDG PET/CT
The proposed model is designed on the basis of the joint 2D and 3D nnUNET architecture to predict lesions across the whole body.
We evaluate the proposed method in the context of AutoPet Challenge, which measures the lesion segmentation performance in the metrics of dice score, false-positive volume and false-negative volume.
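The three challenge metrics named above can be sketched in a simplified voxel-wise form. Note this is an assumption-laden approximation: the official AutoPET metric computes false-positive and false-negative volumes per connected component, and the function name and voxel volume here are illustrative.

```python
# Simplified voxel-wise versions of Dice score, false-positive volume,
# and false-negative volume for binary lesion masks.
# Assumptions: flat 0/1 label lists and a 0.001 ml voxel volume.
def lesion_metrics(pred, truth, voxel_ml=0.001):
    """Return (dice, false-positive volume in ml, false-negative volume in ml)."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    # Define Dice as 1.0 when both masks are empty (no lesion, none predicted).
    dice = 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
    return dice, fp * voxel_ml, fn * voxel_ml

print(lesion_metrics([1, 1, 0, 0], [1, 0, 1, 0]))  # (0.5, 0.001, 0.001)
```
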
arXiv Detail & Related papers (2022-09-16T10:49:53Z) - Three-dimensional micro-structurally informed in silico myocardium --
towards virtual imaging trials in cardiac diffusion weighted MRI
We propose a novel method to generate a realistic numerical phantom of myocardial microstructure.
In-silico tissue models enable evaluating quantitative models of magnetic resonance imaging.
arXiv Detail & Related papers (2022-08-22T22:01:44Z) - Translational Lung Imaging Analysis Through Disentangled Representations
We present a model capable of extracting disentangled information from images of different animal models and the mechanisms that generate the images.
It is optimized on images of pathological lungs infected by Tuberculosis and is able, from an input slice, to infer its position within the volume, the animal model to which it belongs, and the damage present, and furthermore to generate a mask covering the whole lung.
arXiv Detail & Related papers (2022-03-03T11:56:20Z) - Learning Tubule-Sensitive CNNs for Pulmonary Airway and Artery-Vein
Segmentation in CT
Training convolutional neural networks (CNNs) for segmentation of pulmonary airway, artery, and vein is challenging.
We present a CNNs-based method for accurate airway and artery-vein segmentation in non-contrast computed tomography.
It enjoys superior sensitivity to tenuous peripheral bronchioles, arterioles, and venules.
arXiv Detail & Related papers (2020-12-10T15:56:08Z) - Rethinking the Extraction and Interaction of Multi-Scale Features for
Vessel Segmentation
We propose a novel deep learning model called PC-Net to segment retinal vessels and major arteries in 2D fundus image and 3D computed tomography angiography (CTA) scans.
In PC-Net, the pyramid squeeze-and-excitation (PSE) module introduces spatial information to each convolutional block, boosting its ability to extract more effective multi-scale features.
arXiv Detail & Related papers (2020-10-09T08:22:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.