$μ$NeuFMT: Optical-Property-Adaptive Fluorescence Molecular Tomography via Implicit Neural Representation
- URL: http://arxiv.org/abs/2511.04510v1
- Date: Thu, 06 Nov 2025 16:28:30 GMT
- Title: $μ$NeuFMT: Optical-Property-Adaptive Fluorescence Molecular Tomography via Implicit Neural Representation
- Authors: Shihan Zhao, Jianru Zhang, Yanan Wu, Linlin Li, Siyuan Shen, Xingjun Zhu, Guoyan Zheng, Jiahua Jiang, Wuwei Ren,
- Abstract summary: Fluorescence Molecular Tomography (FMT) is a promising technique for non-invasive 3D visualization of fluorescent probes. We propose $\mu$NeuFMT, a self-supervised FMT reconstruction framework that integrates implicit neural-based scene representation with explicit physical modeling of photon propagation. Our work establishes a new paradigm for robust and accurate FMT reconstruction, paving the way for more reliable molecular imaging in complex clinically relevant scenarios.
- Score: 17.533721844205687
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fluorescence Molecular Tomography (FMT) is a promising technique for non-invasive 3D visualization of fluorescent probes, but its reconstruction remains challenging due to the inherent ill-posedness and reliance on inaccurate or often-unknown tissue optical properties. While deep learning methods have shown promise, their supervised nature limits generalization beyond training data. To address these problems, we propose $\mu$NeuFMT, a self-supervised FMT reconstruction framework that integrates implicit neural-based scene representation with explicit physical modeling of photon propagation. Its key innovation lies in jointly optimizing both the fluorescence distribution and the optical properties ($\mu$) during reconstruction, eliminating the need for precise prior knowledge of tissue optics or pre-conditioned training data. We demonstrate that $\mu$NeuFMT robustly recovers accurate fluorophore distributions and optical coefficients even with severely erroneous initial values (0.5$\times$ to 2$\times$ of ground truth). Extensive numerical, phantom, and in vivo validations show that $\mu$NeuFMT outperforms conventional and supervised deep learning approaches across diverse heterogeneous scenarios. Our work establishes a new paradigm for robust and accurate FMT reconstruction, paving the way for more reliable molecular imaging in complex clinically relevant scenarios, such as fluorescence-guided surgery.
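The joint-optimization idea in the abstract can be illustrated with a toy sketch. This is not the authors' code: the linear forward model, the additive dependence of the system matrix on a scalar coefficient `mu`, and all parameter names are simplifying assumptions standing in for the diffusion-equation physics of photon propagation. A single measurement also leaves (`mu`, `x`) under-determined, which real FMT resolves with many source-detector pairs; the sketch only shows that both unknowns can be descended jointly from a badly wrong initial `mu`.

```python
import numpy as np

# Toy sketch of jointly optimizing the fluorescence distribution x and an
# optical coefficient mu by gradient descent on a self-supervised data-fit
# loss 0.5 * ||A(mu) @ x - y||^2. A(mu) = base + mu*I is an assumed,
# simplified stand-in for the physics-based forward model.
rng = np.random.default_rng(0)
n = 20
base = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # well-conditioned stand-in

def forward(mu, x):
    return (base + mu * np.eye(n)) @ x

mu_true = 0.5
x_true = rng.random(n)
y = forward(mu_true, x_true)      # simulated measurement

mu, x = 1.0, np.zeros(n)          # start with mu at 2x the ground truth
lr = 1e-2
for _ in range(5000):
    A = base + mu * np.eye(n)
    r = A @ x - y                 # residual of the data-fit loss
    x -= lr * (A.T @ r)           # gradient step on the fluorescence unknown
    mu -= lr * (r @ x)            # dA/dmu = I, so dL/dmu = r . x
```

After the loop, the jointly fitted pair reproduces the measurement closely even though `mu` was initialized at twice its true value, mirroring the robustness claim in the abstract.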
Related papers
- Optimal transport unlocks end-to-end learning for single-molecule localization [26.867838368484268]
Single-molecule localization microscopy allows reconstructing biology-relevant structures beyond the diffraction limit. Currently, efficient SMLM requires non-overlapping emitting fluorophores. Recent deep-learning approaches can handle denser emissions, but they rely on variants of non-maximum suppression. We propose an iterative neural network that integrates knowledge of the microscope's optical system inside our model.
arXiv Detail & Related papers (2025-12-11T14:30:16Z) - Accelerating 3D Photoacoustic Computed Tomography with End-to-End Physics-Aware Neural Operators [74.65171736966131]
Photoacoustic computed tomography (PACT) combines optical contrast with ultrasonic resolution, achieving deep-tissue imaging beyond the optical diffusion limit. Current implementations require dense transducer arrays and prolonged acquisition times, limiting clinical translation. We introduce Pano, an end-to-end physics-aware model that directly learns the inverse acoustic mapping from sensor measurements to volumetric reconstructions.
arXiv Detail & Related papers (2025-09-11T23:12:55Z) - MDiff-FMT: Morphology-aware Diffusion Model for Fluorescence Molecular Tomography with Small-scale Datasets [29.920562899648985]
Fluorescence molecular tomography (FMT) is a sensitive optical imaging technology widely used in biomedical research. The inverse problem poses a huge challenge to FMT reconstruction. We report for the first time a morphology-aware diffusion model, MDiff-FMT, based on the denoising diffusion probabilistic model (DDPM), to achieve high-fidelity morphological reconstruction.
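The generative backbone this paper builds on, the DDPM, can be sketched by its core reverse (denoising) step. This is not the MDiff-FMT code; `eps_model` stands in for a trained noise-prediction network, and the schedule is an assumed example.

```python
import numpy as np

# Standard DDPM reverse step: given a noisy sample x_t at timestep t and a
# noise-prediction network eps_model, form the posterior mean and (except at
# the final step) add fresh Gaussian noise to sample x_{t-1}.
def ddpm_reverse_step(x_t, t, eps_model, betas, rng):
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    eps = eps_model(x_t, t)  # network's estimate of the noise present in x_t
    mean = (x_t - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
    if t > 0:  # no noise is added at the final step
        return mean + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)
    return mean
```

Iterating this step from t = T-1 down to 0 turns Gaussian noise into a sample; in the paper's setting the process is adapted to the FMT reconstruction problem.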
arXiv Detail & Related papers (2024-10-09T10:41:31Z) - Neurovascular Segmentation in sOCT with Deep Learning and Synthetic Training Data [4.5276169699857505]
This study demonstrates a synthesis engine for neurovascular segmentation in serial-section optical coherence tomography images.
Our approach comprises two phases: label synthesis and label-to-image transformation.
We demonstrate the efficacy of the former by comparing it to several more realistic sets of training labels, and the latter by an ablation study of synthetic noise and artifact models.
arXiv Detail & Related papers (2024-07-01T16:09:07Z) - Improving Lens Flare Removal with General Purpose Pipeline and Multiple Light Sources Recovery [69.71080926778413]
Flare artifacts can affect image visual quality and downstream computer vision tasks.
Current methods do not consider automatic exposure and tone mapping in the image signal processing pipeline.
We propose a solution to improve the performance of lens flare removal by revisiting the ISP and designing a more reliable light-source recovery strategy.
arXiv Detail & Related papers (2023-08-31T04:58:17Z) - Fluctuation-based deconvolution in fluorescence microscopy using plug-and-play denoisers [2.236663830879273]
The spatial resolution of images of living samples obtained by fluorescence microscopes is physically limited due to the diffraction of visible light.
Several deconvolution and super-resolution techniques have been proposed to overcome this limitation.
arXiv Detail & Related papers (2023-03-20T15:43:52Z) - Retinexformer: One-stage Retinex-based Transformer for Low-light Image Enhancement [96.09255345336639]
We formulate a principled One-stage Retinex-based Framework (ORF) to enhance low-light images.
ORF first estimates the illumination information to light up the low-light image and then restores the corruption to produce the enhanced image.
Our algorithm, Retinexformer, significantly outperforms state-of-the-art methods on thirteen benchmarks.
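The estimate-then-restore recipe above follows the classical Retinex decomposition, which can be sketched as below. This is not the Retinexformer code: the box-filter illumination estimate is a simplifying assumption, whereas the paper learns the estimate with a transformer.

```python
import numpy as np

# Classical Retinex light-up step: estimate a smooth illumination map from
# the low-light image via a local mean (box filter), then divide it out to
# recover a reflectance-like, brightened image.
def retinex_light_up(img, kernel=3, eps=1e-4):
    pad = kernel // 2
    padded = np.pad(img, pad, mode="edge")
    illum = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            illum[i, j] = padded[i:i + kernel, j:j + kernel].mean()
    return img / (illum + eps)  # reflectance-like, lit-up image
```

A one-stage method such as ORF would follow this light-up with a learned restoration pass to remove the noise and color corruption that division amplifies.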
arXiv Detail & Related papers (2023-03-12T16:54:08Z) - Orientation-Shared Convolution Representation for CT Metal Artifact Learning [63.67718355820655]
During X-ray computed tomography (CT) scanning, metallic implants carried by patients often lead to adverse artifacts.
Existing deep-learning-based methods have gained promising reconstruction performance.
We propose an orientation-shared convolution representation strategy to adapt the physical prior structures of artifacts.
arXiv Detail & Related papers (2022-12-26T13:56:12Z) - DeStripe: A Self2Self Spatio-Spectral Graph Neural Network with Unfolded Hessian for Stripe Artifact Removal in Light-sheet Microscopy [40.223974943121874]
We propose a blind artifact removal algorithm in light-sheet fluorescence microscopy (LSFM) called DeStripe.
DeStripe localizes the potentially corrupted coefficients by exploiting the structural difference between unidirectional artifacts and foreground images.
Affected coefficients can then be fed into a graph neural network for recovery with a Hessian regularization unrolled to further ensure structures in the standard image space are well preserved.
arXiv Detail & Related papers (2022-06-27T16:13:57Z) - A parameter refinement method for Ptychography based on Deep Learning concepts [55.41644538483948]
Coarse parametrisation of propagation distance, position errors, and partial coherence frequently threatens experiment viability.
A modern Deep Learning framework is used to correct autonomously the setup incoherences, thus improving the quality of a ptychography reconstruction.
We tested our system on both synthetic datasets and also on real data acquired at the TwinMic beamline of the Elettra synchrotron facility.
arXiv Detail & Related papers (2021-05-18T10:15:17Z) - Data-driven generation of plausible tissue geometries for realistic photoacoustic image synthesis [53.65837038435433]
Photoacoustic tomography (PAT) has the potential to recover morphological and functional tissue properties.
We propose a novel approach to PAT data simulation, which we refer to as "learning to simulate".
We leverage the concept of Generative Adversarial Networks (GANs) trained on semantically annotated medical imaging data to generate plausible tissue geometries.
arXiv Detail & Related papers (2021-03-29T11:30:18Z) - Modeling and Enhancing Low-quality Retinal Fundus Images [167.02325845822276]
Low-quality fundus images increase uncertainty in clinical observation and lead to the risk of misdiagnosis.
We propose a clinically oriented fundus enhancement network (cofe-Net) to suppress global degradation factors.
Experiments on both synthetic and real images demonstrate that our algorithm effectively corrects low-quality fundus images without losing retinal details.
arXiv Detail & Related papers (2020-05-12T08:01:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.