Diffusion-based Virtual Staining from Polarimetric Mueller Matrix Imaging
- URL: http://arxiv.org/abs/2503.01352v1
- Date: Mon, 03 Mar 2025 09:45:27 GMT
- Title: Diffusion-based Virtual Staining from Polarimetric Mueller Matrix Imaging
- Authors: Xiaoyu Zheng, Jing Wen, Jiaxin Zhuang, Yao Du, Jing Cong, Limei Guo, Chao He, Lin Luo, Hao Chen
- Abstract summary: We propose a Regulated Bridge Diffusion Model (RBDM) for polarization-based virtual staining. RBDM learns the mapping from polarization images to other modalities such as H&E and fluorescence. Experimental results show that our model greatly outperforms other benchmark methods.
- Score: 8.016374085889172
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Polarization, as an emerging optical imaging tool, has been explored to assist in the diagnosis of pathology. Converting the polarimetric Mueller Matrix (MM) to standardized stained images is a promising way to help pathologists interpret the results. However, existing methods for polarization-based virtual staining are still at an early stage, and diffusion-based models, which have shown great potential for enhancing the fidelity of generated images, have not yet been studied in this setting. In this paper, a Regulated Bridge Diffusion Model (RBDM) for polarization-based virtual staining is proposed. RBDM uses a bidirectional bridge diffusion process to learn the mapping from polarization images to other modalities such as H&E and fluorescence. To demonstrate the effectiveness of our model, we conduct experiments on our manually collected dataset of 18,000 paired polarization, fluorescence, and H&E images, since no public dataset is available. The experimental results show that our model greatly outperforms other benchmark methods. Our dataset and code will be released upon acceptance.
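For readers new to bridge diffusion, a minimal sketch may help. Assuming a standard Brownian-bridge formulation (the paper's exact noise schedule and regulation terms are not specified in this abstract), an intermediate state interpolates between the source polarization image and the target stained image, with noise that vanishes at both endpoints:

```python
import numpy as np

def brownian_bridge_sample(x0, x1, t, rng, sigma=1.0):
    """Sample an intermediate state x_t on a Brownian bridge between a
    source image x0 (e.g. polarization) and a target image x1 (e.g. H&E).

    The mean interpolates linearly; the noise variance sigma^2 * t * (1 - t)
    is zero at both endpoints, so x_t equals x0 at t=0 and x1 at t=1.
    """
    x0 = np.asarray(x0, dtype=float)
    x1 = np.asarray(x1, dtype=float)
    mean = (1.0 - t) * x0 + t * x1
    std = sigma * np.sqrt(t * (1.0 - t))
    return mean + std * rng.standard_normal(x0.shape)

# A trained bridge model would learn to reverse this process from x1 back
# toward x0 (and, bidirectionally, the other way around).
rng = np.random.default_rng(0)
x_pol = np.zeros((4, 4))   # stand-in for a polarization channel
x_he = np.ones((4, 4))     # stand-in for an H&E channel
x_mid = brownian_bridge_sample(x_pol, x_he, 0.5, rng)
```

The pinned endpoints are what distinguish a bridge process from ordinary diffusion, which only ever ends at pure noise.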
Related papers
- Polarization Uncertainty-Guided Diffusion Model for Color Polarization Image Demosaicking [3.5335358134182937]
CPDM aims to reconstruct full-resolution polarization images of four directions from the color-polarization filter array (CPFA) raw image. We introduce the image diffusion prior from text-to-image (T2I) models to overcome the performance bottleneck of network-based methods.
arXiv Detail & Related papers (2026-02-27T09:39:07Z) - Direct Dual-Energy CT Material Decomposition using Model-based Denoising Diffusion Model [105.95160543743984]
We propose a deep learning procedure called Dual-Energy Decomposition Model-based Diffusion (DEcomp-MoD) for quantitative material decomposition. We show that DEcomp-MoD outperforms state-of-the-art unsupervised score-based models and supervised deep learning networks.
arXiv Detail & Related papers (2025-07-24T01:00:06Z) - PolarAnything: Diffusion-based Polarimetric Image Synthesis [59.14294818211059]
We propose PolarAnything, capable of synthesizing polarization images from a single RGB input with both photorealism and physical accuracy. Experiments show that our model generates high-quality polarization images and supports downstream tasks like shape from polarization.
arXiv Detail & Related papers (2025-07-23T07:09:10Z) - Shape from Polarization of Thermal Emission and Reflection [2.7317088388886384]
We leverage the Shape from Polarization (SfP) technique in the Long-Wave Infrared (LWIR) spectrum, where most materials are opaque and emissive. We formulate a polarization model that explicitly accounts for the combined effects of emission and reflection, implement a prototype system, and create ThermoPol, the first real-world benchmark dataset for LWIR SfP.
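The polarization cues that SfP methods consume are typically the degree and angle of linear polarization computed from Stokes components. A textbook sketch of those quantities (the paper's LWIR model additionally separates emission and reflection contributions, which is not shown here):

```python
import math

def stokes_from_intensities(i0, i45, i90, i135):
    """Estimate linear Stokes components from intensities measured
    behind polarizers at 0, 45, 90, and 135 degrees."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    return s0, s1, s2

def dolp_aolp(s0, s1, s2):
    """Degree and angle of linear polarization from Stokes components.
    These are the standard inputs to Shape-from-Polarization pipelines."""
    dolp = math.sqrt(s1 * s1 + s2 * s2) / s0
    aolp = 0.5 * math.atan2(s2, s1)  # radians
    return dolp, aolp
```

For fully polarized horizontal light (i0=1, i90=0, i45=i135=0.5), this yields a degree of polarization of 1 and an angle of 0.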
arXiv Detail & Related papers (2025-06-23T00:33:17Z) - Beyond H&E: Unlocking Pathological Insights with Polarization via Self-supervised Learning [9.290835226997961]
Histopathology is fundamental to digital pathology, with hematoxylin and eosin staining as the gold standard for diagnostic and prognostic assessments.
While H&E imaging effectively highlights cellular and tissue structures, it lacks sensitivity to birefringence and tissue anisotropy.
We propose PolarHE, a dual modality fusion framework that integrates H&E with polarization imaging.
arXiv Detail & Related papers (2025-03-05T05:00:19Z) - Zigzag Diffusion Sampling: Diffusion Models Can Self-Improve via Self-Reflection [28.82743020243849]
Existing text-to-image diffusion models often fail to maintain high image quality and high prompt-image alignment for challenging prompts.
We propose diffusion self-reflection that alternately performs denoising and inversion.
We derive Zigzag Diffusion Sampling (Z-Sampling), a novel self-reflection-based diffusion sampling method.
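The zigzag control flow can be sketched independently of any particular diffusion model. In this illustrative version, `denoise_step` and `invert_step` are hypothetical stand-ins for a guided denoising step and a DDIM-style inversion step:

```python
def zigzag_sample(x, denoise_step, invert_step, timesteps):
    """One zigzag pass per timestep: denoise t -> t-1 (with guidance),
    invert back t-1 -> t, then denoise again and keep the second result.

    `denoise_step(x, t)` and `invert_step(x, t)` are placeholders for a
    real diffusion model's guided denoising and inversion; injecting them
    lets the self-reflection loop be shown in isolation.
    """
    for t in timesteps:
        x_prev = denoise_step(x, t)      # guided denoising step
        x_back = invert_step(x_prev, t)  # inversion exposes the guidance gap
        x = denoise_step(x_back, t)      # second, self-reflected denoise
    return x
```

The intuition from the paper is that the denoise/invert round trip lets the sampler observe and correct its own guidance error before committing to the next timestep.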
arXiv Detail & Related papers (2024-12-14T16:42:41Z) - Fundus image enhancement through direct diffusion bridges [44.31666331817371]
We propose FD3, a fundus image enhancement method based on direct diffusion bridges.
We first propose a synthetic forward model through a human feedback loop with board-certified ophthalmologists.
We train a robust and flexible diffusion-based image enhancement network that is highly effective as a stand-alone method.
arXiv Detail & Related papers (2024-09-19T00:26:14Z) - Deep Learning Based Speckle Filtering for Polarimetric SAR Images. Application to Sentinel-1 [51.404644401997736]
We propose a complete framework to remove speckle in polarimetric SAR images using a convolutional neural network.
Experiments show that the proposed approach offers exceptional results in both speckle reduction and resolution preservation.
arXiv Detail & Related papers (2024-08-28T10:07:17Z) - Ambient Diffusion Posterior Sampling: Solving Inverse Problems with Diffusion Models trained on Corrupted Data [56.81246107125692]
Ambient Diffusion Posterior Sampling (A-DPS) is a generative model pre-trained on one type of corruption.
We show that A-DPS can sometimes outperform models trained on clean data for several image restoration tasks in both speed and performance.
We extend the Ambient Diffusion framework to train MRI models with access only to Fourier subsampled multi-coil MRI measurements.
arXiv Detail & Related papers (2024-03-13T17:28:20Z) - LLDiffusion: Learning Degradation Representations in Diffusion Models for Low-Light Image Enhancement [118.83316133601319]
Current deep learning methods for low-light image enhancement (LLIE) typically rely on pixel-wise mapping learned from paired data.
We propose a degradation-aware learning scheme for LLIE using diffusion models, which effectively integrates degradation and image priors into the diffusion process.
arXiv Detail & Related papers (2023-07-27T07:22:51Z) - Detecting Images Generated by Deep Diffusion Models using their Local Intrinsic Dimensionality [8.968599131722023]
Diffusion models have been successfully applied for the visual synthesis of strikingly realistic appearing images.
This raises strong concerns about their potential for malicious purposes.
We propose using the lightweight multi Local Intrinsic Dimensionality (multiLID) for the automatic detection of synthetic images.
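multiLID builds on the classical maximum-likelihood LID estimator of Levina and Bickel, applied across the feature maps of a network (the exact multiLID feature pipeline is an assumption not detailed in this summary). The core estimator in its standard form:

```python
import math

def lid_mle(distances):
    """Maximum-likelihood local intrinsic dimensionality estimate from
    the sorted distances to a point's k nearest neighbours:

        LID = -k / sum_i log(r_i / r_k)

    multiLID computes such estimates per layer of a feature extractor
    and feeds the resulting vector to a simple classifier.
    """
    k = len(distances)
    r_k = distances[-1]  # distance to the k-th (farthest) neighbour
    s = sum(math.log(r / r_k) for r in distances)
    return -k / s
```

Low estimates indicate that samples concentrate on a low-dimensional manifold locally, a property the detection method exploits to separate synthetic from natural images.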
arXiv Detail & Related papers (2023-07-05T15:03:10Z) - Low-Light Image Enhancement with Wavelet-based Diffusion Models [50.632343822790006]
Diffusion models have achieved promising results in image restoration tasks, yet they suffer from time-consuming inference, excessive computational resource consumption, and unstable restoration.
We propose a robust and efficient Diffusion-based Low-Light image enhancement approach, dubbed DiffLL.
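Part of the efficiency comes from running diffusion in a wavelet domain, where the low-frequency band is smaller than the full image. As a rough illustration, assuming a standard one-level Haar transform (the paper's exact transform is not specified in this summary):

```python
def haar_1d(x):
    """One-level orthonormal 1-D Haar transform: pairwise averages
    (low band) and differences (high band). In 2-D, diffusing only the
    low band quarters the spatial size, which is the kind of saving a
    wavelet-domain diffusion model exploits."""
    s = 2 ** -0.5
    low = [s * (a + b) for a, b in zip(x[0::2], x[1::2])]
    high = [s * (a - b) for a, b in zip(x[0::2], x[1::2])]
    return low, high

def ihaar_1d(low, high):
    """Exact inverse of haar_1d."""
    s = 2 ** -0.5
    x = []
    for l, h in zip(low, high):
        x.extend([s * (l + h), s * (l - h)])
    return x
```

Because the transform is orthonormal and exactly invertible, the enhanced low band and the (cheaply processed) high bands can be recombined without reconstruction loss from the transform itself.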
arXiv Detail & Related papers (2023-06-01T03:08:28Z) - Persistently Trained, Diffusion-assisted Energy-based Models [18.135784288023928]
We introduce diffusion data and learn a joint EBM, called diffusion-assisted EBMs, through persistent training.
We show that persistently trained EBMs can simultaneously achieve long-run stability, post-training image generation, and superior out-of-distribution detection.
arXiv Detail & Related papers (2023-04-21T02:29:18Z) - DIRE for Diffusion-Generated Image Detection [128.95822613047298]
We propose a novel representation called DIffusion Reconstruction Error (DIRE).
DIRE measures the error between an input image and its reconstruction counterpart by a pre-trained diffusion model.
This suggests that DIRE can serve as a bridge to distinguish generated images from real ones.
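The idea can be sketched in a few lines. Here `reconstruct` is a hypothetical stand-in for the invert-then-denoise round trip through a pre-trained diffusion model:

```python
def dire(x, reconstruct):
    """DIffusion Reconstruction Error: per-pixel absolute difference
    between an image and its reconstruction through a pre-trained
    diffusion model (inversion followed by denoising)."""
    x_rec = reconstruct(x)
    return [[abs(a - b) for a, b in zip(row_x, row_r)]
            for row_x, row_r in zip(x, x_rec)]

def is_generated(x, reconstruct, threshold):
    """Images that already lie on the diffusion model's manifold
    reconstruct almost exactly, so a small mean DIRE suggests the
    image was generated by a diffusion model."""
    err = dire(x, reconstruct)
    flat = [v for row in err for v in row]
    return sum(flat) / len(flat) < threshold
```

The threshold would in practice be replaced by a classifier trained on DIRE maps; the scalar cut here is only to make the decision rule concrete.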
arXiv Detail & Related papers (2023-03-16T13:15:03Z) - Unifying Diffusion Models' Latent Space, with Applications to CycleDiffusion and Guidance [95.12230117950232]
We show that a common latent space emerges from two diffusion models trained independently on related domains.
Applying CycleDiffusion to text-to-image diffusion models, we show that large-scale text-to-image diffusion models can be used as zero-shot image-to-image editors.
arXiv Detail & Related papers (2022-10-11T15:53:52Z) - Hyperspectral-Multispectral Image Fusion with Weighted LASSO [68.04032419397677]
We propose an approach for fusing hyperspectral and multispectral images to provide high-quality hyperspectral output.
We demonstrate that the proposed sparse fusion and reconstruction provides quantitatively superior results when compared to existing methods on publicly available images.
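A weighted LASSO of this kind is commonly solved with iterative soft-thresholding (ISTA). A minimal sketch under that assumption (the paper's actual fusion operator and weighting scheme are far larger and not reproduced here):

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of the weighted l1 norm: shrink toward zero."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def weighted_lasso_ista(A, b, weights, step, n_iter=500):
    """Minimise 0.5*||Ax - b||^2 + sum_i weights[i]*|x_i| by ISTA:
    a gradient step on the quadratic term followed by a weighted
    soft-threshold. Sparse-fusion methods solve problems of this form
    to combine hyperspectral and multispectral measurements."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - step * grad, step * weights)
    return x
```

With `A` the identity, the solution reduces to soft-thresholding `b` by the weights, which gives an easy sanity check on the implementation.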
arXiv Detail & Related papers (2020-03-15T23:07:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.