Dehazing Light Microscopy Images with Guided Conditional Flow Matching: finding a sweet spot between fidelity and realism
- URL: http://arxiv.org/abs/2506.22397v3
- Date: Tue, 01 Jul 2025 09:23:16 GMT
- Title: Dehazing Light Microscopy Images with Guided Conditional Flow Matching: finding a sweet spot between fidelity and realism
- Authors: Anirban Ray, Ashesh, Florian Jug
- Abstract summary: We propose HazeMatching, a novel iterative method for dehazing light microscopy images. Our goal was to find a balanced trade-off between the fidelity of the dehazing results and the realism of individual predictions. Our method is compared against 7 baselines, achieving a consistent balance between fidelity and realism on average.
- Score: 6.87862884496602
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fluorescence microscopy is a major driver of scientific progress in the life sciences. Although high-end confocal microscopes are capable of filtering out-of-focus light, cheaper and more accessible microscopy modalities, such as widefield microscopy, cannot, which leads to hazy image data. Computational dehazing aims to combine the best of both worlds, leading to cheap microscopy but crisp-looking images. The perception-distortion trade-off tells us that we can optimize either for data fidelity, e.g. low MSE or high PSNR, or for data realism, measured by perceptual metrics such as LPIPS or FID. Existing methods either prioritize fidelity at the expense of realism, or produce perceptually convincing results that lack quantitative accuracy. In this work, we propose HazeMatching, a novel iterative method for dehazing light microscopy images, which effectively balances these objectives. Our goal was to find a balanced trade-off between the fidelity of the dehazing results and the realism of individual predictions (samples). We achieve this by adapting the conditional flow matching framework, guiding the generative process with a hazy observation in the conditional velocity field. We evaluate HazeMatching on 5 datasets, covering both synthetic and real data, assessing both distortion and perceptual quality. Our method is compared against 7 baselines, achieving a consistent balance between fidelity and realism on average. Additionally, with calibration analysis, we show that HazeMatching produces well-calibrated predictions. Note that our method does not need an explicit degradation operator to exist, making it easily applicable to real microscopy data. All data used for training and evaluation and our code will be publicly available under a permissive license.
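The guided conditional flow matching idea described in the abstract can be sketched in a few lines: train a network to regress the velocity of a straight path between noise and a clean image while conditioning on the hazy observation, then integrate the learned velocity field at inference time. Everything below is an illustrative assumption, not the paper's implementation: the function names, the straight (optimal-transport) interpolation path, and the plain Euler integrator.

```python
import numpy as np

def cfm_training_pair(clean, hazy, rng):
    """Build one regression pair for guided conditional flow matching.

    A network v_theta(x_t, t, hazy) would be trained to regress v_target;
    the conditioning on `hazy` is what makes the velocity field guided.
    """
    t = rng.uniform()                         # random time in [0, 1]
    noise = rng.standard_normal(clean.shape)  # source sample x_0 ~ N(0, I)
    x_t = (1.0 - t) * noise + t * clean       # point on the straight path
    v_target = clean - noise                  # constant velocity of that path
    return (x_t, t, hazy), v_target

def euler_sample(velocity_fn, hazy, steps=10, x0=None, rng=None):
    """Integrate the learned velocity field from t=0 to t=1 with Euler
    steps, conditioning every step on the hazy observation."""
    if x0 is None:
        x0 = (rng or np.random.default_rng()).standard_normal(np.shape(hazy))
    x, dt = x0, 1.0 / steps
    for i in range(steps):
        x = x + dt * velocity_fn(x, i * dt, hazy)
    return x
```

Because sampling is iterative, the number of integration steps trades compute for sample quality, which is one reason such methods can draw multiple plausible dehazed samples per input.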
Related papers
- Unsupervised Imaging Inverse Problems with Diffusion Distribution Matching [35.01013208265617]
This work addresses image restoration tasks through the lens of inverse problems using unpaired datasets.
The proposed method operates under minimal assumptions and relies only on small, unpaired datasets.
It is particularly well-suited for real-world scenarios, where the forward model is often unknown or misspecified.
arXiv Detail & Related papers (2025-06-17T15:06:43Z)
- A Bias-Free Training Paradigm for More General AI-generated Image Detection [15.421102443599773]
A well-designed forensic detector should detect generator-specific artifacts rather than reflect data biases.
We propose B-Free, a bias-free training paradigm, where fake images are generated from real ones.
We show significant improvements in both generalization and robustness over state-of-the-art detectors.
arXiv Detail & Related papers (2024-12-23T15:54:32Z)
- LMHaze: Intensity-aware Image Dehazing with a Large-scale Multi-intensity Real Haze Dataset [14.141433473509826]
We present LMHaze, a large-scale, high-quality real-world dataset.
LMHaze comprises paired hazy and haze-free images captured in diverse indoor and outdoor environments.
To better handle images with different haze intensities, we propose a mixture-of-experts model based on Mamba.
arXiv Detail & Related papers (2024-10-21T15:20:02Z)
- Deep Domain Adaptation: A Sim2Real Neural Approach for Improving Eye-Tracking Systems [80.62854148838359]
Eye image segmentation is a critical step in eye tracking that has great influence over the final gaze estimate.
We use dimensionality-reduction techniques to measure the overlap between the target eye images and synthetic training data.
Our methods result in robust, improved performance when tackling the discrepancy between simulation and real-world data samples.
arXiv Detail & Related papers (2024-03-23T22:32:06Z)
- Learned, uncertainty-driven adaptive acquisition for photon-efficient scanning microscopy [12.356716251834566]
We propose a method to simultaneously denoise and predict pixel-wise uncertainty for scanning microscopy systems.
We demonstrate our method on experimental confocal and multiphoton microscopy systems, showing that our uncertainty maps can pinpoint hallucinations in the deep learned predictions.
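One standard recipe for this kind of pixel-wise uncertainty is heteroscedastic regression: the network predicts a per-pixel mean and log-variance and is trained with a Gaussian negative log-likelihood. Whether this paper uses exactly this loss is an assumption; the sketch below only illustrates the general recipe.

```python
import numpy as np

def gaussian_nll(pred_mean, pred_log_var, target):
    """Per-pixel heteroscedastic Gaussian negative log-likelihood
    (up to an additive constant). Predicting log-variance keeps the
    variance positive without an explicit constraint; the exp of the
    predicted log-variance then serves as a pixel-wise uncertainty map."""
    var = np.exp(pred_log_var)
    return 0.5 * (pred_log_var + (target - pred_mean) ** 2 / var)
```

Pixels where the model hallucinates tend to incur large residuals at test time, so a well-trained variance head learns to flag them with high predicted uncertainty.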
arXiv Detail & Related papers (2023-10-24T18:06:03Z)
- Optimizations of Autoencoders for Analysis and Classification of Microscopic In Situ Hybridization Images [68.8204255655161]
We propose a deep-learning framework to detect and classify areas of microscopic images with similar levels of gene expression.
The data we analyze requires an unsupervised learning model for which we employ a type of Artificial Neural Network - Deep Learning Autoencoders.
arXiv Detail & Related papers (2023-04-19T13:45:28Z)
- Integrating Prior Knowledge in Contrastive Learning with Kernel [4.050766659420731]
We use kernel theory to propose a novel loss, called decoupled uniformity, that i) allows the integration of prior knowledge and ii) removes the negative-positive coupling in the original InfoNCE loss.
In an unsupervised setting, we empirically demonstrate that CL benefits from generative models to improve its representation both on natural and medical images.
arXiv Detail & Related papers (2022-06-03T15:43:08Z)
- Imposing Consistency for Optical Flow Estimation [73.53204596544472]
Imposing consistency through proxy tasks has been shown to enhance data-driven learning.
This paper introduces novel and effective consistency strategies for optical flow estimation.
arXiv Detail & Related papers (2022-04-14T22:58:30Z)
- From Synthetic to Real: Image Dehazing Collaborating with Unlabeled Real Data [58.50411487497146]
We propose a novel image dehazing framework collaborating with unlabeled real data.
First, we develop a disentangled image dehazing network (DID-Net), which disentangles the feature representations into three component maps.
Then a disentangled-consistency mean-teacher network (DMT-Net) is employed to collaborate unlabeled real data for boosting single image dehazing.
arXiv Detail & Related papers (2021-08-06T04:00:28Z)
- Stereo Matching by Self-supervision of Multiscopic Vision [65.38359887232025]
We propose a new self-supervised framework for stereo matching utilizing multiple images captured at aligned camera positions.
A cross photometric loss, an uncertainty-aware mutual-supervision loss, and a new smoothness loss are introduced to optimize the network.
Our model obtains better disparity maps than previous unsupervised methods on the KITTI dataset.
arXiv Detail & Related papers (2021-04-09T02:58:59Z)
- Optical Flow Dataset Synthesis from Unpaired Images [36.158607790844705]
We introduce a novel method to build a training set of pseudo-real images that can be used to train optical flow in a supervised manner.
Our dataset uses two unpaired frames from real data and creates pairs of frames by simulating random warps.
We thus obtain the benefit of directly training on real data while having access to an exact ground truth.
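The unpaired-synthesis idea can be illustrated with a toy version: warp a real frame by a known transform, so the ground-truth flow is exact by construction. Below, a random global integer shift stands in for the paper's random warps; the function name and the warp family are illustrative assumptions.

```python
import numpy as np

def random_shift_pair(img, max_shift=3, rng=None):
    """Make a (frame1, frame2, flow) triple from a single real image
    via a random global shift. Because the warp is applied by us, the
    dense flow field is known exactly (ignoring the wrap-around at the
    borders introduced by np.roll in this toy version)."""
    rng = rng or np.random.default_rng()
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    frame2 = np.roll(img, shift=(int(dy), int(dx)), axis=(0, 1))  # warped copy
    # every pixel moves by (dx, dy); stored as a dense (H, W, 2) flow field
    flow = np.broadcast_to(np.array([dx, dy], dtype=float),
                           img.shape[:2] + (2,)).copy()
    return img, frame2, flow
```

Real warp-based synthesis would use richer transforms (affine, thin-plate, per-object motion) and handle disocclusions, but the principle is the same: training targets come for free because the deformation is simulated, while the image content stays real.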
arXiv Detail & Related papers (2021-04-02T22:19:47Z)
- Practical sensorless aberration estimation for 3D microscopy with deep learning [1.6662996732774467]
We show that neural networks trained only on simulated data yield accurate predictions for real experimental images.
We also study the predictability of individual aberrations with respect to their data requirements and find that the symmetry of the wavefront plays a crucial role.
arXiv Detail & Related papers (2020-06-02T17:39:32Z)
- FD-GAN: Generative Adversarial Networks with Fusion-discriminator for Single Image Dehazing [48.65974971543703]
We propose a fully end-to-end Generative Adversarial Networks with Fusion-discriminator (FD-GAN) for image dehazing.
Our model can generate more natural and realistic dehazed images with less color distortion and fewer artifacts.
Experiments have shown that our method reaches state-of-the-art performance on both public synthetic datasets and real-world images.
arXiv Detail & Related papers (2020-01-20T04:36:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.