Lightweight CycleGAN Models for Cross-Modality Image Transformation and Experimental Quality Assessment in Fluorescence Microscopy
- URL: http://arxiv.org/abs/2510.15579v1
- Date: Fri, 17 Oct 2025 12:20:18 GMT
- Title: Lightweight CycleGAN Models for Cross-Modality Image Transformation and Experimental Quality Assessment in Fluorescence Microscopy
- Authors: Mohammad Soltaninezhad, Yashar Rouzbahani, Jhonatan Contreras, Rohan Chippalkatti, Daniel Kwaku Abankwa, Christian Eggeling, Thomas Bocklitz
- Abstract summary: We present a lightweight CycleGAN for modality transfer in fluorescence microscopy (confocal to super-resolution STED/deconvolved STED). By replacing the traditional channel-doubling strategy in the U-Net-based generator with a fixed-channel approach, we drastically reduce trainable parameters from 41.8 million to approximately nine thousand, achieving superior performance with faster training and lower memory usage.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lightweight deep learning models offer substantial reductions in computational cost and environmental impact, making them crucial for scientific applications. We present a lightweight CycleGAN for modality transfer in fluorescence microscopy (confocal to super-resolution STED/deconvolved STED), addressing the common challenge of unpaired datasets. By replacing the traditional channel-doubling strategy in the U-Net-based generator with a fixed-channel approach, we drastically reduce trainable parameters from 41.8 million to approximately nine thousand, achieving superior performance with faster training and lower memory usage. We also introduce the GAN as a diagnostic tool for experimental and labeling quality. When trained on high-quality images, the GAN learns the characteristics of optimal imaging; deviations between its generated outputs and new experimental images can reveal issues such as photobleaching, artifacts, or inaccurate labeling. This establishes the model as a practical tool for validating experimental accuracy and image fidelity in microscopy workflows.
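The parameter savings claimed in the abstract follow directly from convolution arithmetic: a 3x3 conv layer has about c_in * c_out * 9 weights, so fixing the channel width instead of doubling it at every level shrinks the count by several orders of magnitude. A back-of-the-envelope sketch (the specific widths 64..1024 and 8, and the two-convs-per-level encoder layout, are illustrative assumptions, not values taken from the paper):

```python
def conv_params(c_in, c_out, k=3):
    """Parameters of one k x k convolution layer (weights + biases)."""
    return c_in * c_out * k * k + c_out

def unet_encoder_params(channels):
    """Rough parameter count for a U-Net-style encoder with two
    convolutions per resolution level, given per-level channel widths."""
    total, c_in = 0, 1  # assume a single-channel grayscale input
    for c in channels:
        total += conv_params(c_in, c) + conv_params(c, c)
        c_in = c
    return total

# Conventional channel-doubling widths vs. a fixed narrow width.
doubling = [64, 128, 256, 512, 1024]
fixed = [8, 8, 8, 8, 8]

print(f"channel-doubling encoder: {unet_encoder_params(doubling):,} params")
print(f"fixed-channel encoder:    {unet_encoder_params(fixed):,} params")
```

The doubling configuration already reaches the tens of millions before a decoder is added, consistent in magnitude with the 41.8 million baseline reported above, while the fixed-width variant stays in the thousands, the regime of the reported ~nine-thousand-parameter generator.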
Related papers
- Diffusion-Based Synthetic Brightfield Microscopy Images for Enhanced Single Cell Detection [0.0]
We investigate the use of unconditional models to generate synthetic brightfield microscopy images. A U-Net based diffusion model was trained and used to create datasets with varying ratios of synthetic and real images. Experiments with YOLOv8, YOLOv9 and RT-DETR reveal that training with synthetic data can achieve improved detection accuracies.
arXiv Detail & Related papers (2025-11-25T08:57:23Z)
- High-Throughput Low-Cost Segmentation of Brightfield Microscopy Live Cell Images [3.175346985850522]
This study focuses on segmenting unstained live cells imaged with bright-field microscopy. We developed a low-cost CNN-based pipeline incorporating comparative analysis of frozen encoders. The model was validated on a public dataset featuring diverse live cell variants.
arXiv Detail & Related papers (2025-08-17T22:05:58Z)
- Ultra-Resolution Adaptation with Ease [62.56434979517156]
We propose a set of key guidelines for ultra-resolution adaptation, termed URAE. We show that tuning minor components of the weight matrices outperforms widely-used low-rank adapters when synthetic data are unavailable. Experiments validate that URAE achieves comparable 2K-generation performance to state-of-the-art closed-source models like FLUX1.1 [Pro] Ultra with only 3K samples and 2K iterations.
arXiv Detail & Related papers (2025-03-20T16:44:43Z)
- Grayscale to Hyperspectral at Any Resolution Using a Phase-Only Lens [0.0]
We study the problem of reconstructing an HxWx31 hyperspectral image from an HxW grayscale snapshot measurement. We make efficient use of limited data by training a conditional denoising diffusion model that operates on small patches. Experiments reveal that patch sizes as small as the PSF's support achieve excellent results.
arXiv Detail & Related papers (2024-12-03T20:00:21Z)
- Understanding and Improving Training-Free AI-Generated Image Detections with Vision Foundation Models [68.90917438865078]
Deepfake techniques for facial synthesis and editing pose serious risks for generative models. In this paper, we investigate how detection performance varies across model backbones, types, and datasets. We introduce Contrastive Blur, which enhances performance on facial images, and MINDER, which addresses noise type bias, balancing performance across domains.
arXiv Detail & Related papers (2024-11-28T13:04:45Z)
- Enhancing Fluorescence Lifetime Parameter Estimation Accuracy with Differential Transformer Based Deep Learning Model Incorporating Pixelwise Instrument Response Function [0.3441582801949978]
Fluorescence Lifetime Imaging (FLI) provides unique information about the tissue microenvironment. Recent advancements in Deep Learning have enabled improved fluorescence lifetime parameter estimation. We present MFliNet, a novel DL architecture that integrates the Instrument Response Function (IRF) as an additional input alongside experimental photon time-of-arrival histograms.
arXiv Detail & Related papers (2024-11-25T20:03:41Z)
- LUCYD: A Feature-Driven Richardson-Lucy Deconvolution Network [0.31402652384742363]
This paper proposes LUCYD, a novel method for the restoration of volumetric microscopy images.
LUCYD combines the Richardson-Lucy deconvolution formula and the fusion of deep features obtained by a fully convolutional network.
Our experiments indicate that LUCYD can significantly improve resolution, contrast, and overall quality of microscopy images.
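For context, the Richardson-Lucy scheme that LUCYD builds on iteratively multiplies the current estimate by a back-projected ratio of observed to predicted intensities. A textbook NumPy sketch of the plain, non-learned iteration (independent of the paper's network, and using FFT-based convolution for simplicity):

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=30, eps=1e-12):
    """Classic Richardson-Lucy deconvolution using FFT-based convolution."""
    psf = psf / psf.sum()  # PSF must be normalized to conserve flux
    otf = np.fft.rfftn(np.fft.ifftshift(psf), s=observed.shape)
    # Start from a flat estimate with the same mean intensity.
    estimate = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        # Predicted image under the current estimate.
        blurred = np.fft.irfftn(np.fft.rfftn(estimate) * otf, s=observed.shape)
        ratio = observed / (blurred + eps)
        # Back-project the ratio through the flipped PSF (conjugate OTF).
        correction = np.fft.irfftn(np.fft.rfftn(ratio) * np.conj(otf),
                                   s=observed.shape)
        estimate = estimate * correction
    return estimate
```

With a delta PSF the update reduces to the identity after one step; with a realistic PSF, resolution and contrast improve over iterations, subject to the well-known trade-off of noise amplification at high iteration counts.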
arXiv Detail & Related papers (2023-07-16T10:34:23Z)
- OADAT: Experimental and Synthetic Clinical Optoacoustic Data for Standardized Image Processing [62.993663757843464]
Optoacoustic (OA) imaging is based on excitation of biological tissues with nanosecond-duration laser pulses followed by detection of ultrasound waves generated via light-absorption-mediated thermoelastic expansion.
OA imaging features a powerful combination between rich optical contrast and high resolution in deep tissues.
No standardized datasets generated with different types of experimental set-up and associated processing methods are available to facilitate advances in broader applications of OA in clinical settings.
arXiv Detail & Related papers (2022-06-17T08:11:26Z)
- A parameter refinement method for Ptychography based on Deep Learning concepts [55.41644538483948]
Coarse parametrisation of propagation distance, position errors, and partial coherence frequently threatens experiment viability.
A modern Deep Learning framework is used to correct autonomously the setup incoherences, thus improving the quality of a ptychography reconstruction.
We tested our system on both synthetic datasets and also on real data acquired at the TwinMic beamline of the Elettra synchrotron facility.
arXiv Detail & Related papers (2021-05-18T10:15:17Z)
- Universal and Flexible Optical Aberration Correction Using Deep-Prior Based Deconvolution [51.274657266928315]
We propose a PSF-aware plug-and-play deep network, which takes the aberrant image and PSF map as input and produces the latent high-quality version by incorporating lens-specific deep priors.
Specifically, we pre-train a base model from a set of diverse lenses and then adapt it to a given lens by quickly refining the parameters.
arXiv Detail & Related papers (2021-04-07T12:00:38Z)
- FD-GAN: Generative Adversarial Networks with Fusion-discriminator for Single Image Dehazing [48.65974971543703]
We propose a fully end-to-end Generative Adversarial Network with a Fusion-discriminator (FD-GAN) for image dehazing.
Our model can generate more natural and realistic dehazed images with less color distortion and fewer artifacts.
Experiments have shown that our method reaches state-of-the-art performance on both public synthetic datasets and real-world images.
arXiv Detail & Related papers (2020-01-20T04:36:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.