Denoising of Two-Phase Optically Sectioned Structured Illumination Reconstructions Using Encoder-Decoder Networks
- URL: http://arxiv.org/abs/2510.03452v1
- Date: Fri, 03 Oct 2025 19:19:42 GMT
- Title: Denoising of Two-Phase Optically Sectioned Structured Illumination Reconstructions Using Encoder-Decoder Networks
- Authors: Allison Davis, Yezhi Shen, Xiaoyu Ji, Fengqing Zhu
- Abstract summary: In two-phase optical-sectioning SI (OS-SI), reduced acquisition time introduces residual artifacts that conventional denoising struggles to suppress. Deep learning offers an alternative to traditional methods; however, supervised training is limited by the lack of clean, optically sectioned ground-truth data. We investigate encoder-decoder networks for artifact reduction in two-phase OS-SI, using synthetic training pairs formed by applying real artifact fields to synthetic images.
- Score: 13.72081933421932
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Structured illumination (SI) enhances image resolution and contrast by projecting patterned light onto a sample. In two-phase optical-sectioning SI (OS-SI), reduced acquisition time introduces residual artifacts that conventional denoising struggles to suppress. Deep learning offers an alternative to traditional methods; however, supervised training is limited by the lack of clean, optically sectioned ground-truth data. We investigate encoder-decoder networks for artifact reduction in two-phase OS-SI, using synthetic training pairs formed by applying real artifact fields to synthetic images. An asymmetrical denoising autoencoder (DAE) and a U-Net are trained on the synthetic data, then evaluated on real OS-SI images. Both networks improve image clarity, with each excelling against different artifact types. These results demonstrate that synthetic training enables supervised denoising of OS-SI images and highlight the potential of encoder-decoder networks to streamline reconstruction workflows.
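The abstract's core idea of building supervised pairs by overlaying real artifact fields on synthetic clean images can be sketched as follows. This is a minimal illustration of that pairing strategy, not the authors' pipeline; the `strength` blending parameter and the toy images are assumptions for the example.

```python
import numpy as np

def make_training_pair(clean, artifact_field, strength=0.5):
    """Form a (noisy, clean) supervised pair by additively overlaying a
    real artifact field onto a synthetic clean image, clipped to [0, 1].
    A sketch of the synthetic-pair idea; the blending model is assumed."""
    noisy = np.clip(clean + strength * artifact_field, 0.0, 1.0)
    return noisy, clean

rng = np.random.default_rng(0)
clean = rng.random((64, 64))  # stand-in for a synthetic clean image
# stand-in for a real residual artifact field (striping-like pattern)
artifact = np.sin(np.linspace(0, 20, 64))[None, :] * np.ones((64, 1))
noisy, target = make_training_pair(clean, artifact)
```

Pairs built this way can then be fed to any encoder-decoder (e.g. a DAE or U-Net) as (input, target) examples.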
Related papers
- Lightweight Physics-Informed Zero-Shot Ultrasound Plane Wave Denoising [1.912429179274357]
Ultrasound Coherent Plane Wave Compounding (CPWC) enhances image contrast by combining echoes from multiple steered transmissions. We propose a zero-shot denoising framework tailored for low-angle CPWC acquisitions.
arXiv Detail & Related papers (2025-06-26T17:28:32Z) - CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images. Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information. We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z) - Deep Linear Array Pushbroom Image Restoration: A Degradation Pipeline and Jitter-Aware Restoration Network [26.86292926584254]
Linear Array Pushbroom (LAP) imaging technology is widely used in the realm of remote sensing.
Traditional methods for restoring LAP images, such as algorithms estimating the point spread function (PSF), exhibit limited performance.
We propose a Jitter-Aware Restoration Network (JARNet) to remove the distortion and blur in two stages.
arXiv Detail & Related papers (2024-01-16T07:26:26Z) - INFWIDE: Image and Feature Space Wiener Deconvolution Network for Non-blind Image Deblurring in Low-Light Conditions [32.35378513394865]
We propose a novel non-blind deblurring method dubbed image and feature space Wiener deconvolution network (INFWIDE)
INFWIDE removes noise and hallucinates saturated regions in the image space and suppresses ringing artifacts in the feature space.
Experiments on synthetic data and real data demonstrate the superior performance of the proposed approach.
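INFWIDE builds on classical Wiener deconvolution with learned components. For context, the textbook frequency-domain Wiener filter it extends can be sketched as below; this is only the classical building block, not the paper's network, and the scalar `snr` regularizer is a simplifying assumption.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, snr=0.01):
    """Classical frequency-domain Wiener deconvolution: divide by the
    blur kernel's spectrum, regularized by an (assumed constant)
    noise-to-signal ratio to avoid amplifying noise at weak frequencies."""
    H = np.fft.fft2(psf, s=blurred.shape)          # blur transfer function
    G = np.fft.fft2(blurred)                       # observed image spectrum
    W = np.conj(H) / (np.abs(H) ** 2 + snr)        # Wiener filter
    return np.real(np.fft.ifft2(W * G))
```

With a trivial (identity) PSF this reduces to a mild uniform attenuation of the input, which makes the regularizer's effect easy to verify.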
arXiv Detail & Related papers (2022-07-17T15:22:31Z) - Reducing Redundancy in the Bottleneck Representation of the Autoencoders [98.78384185493624]
Autoencoders are a type of unsupervised neural network that can be used to solve various tasks.
We propose a scheme to explicitly penalize feature redundancies in the bottleneck representation.
We tested our approach across different tasks: dimensionality reduction using three different datasets, image compression using the MNIST dataset, and image denoising using Fashion-MNIST.
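One common way to penalize feature redundancy in a bottleneck, as this line of work proposes, is to push the off-diagonal entries of the latent features' correlation matrix toward zero. The sketch below shows that idea in NumPy; the paper's exact formulation may differ.

```python
import numpy as np

def redundancy_penalty(z):
    """Sum of squared off-diagonal entries of the feature correlation
    matrix for a batch of bottleneck vectors z (shape: batch x features).
    Zero when features are decorrelated; large when they are redundant."""
    z = z - z.mean(axis=0)                 # center each feature
    cov = (z.T @ z) / (len(z) - 1)         # feature covariance
    d = np.sqrt(np.diag(cov))
    corr = cov / np.outer(d, d)            # normalize to correlations
    off = corr - np.eye(corr.shape[0])     # drop the diagonal
    return float((off ** 2).sum())
```

Added to the reconstruction loss with a small weight, such a term discourages the encoder from learning duplicated latent dimensions.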
arXiv Detail & Related papers (2022-02-09T18:48:02Z) - Transformer-based SAR Image Despeckling [53.99620005035804]
We introduce a transformer-based network for SAR image despeckling.
The proposed despeckling network comprises a transformer-based encoder which allows the network to learn global dependencies between different image regions.
Experiments show that the proposed method achieves significant improvements over traditional and convolutional neural network-based despeckling methods.
arXiv Detail & Related papers (2022-01-23T20:09:01Z) - Learning optical flow from still images [53.295332513139925]
We introduce a framework to generate accurate ground-truth optical flow annotations quickly and in large amounts from any readily available single real picture.
We virtually move the camera in the reconstructed environment with known motion vectors and rotation angles.
When trained with our data, state-of-the-art optical flow networks achieve superior generalization to unseen real data.
arXiv Detail & Related papers (2021-04-08T17:59:58Z) - Attention Based Real Image Restoration [48.933507352496726]
Deep convolutional neural networks perform better on images containing synthetic degradations.
This paper proposes a novel single-stage blind real image restoration network (R$^2$Net).
arXiv Detail & Related papers (2020-04-26T04:21:49Z) - Deep CG2Real: Synthetic-to-Real Translation via Image Disentanglement [78.58603635621591]
Training an unpaired synthetic-to-real translation network in image space is severely under-constrained.
We propose a semi-supervised approach that operates on the disentangled shading and albedo layers of the image.
Our two-stage pipeline first learns to predict accurate shading in a supervised fashion using physically-based renderings as targets.
arXiv Detail & Related papers (2020-03-27T21:45:41Z) - Towards Deep Unsupervised SAR Despeckling with Blind-Spot Convolutional Neural Networks [30.410981386006394]
Deep learning techniques have outperformed classical model-based despeckling algorithms.
In this paper, we propose a self-supervised Bayesian despeckling method.
We show that the performance of the proposed network is very close to the supervised training approach on synthetic data and competitive on real data.
arXiv Detail & Related papers (2020-01-15T12:21:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.