DeStripe: A Self2Self Spatio-Spectral Graph Neural Network with Unfolded
Hessian for Stripe Artifact Removal in Light-sheet Microscopy
- URL: http://arxiv.org/abs/2206.13419v1
- Date: Mon, 27 Jun 2022 16:13:57 GMT
- Title: DeStripe: A Self2Self Spatio-Spectral Graph Neural Network with Unfolded
Hessian for Stripe Artifact Removal in Light-sheet Microscopy
- Authors: Yu Liu, Kurt Weiss, Nassir Navab, Carsten Marr, Jan Huisken, Tingying
Peng
- Abstract summary: We propose a blind artifact removal algorithm in light-sheet fluorescence microscopy (LSFM) called DeStripe.
DeStripe localizes the potentially corrupted coefficients by exploiting the structural difference between unidirectional artifacts and foreground images.
Affected coefficients can then be fed into a graph neural network for recovery with a Hessian regularization unrolled to further ensure structures in the standard image space are well preserved.
- Score: 40.223974943121874
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Light-sheet fluorescence microscopy (LSFM) is a cutting-edge volumetric
imaging technique that allows for three-dimensional imaging of mesoscopic
samples with decoupled illumination and detection paths. Although the selective
excitation scheme of such a microscope provides intrinsic optical sectioning
that minimizes out-of-focus fluorescence background and sample photodamage, it
is prone to light absorption and scattering effects, which result in uneven
illumination and striping artifacts that degrade the images. To tackle this
issue, in this paper, we propose a blind stripe artifact removal algorithm in
LSFM, called DeStripe, which combines a self-supervised spatio-spectral graph
neural network with unfolded Hessian prior. Specifically, inspired by the
desirable properties of Fourier transform in condensing striping information
into isolated values in the frequency domain, DeStripe firstly localizes the
potentially corrupted Fourier coefficients by exploiting the structural
difference between unidirectional stripe artifacts and more isotropic
foreground images. Affected Fourier coefficients can then be fed into a graph
neural network for recovery, with a Hessian regularization unrolled to further
ensure structures in the standard image space are well preserved. Since, in
practice, stripe-free LSFM data barely exist under a standard image acquisition
protocol, DeStripe is equipped with a Self2Self denoising loss term, enabling
artifact elimination without access to stripe-free ground truth images.
Competitive experimental results demonstrate the efficacy of DeStripe in
recovering corrupted biomarkers in LSFM with both synthetic and real stripe
artifacts.
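To make the frequency-domain localization step more concrete, below is a minimal, hypothetical NumPy sketch (not the authors' implementation): unidirectional stripes (here vertical) concentrate their energy in a narrow band of the 2-D Fourier spectrum around the zero-frequency row, whereas the roughly isotropic foreground spreads energy across all orientations, so band coefficients whose magnitude far exceeds an isotropic reference can be flagged for recovery. The function name, the band_halfwidth and ratio_threshold parameters, and the simple ratio test are illustrative assumptions; DeStripe itself recovers the flagged coefficients with a graph neural network under an unrolled Hessian regularization rather than with this heuristic.
```python
import numpy as np

def locate_stripe_coefficients(image, band_halfwidth=2, ratio_threshold=4.0):
    """Flag Fourier coefficients likely dominated by vertical stripes (illustrative heuristic)."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    magnitude = np.abs(spectrum)
    h, w = magnitude.shape
    cy, cx = h // 2, w // 2

    # Vertical stripes accumulate in the rows around the zero-frequency row;
    # compare that band against a column-wise median as an isotropic reference.
    band = magnitude[cy - band_halfwidth:cy + band_halfwidth + 1, :]
    band_energy = band.mean(axis=0)
    isotropic_ref = np.median(magnitude, axis=0) + 1e-8

    corrupted_cols = band_energy / isotropic_ref > ratio_threshold
    corrupted_cols[cx] = False  # never flag the DC / lowest-frequency column

    mask = np.zeros_like(magnitude, dtype=bool)
    mask[cy - band_halfwidth:cy + band_halfwidth + 1, corrupted_cols] = True
    return mask

# Toy usage: add vertical stripes to a random image and flag their coefficients.
rng = np.random.default_rng(0)
clean = rng.random((256, 256))
stripes = 0.5 * np.sin(0.8 * np.arange(256))[None, :]  # intensity varies along x only
mask = locate_stripe_coefficients(clean + stripes)
print("coefficients flagged:", int(mask.sum()))
```
In DeStripe, the coefficients flagged in this way are not simply zeroed out but passed to the graph neural network, which reconstructs them while the unrolled Hessian prior keeps image-space structures intact.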
Related papers
- Generalizable Non-Line-of-Sight Imaging with Learnable Physical Priors [52.195637608631955]
Non-line-of-sight (NLOS) imaging has attracted increasing attention due to its potential applications.
Existing NLOS reconstruction approaches are constrained by the reliance on empirical physical priors.
We introduce a novel learning-based solution, comprising two key designs: Learnable Path Compensation (LPC) and Adaptive Phasor Field (APF).
arXiv Detail & Related papers (2024-09-21T04:39:45Z)
- Deep Learning Based Speckle Filtering for Polarimetric SAR Images. Application to Sentinel-1 [51.404644401997736]
We propose a complete framework to remove speckle in polarimetric SAR images using a convolutional neural network.
Experiments show that the proposed approach offers exceptional results in both speckle reduction and resolution preservation.
arXiv Detail & Related papers (2024-08-28T10:07:17Z)
- Limited-View Photoacoustic Imaging Reconstruction Via High-quality Self-supervised Neural Representation [4.274771298029378]
We introduce a self-supervised network termed HIgh-quality Self-supervised neural representation (HIS)
HIS tackles the inverse problem of photoacoustic imaging to reconstruct high-quality photoacoustic images from sensor data acquired under limited viewpoints.
Results indicate that the proposed HIS model offers superior image reconstruction quality compared to three commonly used methods for photoacoustic image reconstruction.
arXiv Detail & Related papers (2024-07-04T06:07:54Z)
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset, consisting of unseen synthetic data and images collected from silicon aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z)
- Improving Lens Flare Removal with General Purpose Pipeline and Multiple Light Sources Recovery [69.71080926778413]
Flare artifacts can degrade image visual quality and downstream computer vision tasks.
Current methods do not consider automatic exposure and tone mapping in the image signal processing (ISP) pipeline.
We propose a solution that improves lens flare removal by revisiting the ISP and designing a more reliable light source recovery strategy.
arXiv Detail & Related papers (2023-08-31T04:58:17Z)
- Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation network unfolding (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z)
- FreeSeed: Frequency-band-aware and Self-guided Network for Sparse-view CT Reconstruction [34.91517935951518]
Sparse-view computed tomography (CT) is a promising solution for expediting the scanning process and mitigating radiation exposure to patients.
Recently, deep learning-based image post-processing methods have shown promising results.
We propose a simple yet effective FREquency-band-awarE and SElf-guidED network, termed FreeSeed, which can effectively remove artifacts and recover missing details.
arXiv Detail & Related papers (2023-07-12T03:39:54Z)
- Fluctuation-based deconvolution in fluorescence microscopy using plug-and-play denoisers [2.236663830879273]
The spatial resolution of images of living samples obtained with fluorescence microscopes is physically limited by the diffraction of visible light.
Several deconvolution and super-resolution techniques have been proposed to overcome this limitation.
arXiv Detail & Related papers (2023-03-20T15:43:52Z)
- DPFNet: A Dual-branch Dilated Network with Phase-aware Fourier Convolution for Low-light Image Enhancement [1.2645663389012574]
Low-light image enhancement is a classical computer vision problem aiming to recover normal-exposure images from low-light images.
Convolutional neural networks commonly used in this field are good at sampling low-frequency local structural features in the spatial domain.
We propose a novel module using the Fourier coefficients, which can recover high-quality texture details under the constraint of semantics in the frequency phase.
arXiv Detail & Related papers (2022-09-16T13:56:09Z)
- Convolutional Neural Network Denoising in Fluorescence Lifetime Imaging Microscopy (FLIM) [16.558653673949838]
Fluorescence lifetime imaging microscopy (FLIM) systems are limited by their slow processing speed, low signal-to-noise ratio (SNR), and expensive and challenging hardware setups.
In this work, we demonstrate the application of a denoising convolutional network to improve FLIM SNR.
The network will be integrated with an instant FLIM system with fast data acquisition based on analog signal processing, high SNR using high-efficiency pulse-modulation, and cost-effective implementation utilizing off-the-shelf radio-frequency components.
arXiv Detail & Related papers (2021-03-07T03:27:44Z)