BigFUSE: Global Context-Aware Image Fusion in Dual-View Light-Sheet
Fluorescence Microscopy with Image Formation Prior
- URL: http://arxiv.org/abs/2309.01865v2
- Date: Fri, 3 Nov 2023 06:08:31 GMT
- Title: BigFUSE: Global Context-Aware Image Fusion in Dual-View Light-Sheet
Fluorescence Microscopy with Image Formation Prior
- Authors: Yu Liu, Gesine Müller, Nassir Navab, Carsten Marr, Jan Huisken,
Tingying Peng
- Abstract summary: We propose BigFUSE, a global context-aware image fuser that stabilizes image fusion in light-sheet fluorescence microscopy (LSFM).
Inspired by the image formation prior in dual-view LSFM, image fusion is considered as estimating a focus-defocus boundary using Bayes Theorem.
Competitive experimental results show that BigFUSE is the first dual-view LSFM fuser that is able to exclude structured artifacts when fusing information.
- Score: 40.22867974147714
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Light-sheet fluorescence microscopy (LSFM), a planar illumination technique
that enables high-resolution imaging of samples, experiences defocused image
quality caused by light scattering when photons propagate through thick
tissues. To circumvent this issue, dual-view imaging is helpful. It allows
various sections of the specimen to be scanned ideally by viewing the sample
from opposing orientations. Recent image fusion approaches can then be applied
to determine in-focus pixels by comparing the image quality of the two views
locally, but their limited field-of-view yields spatially inconsistent focus
measures. Here, we propose BigFUSE, a global context-aware image fuser
that stabilizes image fusion in LSFM by considering the global impact of photon
propagation in the specimen while determining focus-defocus based on local
image qualities. Inspired by the image formation prior in dual-view LSFM, image
fusion is considered as estimating a focus-defocus boundary using Bayes
Theorem, where (i) the effect of light scattering onto focus measures is
included within Likelihood; and (ii) the spatial consistency regarding
focus-defocus is imposed in Prior. The expectation-maximization (EM) algorithm
is then adopted to estimate the focus-defocus boundary. Competitive
experimental results show that BigFUSE is the first dual-view LSFM fuser that
is able to exclude structured artifacts when fusing information, highlighting
its ability to fuse images automatically.
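Schematically, and with symbols chosen here for illustration rather than taken verbatim from the paper, this treats fusion as maximum-a-posteriori estimation of the focus-defocus boundary B from the local focus measures F of the two views:

    \hat{B} = \arg\max_B \; p(B \mid F), \qquad
    p(B \mid F) \;\propto\; \underbrace{p(F \mid B)}_{\text{likelihood: focus measures under light scattering}} \; \underbrace{p(B)}_{\text{prior: spatial consistency of the boundary}}

EM then alternates between soft focus/defocus assignments given the current boundary estimate (E-step) and re-estimating the boundary (M-step).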
Related papers
- A Dual Domain Multi-exposure Image Fusion Network based on the
Spatial-Frequency Integration [57.14745782076976]
Multi-exposure image fusion aims to generate a single high-dynamic image by integrating images with different exposures.
We propose a novel perspective on multi-exposure image fusion via the Spatial-Frequency Integration Framework, named MEF-SFI.
Our method achieves visually appealing fusion results compared with state-of-the-art multi-exposure image fusion approaches.
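As a rough illustration of what combining spatial- and frequency-domain information can mean, here is a generic fusion baseline (our own toy sketch, not the MEF-SFI network; the function name and cutoff are invented for illustration):

    # Toy frequency-domain fusion of two exposures (illustration only;
    # MEF-SFI itself integrates learned spatial and frequency branches).
    import numpy as np

    def naive_freq_fusion(under: np.ndarray, over: np.ndarray) -> np.ndarray:
        """Average low frequencies (global brightness), keep the per-pixel
        stronger high frequencies (detail) from two grayscale exposures."""
        Fu, Fo = np.fft.fft2(under), np.fft.fft2(over)
        h, w = under.shape
        yy, xx = np.meshgrid(np.fft.fftfreq(h), np.fft.fftfreq(w), indexing="ij")
        low = np.hypot(yy, xx) < 0.05                     # crude low-pass mask
        high = np.where(np.abs(Fu) > np.abs(Fo), Fu, Fo)  # max-abs detail rule
        return np.real(np.fft.ifft2(np.where(low, 0.5 * (Fu + Fo), high)))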
arXiv Detail & Related papers (2023-12-17T04:45:15Z)
- Bridging the Gap between Multi-focus and Multi-modal: A Focused Integration Framework for Multi-modal Image Fusion [5.417493475406649]
Multi-modal image fusion (MMIF) integrates valuable information from different modality images into a fused one.
This paper proposes an MMIF framework for joint focused integration and modality information extraction.
The proposed algorithm can surpass the state-of-the-art methods in visual perception and quantitative evaluation.
arXiv Detail & Related papers (2023-11-03T12:58:39Z)
- Hybrid-Supervised Dual-Search: Leveraging Automatic Learning for Loss-free Multi-Exposure Image Fusion [60.221404321514086]
Multi-exposure image fusion (MEF) has emerged as a prominent solution to address the limitations of digital imaging in representing varied exposure levels.
This paper presents a Hybrid-Supervised Dual-Search approach for MEF, dubbed HSDS-MEF, which introduces a bi-level optimization search scheme for automatic design of both network structures and loss functions.
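To make the bi-level idea concrete, here is a toy sketch under our own assumptions (HSDS-MEF's actual search covers network structures and loss functions; everything below, including the candidate weights, is invented for illustration):

    # Toy bi-level search: the outer level picks loss weights, the inner
    # level trains a (here: scalar) model under those weights.
    def inner_train(w_a: float, w_b: float, steps: int = 100, lr: float = 0.1) -> float:
        """Minimize w_a*(theta-1)^2 + w_b*(theta+1)^2 by gradient descent."""
        theta = 0.0
        for _ in range(steps):
            theta -= lr * (2 * w_a * (theta - 1.0) + 2 * w_b * (theta + 1.0))
        return theta

    def outer_search(candidates):
        """Keep the weighting whose trained model best meets a held-out
        criterion (here: proximity to an arbitrary target of 0.5)."""
        return min(candidates, key=lambda c: abs(inner_train(*c) - 0.5))

    best = outer_search([(0.9, 0.1), (0.75, 0.25), (0.5, 0.5)])  # -> (0.75, 0.25)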
arXiv Detail & Related papers (2023-09-03T08:07:26Z)
- Learning to Deblur using Light Field Generated and Real Defocus Images [4.926805108788465]
Defocus deblurring is a challenging task due to the spatially varying nature of defocus blur.
We propose a novel deep defocus deblurring network that leverages the strengths and overcomes the shortcomings of light fields.
arXiv Detail & Related papers (2022-04-01T11:35:51Z)
- Wild ToFu: Improving Range and Quality of Indirect Time-of-Flight Depth with RGB Fusion in Challenging Environments [56.306567220448684]
We propose a new learning-based end-to-end depth prediction network which takes noisy raw I-ToF signals as well as an RGB image.
We show more than 40% RMSE improvement on the final depth map compared to the baseline approach.
arXiv Detail & Related papers (2021-12-07T15:04:14Z)
- A learning-based view extrapolation method for axial super-resolution [52.748944517480155]
Axial light field resolution refers to the ability to distinguish features at different depths by refocusing.
We propose a learning-based method to extrapolate novel views from axial volumes of sheared epipolar plane images.
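For context, the classical way a light field separates depths is shift-and-add refocusing; a minimal sketch of that generic mechanism follows (not the paper's extrapolation network; array shapes are our assumption):

    # Shift each sub-aperture view proportionally to its position and
    # average; alpha selects the depth plane brought into focus.
    import numpy as np

    def refocus(views: np.ndarray, offsets: np.ndarray, alpha: float) -> np.ndarray:
        """views: (N, H, W) sub-aperture images; offsets: (N, 2) view positions."""
        acc = np.zeros(views.shape[1:])
        for img, (dy, dx) in zip(views, offsets):
            acc += np.roll(img, (int(round(alpha * dy)), int(round(alpha * dx))), axis=(0, 1))
        return acc / len(views)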
arXiv Detail & Related papers (2021-03-11T07:22:13Z)
- Light Field Reconstruction via Deep Adaptive Fusion of Hybrid Lenses [67.01164492518481]
This paper explores the problem of reconstructing high-resolution light field (LF) images from hybrid lenses.
We propose a novel end-to-end learning-based approach, which can comprehensively utilize the specific characteristics of the input.
Our framework could potentially decrease the cost of high-resolution LF data acquisition and benefit LF data storage and transmission.
arXiv Detail & Related papers (2021-02-14T06:44:47Z)
- Towards Reducing Severe Defocus Spread Effects for Multi-Focus Image Fusion via an Optimization Based Strategy [22.29205225281694]
Multi-focus image fusion (MFF) is a popular technique to generate an all-in-focus image.
This paper presents an optimization-based approach to reduce defocus spread effects.
Experiments conducted on the real-world dataset verify superiority of the proposed model.
arXiv Detail & Related papers (2020-12-29T09:26:41Z)
- Spatially-Variant CNN-based Point Spread Function Estimation for Blind Deconvolution and Depth Estimation in Optical Microscopy [6.09170287691728]
We present a method that improves the resolution of light microscopy images of thin, yet non-flat objects.
We estimate the parameters of a spatially-variant point spread function (PSF) model using a convolutional neural network (CNN).
Our method recovers PSF parameters from the image itself with up to a squared Pearson correlation coefficient of 0.99 in ideal conditions.
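As a stand-in for what spatially-variant PSF parameters mean, here is a toy Gaussian PSF whose width grows away from the image center (the paper's actual PSF parameterization and CNN are not reproduced here; all numbers below are invented):

    import numpy as np

    def gaussian_psf(sigma: float, size: int = 15) -> np.ndarray:
        """Unit-energy isotropic Gaussian kernel."""
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
        return psf / psf.sum()

    def sigma_at(y: int, x: int, h: int, w: int) -> float:
        """Toy spatial variation: blur increases with distance from center."""
        r = np.hypot(y - h / 2, x - w / 2) / np.hypot(h / 2, w / 2)
        return 0.5 + 3.0 * r   # sigma ranges over [0.5, 3.5]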
arXiv Detail & Related papers (2020-10-08T14:20:16Z)
- MFIF-GAN: A New Generative Adversarial Network for Multi-Focus Image Fusion [29.405149234582623]
Multi-Focus Image Fusion (MFIF) is a promising technique to obtain all-in-focus images.
One research trend in MFIF is to avoid the defocus spread effect (DSE) around the focus/defocus boundary (FDB).
We propose a network termed MFIF-GAN to generate focus maps in which the foreground regions are correctly larger than the corresponding objects.
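Given such a focus map, the composition step itself is simple; the sketch below assumes the map is provided (generating it well is MFIF-GAN's actual contribution):

    import numpy as np

    def compose(img_near: np.ndarray, img_far: np.ndarray,
                focus_map: np.ndarray) -> np.ndarray:
        """focus_map in [0, 1]: 1 where img_near is in focus, 0 where img_far
        is. A map slightly larger than the foreground object, as MFIF-GAN
        aims for, avoids halos from the defocus spread effect."""
        m = focus_map[..., None] if img_near.ndim == 3 else focus_map
        return m * img_near + (1.0 - m) * img_far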
arXiv Detail & Related papers (2020-09-21T09:36:34Z)