Axial-to-lateral super-resolution for 3D fluorescence microscopy using
unsupervised deep learning
- URL: http://arxiv.org/abs/2104.09435v1
- Date: Mon, 19 Apr 2021 16:31:12 GMT
- Title: Axial-to-lateral super-resolution for 3D fluorescence microscopy using
unsupervised deep learning
- Authors: Hyoungjun Park, Myeongsu Na, Bumju Kim, Soohyun Park, Ki Hean Kim,
Sunghoe Chang, and Jong Chul Ye
- Abstract summary: We present a deep-learning-enabled unsupervised super-resolution technique that enhances anisotropic images in fluorescence microscopy.
Our method greatly reduces the effort required to put it into practice, as training the network requires as little as a single 3D image stack.
We demonstrate that the trained network not only enhances axial resolution beyond the diffraction limit, but also enhances suppressed visual details between the imaging planes and removes imaging artifacts.
- Score: 19.515134844947717
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Volumetric imaging by fluorescence microscopy is often limited by anisotropic
spatial resolution, in which the axial resolution is inferior to the lateral
resolution. To address this problem, here we present a deep-learning-enabled
unsupervised super-resolution technique that enhances anisotropic images in
volumetric fluorescence microscopy. In contrast to the existing deep learning
approaches that require matched high-resolution target volume images, our
method greatly reduces the effort required to put it into practice, as training
the network requires as little as a single 3D image stack, without a priori
knowledge of the image formation process, registration of training data, or
separate acquisition of target data. This is achieved based on the optimal
transport driven cycle-consistent generative adversarial network that learns
from an unpaired matching between high-resolution 2D images in the lateral
image plane and low-resolution 2D images in the other planes. Using fluorescence
confocal microscopy and light-sheet microscopy, we demonstrate that the trained
network not only enhances axial resolution beyond the diffraction limit, but
also enhances suppressed visual details between the imaging planes and removes
imaging artifacts.
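The unpaired training data described in the abstract can be assembled directly from a single volume: lateral (XY) slices form the high-resolution domain, while axial slices form the low-resolution domain. The sketch below illustrates this slicing step only (the function name and the (z, y, x) axis convention are assumptions for illustration, not taken from the paper); the two resulting sets would feed the unpaired domains of a cycle-consistent GAN.

```python
import numpy as np

def extract_unpaired_slices(volume):
    """Split a 3D stack with axes (z, y, x) into two unpaired 2D-slice domains:
    high-resolution lateral (XY) planes and anisotropic axial (ZX / ZY) planes."""
    z, y, x = volume.shape
    # Lateral XY slices: one per z position, high resolution in both axes.
    lateral = [volume[k, :, :] for k in range(z)]
    # Axial slices: ZX planes (one per y) and ZY planes (one per x),
    # blurred along z by the anisotropic point spread function.
    axial = [volume[:, j, :] for j in range(y)] + \
            [volume[:, :, i] for i in range(x)]
    return lateral, axial

# Toy 3D stack standing in for a confocal volume (16 z-planes, 64x64 lateral).
vol = np.random.rand(16, 64, 64)
hi_res, lo_res = extract_unpaired_slices(vol)
```

No slice in one domain is paired with a specific slice in the other, which is precisely why an unpaired, cycle-consistency-based objective is needed rather than a supervised loss.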
Related papers
- Reference-free Axial Super-resolution of 3D Microscopy Images using Implicit Neural Representation with a 2D Diffusion Prior [4.1326413814647545]
Training a learning-based 3D super-resolution model requires ground truth isotropic volumes and suffers from the curse of dimensionality.
Existing methods utilize 2D neural networks to reconstruct each axial slice, eventually piecing together the entire volume.
We present a reconstruction framework based on implicit neural representation (INR), which allows 3D coherency even when optimized by independent axial slices.
arXiv Detail & Related papers (2024-08-16T09:14:12Z)
- Tsang's resolution enhancement method for imaging with focused illumination [42.41481706562645]
We experimentally demonstrate superior lateral resolution and enhanced image quality compared to either method alone.
This result paves the way for integrating spatial demultiplexing into existing microscopes.
arXiv Detail & Related papers (2024-05-31T16:25:05Z)
- Super-resolution of biomedical volumes with 2D supervision [84.5255884646906]
Masked slice diffusion for super-resolution exploits the inherent equivalence in the data-generating distribution across all spatial dimensions of biological specimens.
We focus on the application of SliceR to stimulated Raman histology (SRH), characterized by its rapid acquisition of high-resolution 2D images but slow and costly optical z-sectioning.
arXiv Detail & Related papers (2024-04-15T02:41:55Z)
- Deep learning network to correct axial and coronal eye motion in 3D OCT retinal imaging [65.47834983591957]
We propose deep-learning-based neural networks to correct axial and coronal motion artifacts in OCT based on a single scan.
Experimental results show that the proposed method can effectively correct motion artifacts and achieves smaller errors than other methods.
arXiv Detail & Related papers (2023-05-27T03:55:19Z)
- Passive superresolution imaging of incoherent objects [63.942632088208505]
The method consists of measuring the field's spatial-mode components in the image plane in the overcomplete basis of Hermite-Gaussian modes and their superpositions.
A deep neural network is then used to reconstruct the object from these measurements.
arXiv Detail & Related papers (2023-04-19T15:53:09Z)
- Two-Photon Interference LiDAR Imaging [0.0]
We present a quantum interference inspired approach to LiDAR which achieves OCT depth resolutions without the need for high levels of stability.
We demonstrate depth imaging capabilities with an effective impulse response of 70 μm, thereby allowing ranging and multiple reflections to be discerned with much higher resolution than conventional LiDAR approaches.
This enhanced resolution opens up avenues for LiDAR in 3D facial recognition and small-feature detection/tracking, as well as enhancing the capabilities of more complex time-of-flight methods such as imaging through obscurants and non-line-of-sight imaging.
arXiv Detail & Related papers (2022-06-20T09:08:51Z)
- Low dosage 3D volume fluorescence microscopy imaging using compressive sensing [0.0]
We present a compressive sensing (CS) based approach to fully reconstruct 3D volumes at the same signal-to-noise ratio (SNR) using less than half of the excitation dosage.
We demonstrate our technique by capturing a 3D volume of RFP-labeled neurons in the zebrafish embryo spinal cord with an axial sampling of 0.1 μm using a confocal microscope.
The developed CS-based methodology in this work can be easily applied to other deep imaging modalities such as two-photon and light-sheet microscopy, where reducing sample photo-toxicity is a critical challenge.
arXiv Detail & Related papers (2022-01-03T18:44:50Z)
- 3D Human Pose, Shape and Texture from Low-Resolution Images and Videos [107.36352212367179]
We propose RSC-Net, which consists of a Resolution-aware network, a Self-supervision loss, and a Contrastive learning scheme.
The proposed method is able to learn 3D body pose and shape across different resolutions with one single model.
We extend the RSC-Net to handle low-resolution videos and apply it to reconstruct textured 3D pedestrians from low-resolution input.
arXiv Detail & Related papers (2021-03-11T06:52:12Z)
- Deep learning-based super-resolution fluorescence microscopy on small datasets [20.349746411933495]
Deep learning has shown the potential to reduce the technical barrier and obtain super-resolution from diffraction-limited images.
We demonstrate a new convolutional neural network-based approach that is successfully trained with small datasets and produces super-resolution images.
This model can be applied to other biomedical imaging modalities such as MRI and X-ray imaging, where obtaining large training datasets is challenging.
arXiv Detail & Related papers (2021-03-07T03:17:47Z)
- 3D Human Shape and Pose from a Single Low-Resolution Image with Self-Supervised Learning [105.49950571267715]
Existing deep learning methods for 3D human shape and pose estimation rely on relatively high-resolution input images.
We propose RSC-Net, which consists of a Resolution-aware network, a Self-supervision loss, and a Contrastive learning scheme.
We show that both these new training losses provide robustness when learning 3D shape and pose in a weakly-supervised manner.
arXiv Detail & Related papers (2020-07-27T16:19:52Z)
- Correlation Plenoptic Imaging between Arbitrary Planes [52.77024349608834]
We show that the protocol makes it possible to change the focused planes in post-processing and to achieve an unprecedented combination of image resolution and depth of field.
These results lead the way towards the development of compact designs for correlation plenoptic imaging devices based on chaotic light, as well as high-SNR plenoptic imaging devices based on entangled-photon illumination.
arXiv Detail & Related papers (2020-07-23T14:26:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.