DEEP$^2$: Deep Learning Powered De-scattering with Excitation Patterning
- URL: http://arxiv.org/abs/2210.10892v2
- Date: Fri, 21 Oct 2022 07:44:50 GMT
- Title: DEEP$^2$: Deep Learning Powered De-scattering with Excitation Patterning
- Authors: Navodini Wijethilake, Mithunjha Anandakumar, Cheng Zheng, Peter T. C. So, Murat Yildirim, Dushan N. Wadduwage
- Abstract summary: 'De-scattering with Excitation Patterning or DEEP' is a widefield alternative to point-scanning.
We present DEEP$^2$, a deep-learning-based model that can de-scatter images from just tens of patterned excitations instead of hundreds.
We demonstrate our method in multiple numerical and physical experiments, including in-vivo cortical vasculature imaging up to four scattering lengths deep in live mice.
- Score: 3.637479539861615
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Limited throughput is a key challenge in in-vivo deep-tissue imaging using
nonlinear optical microscopy. Point-scanning multiphoton microscopy, the
current gold standard, is slow, especially compared to the wide-field imaging
modalities used for optically cleared or thin specimens. We recently introduced
'De-scattering with Excitation Patterning or DEEP' as a widefield alternative
to point-scanning geometries. Using patterned multiphoton excitation, DEEP
encodes spatial information inside tissue before scattering. However, to
de-scatter at typical depths, hundreds of such patterned excitations are
needed. In this work, we present DEEP$^2$, a deep-learning-based model that
can de-scatter images from just tens of patterned excitations instead of
hundreds. Consequently, we improve DEEP's throughput by almost an order of
magnitude. We demonstrate our method in multiple numerical and physical
experiments, including in-vivo cortical vasculature imaging up to four
scattering lengths deep in live mice.
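For intuition about what the network has to invert, below is a minimal NumPy sketch of a DEEP-style acquisition. It is illustrative only, not the authors' code: random binary excitation patterns and a Gaussian blur stand in for the real patterned two-photon excitation and emission-path tissue scattering, and the function name deep_measurements and all shapes are hypothetical.

```python
# Minimal sketch of a DEEP-style acquisition model (illustrative only; not
# the authors' code). Assumptions: binary random excitation patterns and a
# Gaussian blur standing in for emission-path tissue scattering.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

def deep_measurements(obj, num_patterns=32, scatter_sigma=4.0):
    """Simulate patterned-excitation measurements of a 2-D fluorophore map.

    num_patterns ~ tens for DEEP^2, versus the hundreds needed by classic
    DEEP demodulation at typical imaging depths.
    """
    patterns = (rng.random((num_patterns,) + obj.shape) < 0.5).astype(float)
    measurements = np.empty_like(patterns)
    for k, pattern in enumerate(patterns):
        # Patterned multiphoton excitation encodes spatial information
        # inside the tissue *before* scattering ...
        encoded = pattern * obj
        # ... and the emitted photons are then scrambled on the way out.
        measurements[k] = gaussian_filter(encoded, sigma=scatter_sigma)
    return patterns, measurements

# Toy usage: a thin bright structure standing in for a cortical vessel.
obj = np.zeros((64, 64))
obj[20:44, 30:34] = 1.0
patterns, meas = deep_measurements(obj)
# DEEP^2 trains a neural network to map these few tens of blurred,
# patterned frames back to the de-scattered object.
```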
Related papers
- Pixel-Aligned Multi-View Generation with Depth Guided Decoder [86.1813201212539]
We propose a novel method for pixel-level image-to-multi-view generation.
Unlike prior work, we incorporate attention layers across multi-view images in the VAE decoder of a latent video diffusion model.
Our model enables better pixel alignment across multi-view images.
arXiv Detail & Related papers (2024-08-26T04:56:41Z)
- Facial Depth and Normal Estimation using Single Dual-Pixel Camera [81.02680586859105]
We introduce a DP-oriented Depth/Normal network that reconstructs the 3D facial geometry.
The accompanying dataset contains the corresponding ground-truth 3D models, including depth maps and surface normals in metric scale.
Our method achieves state-of-the-art performance over recent DP-based depth/normal estimation methods.
arXiv Detail & Related papers (2021-11-25T05:59:27Z)
- Imaging dynamics beneath turbid media via parallelized single-photon detection [32.148006108515716]
We take advantage of a single-photon avalanche diode (SPAD) array camera, with over one thousand detectors, to simultaneously detect speckle fluctuations at the single-photon level.
We then apply a deep neural network to convert the acquired single-photon measurements into video of scattering dynamics beneath rapidly decorrelating liquid tissue phantoms.
arXiv Detail & Related papers (2021-07-03T12:32:21Z)
- A learning-based view extrapolation method for axial super-resolution [52.748944517480155]
Axial light field resolution refers to the ability to distinguish features at different depths by refocusing.
We propose a learning-based method to extrapolate novel views from axial volumes of sheared epipolar plane images.
arXiv Detail & Related papers (2021-03-11T07:22:13Z)
- Calibrating Self-supervised Monocular Depth Estimation [77.77696851397539]
In recent years, many methods have demonstrated the ability of neural networks to learn depth and pose changes in a sequence of images, using only self-supervision as the training signal.
We show that by incorporating prior information about the camera configuration and the environment, we can remove the scale ambiguity and predict depth directly, still using the self-supervised formulation and without relying on any additional sensors.
arXiv Detail & Related papers (2020-09-16T14:35:45Z)
- Single-shot Hyperspectral-Depth Imaging with Learned Diffractive Optics [72.9038524082252]
We propose a compact single-shot monocular hyperspectral-depth (HS-D) imaging method.
Our method uses a diffractive optical element (DOE), the point spread function of which changes with respect to both depth and spectrum.
To facilitate learning the DOE, we present the first HS-D dataset, acquired with a benchtop HS-D imager.
arXiv Detail & Related papers (2020-09-01T14:19:35Z)
- Guidestar-free image-guided wavefront-shaping [7.919213739992463]
We present a new concept, image-guided wavefront-shaping, allowing non-invasive, guidestar-free, widefield, incoherent imaging through highly scattering layers, without illumination control.
Most importantly, the wavefront correction is found even for objects larger than the memory-effect range by blindly optimizing image-quality metrics.
arXiv Detail & Related papers (2020-07-08T08:26:14Z)
- Reconstructing undersampled photoacoustic microscopy images using deep learning [11.74890470096844]
We propose a novel application of deep learning principles to reconstruct undersampled PAM images.
Our results collectively demonstrate that our model robustly reconstructs PAM images from as few as 2% of the original pixels.
arXiv Detail & Related papers (2020-05-30T12:39:52Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
- Single-shot autofocusing of microscopy images using deep learning [0.30586855806896046]
A deep-learning-based offline autofocusing method, termed Deep-R, is trained to rapidly and blindly autofocus a single-shot microscopy image.
Deep-R is significantly faster than standard online algorithmic autofocusing methods.
arXiv Detail & Related papers (2020-03-21T06:07:27Z)
- Learning Wavefront Coding for Extended Depth of Field Imaging [4.199844472131922]
Extended depth of field (EDoF) imaging is a challenging ill-posed problem.
We propose a computational imaging approach for EDoF, where we employ wavefront coding via a diffractive optical element.
We demonstrate results with minimal artifacts in various scenarios, including deep 3D scenes and broadband imaging.
arXiv Detail & Related papers (2019-12-31T17:00:09Z)