Sparse deep computer-generated holography for optical microscopy
- URL: http://arxiv.org/abs/2111.15178v1
- Date: Tue, 30 Nov 2021 07:34:17 GMT
- Title: Sparse deep computer-generated holography for optical microscopy
- Authors: Alex Liu, Laura Waller, Yi Xue
- Abstract summary: Computer-generated holography (CGH) has broad applications such as direct-view display, virtual and augmented reality, as well as optical microscopy.
We propose a CGH algorithm using an unsupervised generative model designed for optical microscopy to synthesize 3D selected illumination.
The algorithm, named sparse deep CGH, is able to generate sparsely distributed points in a large 3D volume with higher contrast than conventional CGH algorithms.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Computer-generated holography (CGH) has broad applications such as
direct-view display, virtual and augmented reality, as well as optical
microscopy. CGH usually utilizes a spatial light modulator that displays a
computer-generated phase mask, modulating the phase of coherent light in order
to generate customized patterns. The algorithm that computes the phase mask is
the core of CGH and is usually tailored to meet different applications. CGH for
optical microscopy usually requires 3D accessibility (i.e., generating
overlapping patterns along the $z$-axis) and micron-scale spatial precision.
Here, we propose a CGH algorithm using an unsupervised generative model
designed for optical microscopy to synthesize 3D selected illumination. The
algorithm, named sparse deep CGH, is able to generate sparsely distributed
points in a large 3D volume with higher contrast than conventional CGH
algorithms.
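The phase-mask computation described in the abstract can be illustrated with the classical Gerchberg-Saxton algorithm, the kind of conventional iterative CGH baseline that sparse deep CGH is compared against. This is a minimal numpy sketch assuming simple Fraunhofer (single-FFT) propagation between the SLM and image planes; it is not the paper's generative model.

```python
import numpy as np

def gerchberg_saxton(target_amp, n_iters=50, seed=0):
    """Classical Gerchberg-Saxton phase retrieval for a 2D target.

    Alternates between the SLM plane and the image plane (related by an
    FFT, i.e. Fraunhofer propagation), keeping only the phase at the SLM
    plane and enforcing the target amplitude at the image plane.
    """
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, target_amp.shape)  # random initial phase
    for _ in range(n_iters):
        slm_field = np.exp(1j * phase)                   # phase-only SLM constraint
        img_field = np.fft.fft2(slm_field)               # propagate to image plane
        img_field = target_amp * np.exp(1j * np.angle(img_field))  # enforce target amplitude
        phase = np.angle(np.fft.ifft2(img_field))        # back-propagate, keep phase
    return phase

# Example: a single bright point (the sparse regime the paper targets)
target = np.zeros((64, 64))
target[32, 32] = 1.0
mask = gerchberg_saxton(target, n_iters=30)
```

For sparse targets like this, most of the image plane is unconstrained, which is exactly where iterative methods accumulate speckle background and lose contrast in 3D; this is the failure mode the unsupervised generative model is designed to mitigate.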
Related papers
- Self-Supervised Z-Slice Augmentation for 3D Bio-Imaging via Knowledge Distillation [65.46249968484794]
ZAugNet is a fast, accurate, and self-supervised deep learning method for enhancing z-resolution in biological images.
By performing nonlinear interpolation between consecutive slices, ZAugNet effectively doubles z-resolution with each iteration.
ZAugNet+ is an extended version enabling continuous prediction at arbitrary distances.
arXiv Detail & Related papers (2025-03-05T17:50:35Z)
- Super-Resolution of 3D Micro-CT Images Using Generative Adversarial Networks: Enhancing Resolution and Segmentation Accuracy [0.0]
We develop a procedure for improving the quality of segmented 3D micro-CT images of rocks with a Machine Learning (ML) Generative Model.
The proposed model enhances the resolution eightfold (8x) and addresses segmentation inaccuracies due to the overlapping X-ray attenuation in micro-CT measurement for different rock minerals and phases.
We achieved high-quality super-resolved 3D images with a resolution of 0.4375 μm/voxel and accurate segmentation of the constituent minerals and pore space.
arXiv Detail & Related papers (2025-01-12T21:33:06Z)
- Large Spatial Model: End-to-end Unposed Images to Semantic 3D [79.94479633598102]
Large Spatial Model (LSM) processes unposed RGB images directly into semantic radiance fields.
LSM simultaneously estimates geometry, appearance, and semantics in a single feed-forward operation.
It can generate versatile label maps by interacting with language at novel viewpoints.
arXiv Detail & Related papers (2024-10-24T17:54:42Z)
- L3DG: Latent 3D Gaussian Diffusion [74.36431175937285]
L3DG is the first approach for generative 3D modeling of 3D Gaussians through a latent 3D Gaussian diffusion formulation.
We employ a sparse convolutional architecture to efficiently operate on room-scale scenes.
By leveraging the 3D Gaussian representation, the generated scenes can be rendered from arbitrary viewpoints in real-time.
arXiv Detail & Related papers (2024-10-17T13:19:32Z)
- Flatten Anything: Unsupervised Neural Surface Parameterization [76.4422287292541]
We introduce the Flatten Anything Model (FAM), an unsupervised neural architecture to achieve global free-boundary surface parameterization.
Compared with previous methods, our FAM directly operates on discrete surface points without utilizing connectivity information.
Our FAM is fully-automated without the need for pre-cutting and can deal with highly-complex topologies.
arXiv Detail & Related papers (2024-05-23T14:39:52Z)
- Computational 3D topographic microscopy from terabytes of data per sample [2.4657541547959387]
We present a large-scale computational 3D topographic microscope that enables 6-gigapixel profilometric 3D imaging at micron-scale resolution.
We developed a self-supervised neural network-based algorithm for 3D reconstruction and stitching that jointly estimates an all-in-focus photometric composite and 3D height map.
To demonstrate the broad utility of our new computational microscope, we applied STARCAM to a variety of decimeter-scale objects.
arXiv Detail & Related papers (2023-06-05T07:09:21Z)
- CGOF++: Controllable 3D Face Synthesis with Conditional Generative Occupancy Fields [52.14985242487535]
We propose a new conditional 3D face synthesis framework, which enables 3D controllability over generated face images.
At its core is a conditional Generative Occupancy Field (cGOF++) that effectively enforces the shape of the generated face to conform to a given 3D Morphable Model (3DMM) mesh.
Experiments validate the effectiveness of the proposed method and show more precise 3D controllability than state-of-the-art 2D-based controllable face synthesis methods.
arXiv Detail & Related papers (2022-11-23T19:02:50Z)
- S^2-Transformer for Mask-Aware Hyperspectral Image Reconstruction [59.39343894089959]
A snapshot compressive imager (CASSI) with a Transformer reconstruction backend achieves high-fidelity sensing performance.
However, the dominant spatial and spectral attention designs show limitations in hyperspectral modeling.
We propose a spatial-spectral (S2-) Transformer implemented by a paralleled attention design and a mask-aware learning strategy.
arXiv Detail & Related papers (2022-09-24T19:26:46Z)
- Time-multiplexed Neural Holography: A flexible framework for holographic near-eye displays with fast heavily-quantized spatial light modulators [44.73608798155336]
Holographic near-eye displays offer unprecedented capabilities for virtual and augmented reality systems.
We report advances in camera-calibrated wave propagation models for these types of holographic near-eye displays.
Our framework is flexible in supporting runtime supervision with different types of content, including 2D and 2.5D RGBD images, 3D focal stacks, and 4D light fields.
arXiv Detail & Related papers (2022-05-05T00:03:50Z)
- Optimization of phase-only holograms calculated with scaled diffraction calculation through deep neural networks [6.554534012462403]
Computer-generated holograms (CGHs) are used in holographic three-dimensional (3D) displays and holographic projections.
The quality of the reconstructed images using phase-only CGHs is degraded because the amplitude of the reconstructed image is difficult to control.
In this study, we use deep learning to optimize phase-only CGHs generated using scaled diffraction computations and the random phase-free method.
arXiv Detail & Related papers (2021-12-02T00:14:11Z)
- Lensless multicore-fiber microendoscope for real-time tailored light field generation with phase encoder neural network (CoreNet) [0.5505013339790825]
A novel phase-encoder deep neural network (CoreNet) can generate accurate tailored CGHs for multicore fiber (MCF) encoders at near video rate.
CoreNet speeds up the computation time by two orders of magnitude and increases the fidelity of the generated light field.
This paves the way for real-time cell rotation and other applications that require real-time, high-fidelity light delivery in biomedicine.
arXiv Detail & Related papers (2021-11-24T19:37:32Z)
- Mask-guided Spectral-wise Transformer for Efficient Hyperspectral Image Reconstruction [127.20208645280438]
Hyperspectral image (HSI) reconstruction aims to recover the 3D spatial-spectral signal from a 2D measurement.
Modeling the inter-spectra interactions is beneficial for HSI reconstruction.
Mask-guided Spectral-wise Transformer (MST) proposes a novel framework for HSI reconstruction.
arXiv Detail & Related papers (2021-11-15T16:59:48Z)
- Programmable 3D snapshot microscopy with Fourier convolutional networks [3.2156268397508314]
3D snapshot microscopy enables volumetric imaging as fast as a camera allows by capturing a 3D volume in a single 2D camera image.
We introduce a class of global kernel Fourier convolutional neural networks which can efficiently integrate the globally mixed information encoded in a 3D snapshot image.
arXiv Detail & Related papers (2021-04-21T16:09:56Z)
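The global mixing that the Fourier convolutional networks above exploit follows from the convolution theorem: a pointwise product of spectra equals a circular convolution with a kernel as large as the image, so every output pixel can depend on every input pixel in a single O(N log N) operation. A minimal numpy sketch of this idea (not the paper's trainable layer):

```python
import numpy as np

def fourier_conv2d(image, kernel):
    """Global circular convolution computed in the Fourier domain.

    fft2 of both operands, pointwise multiply, inverse fft2. The kernel
    spans the full image, so the receptive field is global in one step.
    """
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)))

img = np.random.default_rng(1).normal(size=(32, 32))
kern = np.zeros((32, 32))
kern[0, 0] = 1.0  # delta at the origin: the identity kernel
out = fourier_conv2d(img, kern)  # reproduces img exactly
```

In a learned layer, the kernel's spectrum becomes a trainable parameter; the sketch uses the identity kernel only to make the behavior easy to verify.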
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.