Neural network processing of holographic images
- URL: http://arxiv.org/abs/2203.08898v2
- Date: Fri, 18 Mar 2022 15:08:42 GMT
- Title: Neural network processing of holographic images
- Authors: John S. Schreck, Gabrielle Gantos, Matthew Hayman, Aaron Bansemer,
David John Gagne
- Abstract summary: HOLODEC, an airborne cloud particle imager, captures holographic images of a fixed volume of cloud to characterize the types and sizes of cloud particles.
We present a hologram processing algorithm, HolodecML, that utilizes a neural segmentation model, GPUs, and computational parallelization.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: HOLODEC, an airborne cloud particle imager, captures holographic images of a
fixed volume of cloud to characterize the types and sizes of cloud particles,
such as water droplets and ice crystals. Cloud particle properties include
position, diameter, and shape. We present a hologram processing algorithm,
HolodecML, that utilizes a neural segmentation model, GPUs, and computational
parallelization. HolodecML is trained using synthetically generated holograms
based on a model of the instrument, and predicts masks around particles found
within reconstructed images. From these masks, the position and size of the
detected particles can be characterized in three dimensions. In order to
successfully process real holograms, we find we must apply a series of image
corrupting transformations and noise to the synthetic images used in training.
In this evaluation, HolodecML had comparable position and size estimation
performance to the standard processing method, but improved particle detection
by nearly 20% on several thousand manually labeled HOLODEC images. However,
the improvement only occurred when image corruption was performed on the
simulated images during training, thereby mimicking non-ideal conditions in the
actual probe. The trained model also learned to differentiate artifacts and
other impurities in the HOLODEC images from the particles, even though no such
objects were present in the training data set, while the standard processing
method struggled to separate particles from artifacts. The novelty of the
training approach, which leveraged noise as a means for parameterizing
non-ideal aspects of the HOLODEC detector, could be applied in other domains
where the theoretical model is incapable of fully describing the real-world
operation of the instrument and accurate truth data required for supervised
learning cannot be obtained from real-world observations.
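The key training trick described in the abstract, corrupting the clean synthetic holograms with noise before training so the model generalizes to the non-ideal real instrument, can be sketched as follows. This is a minimal illustration using additive Gaussian noise and a box blur; the actual HolodecML transform pipeline and its parameters are not specified here, so the function name, transforms, and values below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(image, noise_sigma=0.05, blur_passes=1):
    """Apply simple corruptions to a synthetic hologram frame so it
    better resembles output from a real, non-ideal detector.
    Illustrative only; not the HolodecML transform set."""
    out = image.astype(np.float64)
    # Additive Gaussian noise stands in for sensor/readout noise.
    out = out + rng.normal(0.0, noise_sigma, size=out.shape)
    # A crude 3x3 box blur stands in for optical imperfections.
    for _ in range(blur_passes):
        padded = np.pad(out, 1, mode="edge")
        out = sum(
            padded[i:i + out.shape[0], j:j + out.shape[1]]
            for i in range(3) for j in range(3)
        ) / 9.0
    # Keep intensities in the valid display range.
    return np.clip(out, 0.0, 1.0)

# Example: corrupt a synthetic 64x64 hologram frame before adding it
# to the segmentation model's training set.
synthetic = rng.random((64, 64))
noisy = corrupt(synthetic)
```

In this scheme the segmentation targets (particle masks) stay tied to the clean synthetic geometry, while only the input images are degraded, which is what lets noise serve as a stand-in parameterization of the detector's non-ideal behavior.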
Related papers
- SynFog: A Photo-realistic Synthetic Fog Dataset based on End-to-end Imaging Simulation for Advancing Real-World Defogging in Autonomous Driving [48.27575423606407]
We introduce an end-to-end simulation pipeline designed to generate photo-realistic foggy images.
We present a new synthetic fog dataset named SynFog, which features both sky light and active lighting conditions.
Experimental results demonstrate that models trained on SynFog exhibit superior performance in visual perception and detection accuracy.
arXiv Detail & Related papers (2024-03-25T18:32:41Z)
- Visual Tomography: Physically Faithful Volumetric Models of Partially Translucent Objects [0.0]
Digital 3D representations of objects can be useful for human or computer-assisted analysis.
We propose a volumetric reconstruction approach that obtains a physical model including the interior of partially translucent objects.
Our technique photographs the object under different poses in front of a bright white light source and computes absorption and scattering per voxel.
arXiv Detail & Related papers (2023-12-21T00:14:46Z)
- Perceptual Artifacts Localization for Image Synthesis Tasks [59.638307505334076]
We introduce a novel dataset comprising 10,168 generated images, each annotated with per-pixel perceptual artifact labels.
A segmentation model, trained on our proposed dataset, effectively localizes artifacts across a range of tasks.
We propose an innovative zoom-in inpainting pipeline that seamlessly rectifies perceptual artifacts in the generated images.
arXiv Detail & Related papers (2023-10-09T10:22:08Z)
- Realistic Neutral Atom Image Simulation [1.3220067655295737]
We present a bottom-up simulator capable of generating sample images of neutral atom experiments from a description of the actual state of the simulated system.
Use cases include the creation of exemplary images for demonstration purposes, fast training iterations for deconvolution algorithms, and generation of labeled data for machine-learning atom detection approaches.
arXiv Detail & Related papers (2023-10-04T14:02:18Z)
- Pixelated Reconstruction of Foreground Density and Background Surface Brightness in Gravitational Lensing Systems using Recurrent Inference Machines [116.33694183176617]
We use a neural network based on the Recurrent Inference Machine to reconstruct an undistorted image of the background source and the lens mass density distribution as pixelated maps.
When compared to more traditional parametric models, the proposed method is significantly more expressive and can reconstruct complex mass distributions.
arXiv Detail & Related papers (2023-01-10T19:00:12Z)
- Mimicking non-ideal instrument behavior for hologram processing using neural style translation [0.0]
Holographic cloud probes provide unprecedented information on cloud particle density, size and position.
Processing these holograms requires considerable computational resources, time, and occasional human intervention.
Here we demonstrate the application of the neural style translation approach to the simulated holograms.
arXiv Detail & Related papers (2023-01-07T01:01:27Z)
- Generative Deformable Radiance Fields for Disentangled Image Synthesis of Topology-Varying Objects [52.46838926521572]
3D-aware generative models have demonstrated their superb performance to generate 3D neural radiance fields (NeRF) from a collection of monocular 2D images.
We propose a generative model for synthesizing radiance fields of topology-varying objects with disentangled shape and appearance variations.
arXiv Detail & Related papers (2022-09-09T08:44:06Z)
- Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars.
arXiv Detail & Related papers (2021-04-29T17:58:35Z)
- Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z)
- Deep DIH: Statistically Inferred Reconstruction of Digital In-Line Holography by Deep Learning [1.4619386068190985]
Digital in-line holography is commonly used to reconstruct 3D images from 2D holograms for microscopic objects.
In this paper, we propose a novel implementation of autoencoder-based deep learning architecture for single-shot hologram reconstruction.
arXiv Detail & Related papers (2020-04-25T20:39:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.