Mimicking non-ideal instrument behavior for hologram processing using
neural style translation
- URL: http://arxiv.org/abs/2301.02757v1
- Date: Sat, 7 Jan 2023 01:01:27 GMT
- Authors: John S. Schreck, Matthew Hayman, Gabrielle Gantos, Aaron Bansemer,
David John Gagne
- Abstract summary: Holographic cloud probes provide unprecedented information on cloud particle density, size and position.
Processing these holograms requires considerable computational resources, time and occasional human intervention.
Here we demonstrate the application of neural style translation to simulated holograms.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Holographic cloud probes provide unprecedented information on cloud particle
density, size and position. Each laser shot captures particles within a large
volume, where images can be computationally refocused to determine particle
size and shape. However, processing these holograms, either with standard
methods or with machine learning (ML) models, requires considerable
computational resources, time and occasional human intervention. ML models are
trained on simulated holograms obtained from the physical model of the probe
since real holograms have no absolute truth labels. Using another processing
method to produce labels would be subject to errors that the ML model would
subsequently inherit. Models perform well on real holograms only when image
corruption is performed on the simulated images during training, thereby
mimicking non-ideal conditions in the actual probe (Schreck et al., 2022).
Optimizing image corruption requires a cumbersome manual labeling effort.
Here we demonstrate the application of the neural style translation approach
(Gatys et al., 2016) to the simulated holograms. With a pre-trained
convolutional neural network (VGG-19), the simulated holograms are "stylized"
to resemble the real ones obtained from the probe, while at the same time
preserving the simulated image "content" (e.g. the particle locations and
sizes). Two image similarity metrics concur that the stylized images are more
like real holograms than the synthetic ones. With an ML model trained to
predict particle locations and shapes on the stylized data sets, we observed
comparable performance on both simulated and real holograms, obviating the need
to perform manual labeling. The described approach is not specific to hologram
images and could be applied in other domains for capturing noise and
imperfections in observational instruments to make simulated data more like
real world observations.
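The stylization objective the abstract describes follows Gatys et al. (2016): a weighted sum of a content loss (differences between raw feature maps) and a style loss (differences between Gram matrices of feature maps), both computed on VGG-19 activations. The following is a minimal NumPy sketch of those two terms only; the random arrays stand in for real VGG-19 features, and the style weight of 1e3 is an illustrative choice, not a value from the paper:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a feature map with shape (channels, height, width).

    Channel-to-channel correlations of activations, which Gatys et al.
    (2016) use as a representation of image "style"."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(gen_features, style_features):
    """Mean squared difference between Gram matrices (the style term)."""
    return np.mean((gram_matrix(gen_features) - gram_matrix(style_features)) ** 2)

def content_loss(gen_features, content_features):
    """Mean squared difference between raw feature maps (the content term)."""
    return np.mean((gen_features - content_features) ** 2)

# Toy feature maps standing in for VGG-19 activations (c=4, h=w=8).
rng = np.random.default_rng(0)
content = rng.normal(size=(4, 8, 8))   # features of a simulated hologram
style = rng.normal(size=(4, 8, 8))     # features of a real hologram
generated = content.copy()             # stylization starts from the content image

# Combined objective; in the full method this is minimized over the
# generated image's pixels by gradient descent through the network.
total = content_loss(generated, content) + 1e3 * style_loss(generated, style)
```

In the actual approach these losses are evaluated at several VGG-19 layers and minimized with respect to the generated image itself, so the simulated hologram acquires the real probe's noise statistics while its particle locations and sizes (the "content") are preserved.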
Related papers
- SynFog: A Photo-realistic Synthetic Fog Dataset based on End-to-end Imaging Simulation for Advancing Real-World Defogging in Autonomous Driving [48.27575423606407]
We introduce an end-to-end simulation pipeline designed to generate photo-realistic foggy images.
We present a new synthetic fog dataset named SynFog, which features both sky light and active lighting conditions.
Experimental results demonstrate that models trained on SynFog exhibit superior performance in visual perception and detection accuracy.
arXiv Detail & Related papers (2024-03-25T18:32:41Z)
- Realistic Neutral Atom Image Simulation [1.3220067655295737]
A bottom-up simulator capable of generating sample images of neutral atom experiments from a description of the actual state in the simulated system.
Use cases include the creation of exemplary images for demonstration purposes, fast training iterations for deconvolution algorithms, and generation of labeled data for machine-learning atom detection approaches.
arXiv Detail & Related papers (2023-10-04T14:02:18Z)
- BEDLAM: A Synthetic Dataset of Bodies Exhibiting Detailed Lifelike Animated Motion [52.11972919802401]
We show that neural networks trained only on synthetic data achieve state-of-the-art accuracy on the problem of 3D human pose and shape estimation from real images.
Previous synthetic datasets have been small, unrealistic, or lacked realistic clothing.
arXiv Detail & Related papers (2023-06-29T13:35:16Z)
- Blowing in the Wind: CycleNet for Human Cinemagraphs from Still Images [58.67263739579952]
We present an automatic method that allows generating human cinemagraphs from single RGB images.
At the core of our method is a novel cyclic neural network that produces looping cinemagraphs for the target loop duration.
We evaluate our method on both synthetic and real data and demonstrate that it is possible to create compelling and plausible cinemagraphs from single RGB images.
arXiv Detail & Related papers (2023-03-15T14:09:35Z) - Neural network processing of holographic images [0.0]
HOLODEC, an airborne cloud particle imager, captures holographic images of a fixed volume of cloud to characterize the types and sizes of cloud particles.
We present a hologram processing algorithm, HolodecML, that utilizes a neural segmentation model, GPUs, and computational parallelization.
arXiv Detail & Related papers (2022-03-16T19:20:37Z)
- A Shading-Guided Generative Implicit Model for Shape-Accurate 3D-Aware Image Synthesis [163.96778522283967]
We propose a shading-guided generative implicit model that is able to learn a starkly improved shape representation.
An accurate 3D shape should also yield a realistic rendering under different lighting conditions.
Our experiments on multiple datasets show that the proposed approach achieves photorealistic 3D-aware image synthesis.
arXiv Detail & Related papers (2021-10-29T10:53:12Z)
- Learned holographic light transport [2.642698101441705]
Holography algorithms often fall short in matching simulations with results from a physical holographic display.
Our work addresses this mismatch by learning the holographic light transport in holographic displays.
Our method can dramatically improve simulation accuracy and image quality in holographic displays.
arXiv Detail & Related papers (2021-08-01T12:05:33Z)
- Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z)
- Deep DIH: Statistically Inferred Reconstruction of Digital In-Line Holography by Deep Learning [1.4619386068190985]
Digital in-line holography is commonly used to reconstruct 3D images from 2D holograms for microscopic objects.
In this paper, we propose a novel implementation of autoencoder-based deep learning architecture for single-shot hologram reconstruction.
arXiv Detail & Related papers (2020-04-25T20:39:25Z)
- Self-Supervised Linear Motion Deblurring [112.75317069916579]
Deep convolutional neural networks are state-of-the-art for image deblurring.
We present a differentiable reblur model for self-supervised motion deblurring.
Our experiments demonstrate that self-supervised single image deblurring is really feasible.
arXiv Detail & Related papers (2020-02-10T20:15:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.