Structure and Design of HoloGen
- URL: http://arxiv.org/abs/2006.10509v1
- Date: Thu, 18 Jun 2020 13:29:46 GMT
- Title: Structure and Design of HoloGen
- Authors: Peter J. Christopher and Timothy D. Wilkinson
- Abstract summary: CGH can fully represent a light field including depth of focus, accommodation and vergence.
HoloGen is an MIT licensed application that may be used to generate holograms using a wide array of algorithms without expert guidance.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Increasing popularity of augmented and mixed reality systems has seen a
similar increase of interest in 2D and 3D computer generated holography (CGH).
Unlike stereoscopic approaches, CGH can fully represent a light field including
depth of focus, accommodation and vergence. Along with existing
telecommunications, imaging, projection, lithography, beam shaping and optical
tweezing applications, CGH is an exciting technique applicable to a wide array
of photonic problems including full 3D representation. Traditionally, the
primary roadblock to adoption has been the heavy numerical processing
required to generate holograms, which demands both significant expertise
and computational power. This article discusses the structure and
design of HoloGen. HoloGen is an MIT licensed application that may be used to
generate holograms using a wide array of algorithms without expert guidance.
HoloGen uses a CUDA C and C++ backend with a C# and Windows Presentation
Foundation (WPF) graphical user interface. The article begins by introducing HoloGen
before providing an in-depth discussion of its design and structure. Particular
focus is given to the communication, data transfer and algorithmic aspects.
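To illustrate the kind of iterative Fourier algorithm that CGH packages such as HoloGen implement, the sketch below shows a minimal 1D Gerchberg-Saxton phase-retrieval loop in pure Python. This is an illustrative assumption, not HoloGen's actual CUDA code: the function names are hypothetical, and a naive O(n^2) unitary DFT stands in for the FFT a real implementation would use.

```python
import cmath
import math

def dft(x, inverse=False):
    """Naive O(n^2) discrete Fourier transform with unitary scaling."""
    n = len(x)
    sign = 1 if inverse else -1
    scale = 1 / math.sqrt(n)
    return [
        scale * sum(x[j] * cmath.exp(sign * 2j * math.pi * j * k / n)
                    for j in range(n))
        for k in range(n)
    ]

def gerchberg_saxton(target_amplitude, iterations=200):
    """Find a phase-only hologram whose Fourier transform approximates
    the target replay-field amplitude (Gerchberg-Saxton iteration)."""
    n = len(target_amplitude)
    phase = [0.1 * k for k in range(n)]  # arbitrary initial phase guess
    for _ in range(iterations):
        # Hologram plane: enforce unit amplitude (phase-only constraint).
        field = [cmath.exp(1j * p) for p in phase]
        # Propagate to the replay plane.
        replay = dft(field)
        # Replay plane: impose the target amplitude, keep the computed phase.
        constrained = [t * cmath.exp(1j * cmath.phase(f))
                       for t, f in zip(target_amplitude, replay)]
        # Propagate back and retain only the phase.
        back = dft(constrained, inverse=True)
        phase = [cmath.phase(b) for b in back]
    return phase

# Hypothetical two-spot target, scaled so its energy matches a
# unit-amplitude hologram of the same length (2^2 + 2^2 = 8).
target = [0.0, 2.0, 0.0, 0.0, 2.0, 0.0, 0.0, 0.0]
phase = gerchberg_saxton(target)
replay = dft([cmath.exp(1j * p) for p in phase])
print([round(abs(f), 2) for f in replay])
```

Because the phase-only constraint discards amplitude information in the hologram plane, the replay field only approximates the target; the replay-plane error is non-increasing across iterations, which is why such loops are run to a fixed iteration budget rather than to exact convergence.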
Related papers
- HoloGS: Instant Depth-based 3D Gaussian Splatting with Microsoft HoloLens 2
We leverage the capabilities of the Microsoft HoloLens 2 for instant 3D Gaussian Splatting.
We present HoloGS, a novel workflow utilizing HoloLens sensor data, which bypasses the need for pre-processing steps.
We evaluate our approach on two self-captured scenes: An outdoor scene of a cultural heritage statue and an indoor scene of a fine-structured plant.
arXiv Detail & Related papers (2024-05-03T11:08:04Z)
- Holo-VQVAE: VQ-VAE for phase-only holograms
Holography stands at the forefront of visual technology innovation, offering immersive, three-dimensional visualizations through the manipulation of light wave amplitude and phase.
Modern research in hologram generation has predominantly focused on image-to-hologram conversion, producing holograms from existing images.
We present Holo-VQVAE, a novel generative framework tailored for phase-only holograms (POHs).
arXiv Detail & Related papers (2024-03-29T15:27:28Z)
- Configurable Learned Holography
We introduce a learned model that interactively computes 3D holograms from RGB-only 2D images for a variety of holographic displays.
Our hologram computation exploits the correlation between the depth estimation and 3D hologram synthesis tasks.
arXiv Detail & Related papers (2024-03-24T13:57:30Z)
- GS-CLIP: Gaussian Splatting for Contrastive Language-Image-3D Pretraining from Real-World Data
3D shapes represented as point clouds have achieved advancements in multimodal pre-training to align image and language descriptions.
We propose GS-CLIP, the first attempt to introduce 3D Gaussian Splatting (3DGS) into multimodal pre-training to enhance 3D representation.
arXiv Detail & Related papers (2024-02-09T05:46:47Z)
- OmniSCV: An Omnidirectional Synthetic Image Generator for Computer Vision
We present a tool for generating datasets of omnidirectional images with semantic and depth information.
These images are synthesized from a set of captures acquired in a realistic virtual environment built in Unreal Engine 4.
Our tool includes photorealistic non-central-projection systems, such as non-central panoramas and non-central catadioptric systems.
arXiv Detail & Related papers (2024-01-30T14:40:19Z)
- Holodeck: Language Guided Generation of 3D Embodied AI Environments
Holodeck is a system that fully automatically generates 3D environments matching a user-supplied prompt.
We show that annotators prefer Holodeck over manually designed procedural baselines in residential scenes.
We also demonstrate an exciting application of Holodeck in Embodied AI, training agents to navigate in novel scenes without human-constructed data.
arXiv Detail & Related papers (2023-12-14T16:04:14Z)
- Multiview Compressive Coding for 3D Reconstruction
We introduce a simple framework that operates on 3D points of single objects or whole scenes.
Our model, Multiview Compressive Coding, learns to compress the input appearance and geometry to predict the 3D structure.
arXiv Detail & Related papers (2023-01-19T18:59:52Z)
- GH-Feat: Learning Versatile Generative Hierarchical Features from GANs
We show that a generative feature learned from image synthesis exhibits great potential in solving a wide range of computer vision tasks.
We first train an encoder by considering the pretrained StyleGAN generator as a learned loss function.
The visual features produced by our encoder, termed Generative Hierarchical Features (GH-Feat), align closely with the layer-wise GAN representations.
arXiv Detail & Related papers (2023-01-12T21:59:46Z)
- Optimization of phase-only holograms calculated with scaled diffraction calculation through deep neural networks
Computer-generated holograms (CGHs) are used in holographic three-dimensional (3D) displays and holographic projections.
The quality of the reconstructed images using phase-only CGHs is degraded because the amplitude of the reconstructed image is difficult to control.
In this study, we use deep learning to optimize phase-only CGHs generated using scaled diffraction computations and the random phase-free method.
arXiv Detail & Related papers (2021-12-02T00:14:11Z)
- Learned Spatial Representations for Few-shot Talking-Head Synthesis
We propose a novel approach for few-shot talking-head synthesis.
We show that this disentangled representation leads to a significant improvement over previous methods.
arXiv Detail & Related papers (2021-04-29T17:59:42Z)
- 3D Sketch-aware Semantic Scene Completion via Semi-supervised Structure Prior
The goal of the Semantic Scene Completion (SSC) task is to simultaneously predict a completed 3D voxel representation of volumetric occupancy and semantic labels of objects in the scene from a single-view observation.
We propose to devise a new geometry-based strategy to embed depth information with low-resolution voxel representation.
Our proposed geometric embedding works better than the depth features learned by conventional SSC frameworks.
arXiv Detail & Related papers (2020-03-31T09:33:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.