Towards Occlusion-Aware Multifocal Displays
- URL: http://arxiv.org/abs/2005.00946v1
- Date: Sat, 2 May 2020 23:51:11 GMT
- Title: Towards Occlusion-Aware Multifocal Displays
- Authors: Jen-Hao Rick Chang, Anat Levin, B. V. K. Vijaya Kumar, Aswin C.
Sankaranarayanan
- Abstract summary: Multifocal displays place virtual content at multiple focal planes, each at a different depth.
A novel ConeTilt operator provides an additional degree of freedom -- tilting the light cone emitted at each pixel of the display panel.
We demonstrate that ConeTilt can be easily implemented by a phase-only spatial light modulator.
- Score: 33.48441420074575
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The human visual system uses numerous cues for depth perception, including
disparity, accommodation, motion parallax and occlusion. It is incumbent upon
virtual-reality displays to satisfy these cues to provide an immersive user
experience. Multifocal displays, one of the classic approaches to satisfy the
accommodation cue, place virtual content at multiple focal planes, each at a
different depth. However, content on focal planes close to the eye does not
occlude content farther away; this degrades the occlusion cue and reduces
contrast at depth discontinuities due to leakage of the defocus blur.
This paper enables occlusion-aware multifocal displays using a novel ConeTilt
operator that provides an additional degree of freedom -- tilting the light
cone emitted at each pixel of the display panel. We show that, for scenes with
relatively simple occlusion configurations, tilting the light cones provides the
same effect as physical occlusion. We demonstrate that ConeTilt can be easily
implemented by a phase-only spatial light modulator. Using a lab prototype, we
show results that demonstrate the presence of occlusion cues and the increased
contrast of the display at depth edges.
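The paper does not ship source code; purely as an illustrative sketch of the optics behind ConeTilt (not the authors' implementation), a phase-only SLM can tilt the light cone leaving a pixel by displaying a local linear phase ramp, whose gradient sets the deflection angle. The function name, the pixel pitch, and the wavelength below are all assumptions.

```python
import numpy as np

def conetilt_phase(tilt_x, tilt_y, pitch=8e-6, wavelength=532e-9):
    """Phase pattern (radians) approximating a per-pixel light-cone tilt.

    tilt_x, tilt_y : 2-D arrays of desired deflection angles (radians);
                     names and arguments are illustrative, not the paper's.
    pitch          : assumed SLM pixel pitch in metres (8 um).
    wavelength     : assumed illumination wavelength in metres (532 nm).
    """
    # A local phase gradient of 2*pi*sin(theta)/lambda (per metre) steers
    # light by angle theta, so integrate the per-pixel slopes along each
    # axis to obtain the phase value at every pixel.
    slope = 2 * np.pi * pitch / wavelength
    phase = slope * (np.cumsum(np.sin(tilt_x), axis=1) +
                     np.cumsum(np.sin(tilt_y), axis=0))
    # Phase-only SLMs display values wrapped to [0, 2*pi).
    return np.mod(phase, 2 * np.pi)

# Example: steer the cones apart across a vertical depth edge, so content
# near the eye stops leaking defocus blur over the far side of the edge.
tilt_x = np.zeros((128, 128))
tilt_y = np.zeros((128, 128))
tilt_x[:, :64] = np.deg2rad(-1.0)   # left of the edge: tilt 1 degree left
tilt_x[:, 64:] = np.deg2rad(+1.0)   # right of the edge: tilt 1 degree right
pattern = conetilt_phase(tilt_x, tilt_y)
```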
Related papers
- HoloChrome: Polychromatic Illumination for Speckle Reduction in Holographic Near-Eye Displays [8.958725481270807]
Holographic displays hold the promise of providing authentic depth cues, resulting in enhanced immersive visual experiences for near-eye applications.
Current holographic displays are hindered by speckle noise, which limits accurate reproduction of color and texture in displayed images.
We present HoloChrome, a polychromatic holographic display framework designed to mitigate these limitations.
arXiv Detail & Related papers (2024-10-31T17:05:44Z)
- Multiple Latent Space Mapping for Compressed Dark Image Enhancement [51.112925890246444]
Existing dark image enhancement methods take uncompressed dark images as inputs and achieve great performance.
We propose a novel latent mapping network based on a variational auto-encoder (VAE).
Comprehensive experiments demonstrate that the proposed method achieves state-of-the-art performance in compressed dark image enhancement.
arXiv Detail & Related papers (2024-03-12T13:05:51Z)
- Close-up View synthesis by Interpolating Optical Flow [17.800430382213428]
Virtual viewpoints are regarded as a new technique in virtual navigation, but are not yet well supported owing to missing depth information and unknown camera parameters.
We develop a bidirectional optical flow method that obtains any virtual viewpoint by proportional interpolation of the optical flow.
By applying the optical flow values, we achieve clear, high-fidelity magnified results through lens stretching in any corner of the view.
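The summary gives only the high-level idea; a minimal sketch of flow-proportional view interpolation, using OpenCV's Farnebäck flow as a stand-in for the paper's own bidirectional estimator (the function name and parameter values below are assumptions):

```python
import cv2
import numpy as np

def interpolate_view(img0, img1, t=0.5):
    """Synthesize a virtual view at fractional position t between two
    captured views by scaling a dense flow field (illustrative only;
    occlusion handling, which the paper addresses, is ignored here)."""
    g0 = cv2.cvtColor(img0, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    # Dense optical flow from view 0 to view 1 (Farneback stand-in).
    flow = cv2.calcOpticalFlowFarneback(g0, g1, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = g0.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    # Backward-warp approximation: sample view 0 against a t-scaled flow,
    # so pixels move in proportion to the virtual viewpoint's position.
    map_x = (xs - t * flow[..., 0]).astype(np.float32)
    map_y = (ys - t * flow[..., 1]).astype(np.float32)
    return cv2.remap(img0, map_x, map_y, cv2.INTER_LINEAR)
```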
arXiv Detail & Related papers (2023-07-12T04:40:00Z)
- Unveiling the Potential of Spike Streams for Foreground Occlusion Removal from Densely Continuous Views [23.10251947174782]
We propose an innovative solution for tackling the de-occlusion problem through continuous multi-view imaging using only one spike camera.
By rapidly moving the spike camera, we continually capture the dense stream of spikes from the occluded scene.
To process the spikes, we build a novel model, SpkOccNet, which integrates information from spikes captured at continuous viewpoints.
arXiv Detail & Related papers (2023-07-03T08:01:43Z)
- WildLight: In-the-wild Inverse Rendering with a Flashlight [77.31815397135381]
We propose a practical photometric solution for in-the-wild inverse rendering under unknown ambient lighting.
Our system recovers scene geometry and reflectance using only multi-view images captured by a smartphone.
We demonstrate by extensive experiments that our method is easy to implement, casual to set up, and consistently outperforms existing in-the-wild inverse rendering techniques.
arXiv Detail & Related papers (2023-03-24T17:59:56Z)
- OccluMix: Towards De-Occlusion Virtual Try-on by Semantically-Guided Mixup [79.3118064406151]
Image virtual try-on aims at replacing the clothes on a person image with a garment image (in-shop clothes).
Prior methods successfully preserve the characteristics of clothing images.
Occlusion remains a pernicious effect for realistic virtual try-on.
arXiv Detail & Related papers (2023-01-03T06:29:11Z)
- Locality-aware Channel-wise Dropout for Occluded Face Recognition [116.2355331029041]
Face recognition is a challenging task in unconstrained scenarios, especially when faces are partially occluded.
We propose a novel and elegant occlusion-simulation method that drops the activations of a group of neurons in elaborately selected channels.
Experiments on various benchmarks show that the proposed method outperforms state-of-the-art methods with a remarkable improvement.
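As a rough PyTorch sketch of the general mechanism, zeroing whole channels inside a localized spatial block to mimic an occlusion (random choices below replace the paper's elaborate channel selection; all names and defaults are illustrative):

```python
import torch

def locality_aware_channel_dropout(x, drop_prob=0.25, block=16,
                                   training=True):
    """Zero a random subset of channels inside one random spatial block,
    crudely simulating a localized occlusion on feature maps x (NCHW)."""
    if not training:
        return x
    n, c, h, w = x.shape
    # Random top-left corner of the occluded block.
    top = torch.randint(0, max(h - block, 1), (1,)).item()
    left = torch.randint(0, max(w - block, 1), (1,)).item()
    # Random channel mask, shared across the batch for simplicity.
    keep = (torch.rand(c, device=x.device) > drop_prob).float()
    mask = torch.ones_like(x)
    mask[:, :, top:top + block, left:left + block] = keep.view(1, c, 1, 1)
    return x * mask
```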
arXiv Detail & Related papers (2021-07-20T05:53:14Z)
- Bridge the Vision Gap from Field to Command: A Deep Learning Network Enhancing Illumination and Details [17.25188250076639]
We propose a two-stream framework named NEID to tune up the brightness and enhance the details simultaneously.
The proposed method consists of three modules: Light Enhancement (LE), Detail Refinement (DR), and Feature Fusing (FF).
arXiv Detail & Related papers (2021-01-20T09:39:57Z)
- Light Field View Synthesis via Aperture Disparity and Warping Confidence Map [47.046276641506786]
This paper presents a learning-based approach to synthesize the view from an arbitrary camera position given a sparse set of images.
A key challenge for this novel view synthesis arises from the reconstruction process, when the views from different input images may not be consistent due to obstruction in the light path.
arXiv Detail & Related papers (2020-09-07T09:46:01Z)
- L^2UWE: A Framework for the Efficient Enhancement of Low-Light Underwater Images Using Local Contrast and Multi-Scale Fusion [84.11514688735183]
We present a novel single-image low-light underwater image enhancer, L2UWE, that builds on our observation that an efficient model of atmospheric lighting can be derived from local contrast information.
A multi-scale fusion process is employed to combine these images while emphasizing regions of higher luminance, saliency and local contrast.
We demonstrate the performance of L2UWE by using seven metrics to test it against seven state-of-the-art enhancement methods specific to underwater and low-light scenes.
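A single-scale, exposure-fusion-style sketch of weighted blending (a crude stand-in for L2UWE's multi-scale fusion; the particular luminance, contrast, and saliency terms below are assumptions):

```python
import cv2
import numpy as np

def fuse(images):
    """Blend candidate enhanced images with per-pixel weights built from
    luminance, local contrast, and a simple saliency proxy."""
    weights = []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255
        luminance = gray
        contrast = np.abs(cv2.Laplacian(gray, cv2.CV_32F))
        # Saliency proxy: deviation from the local mean (blurred image).
        saliency = np.abs(gray - cv2.GaussianBlur(gray, (31, 31), 0))
        weights.append(luminance * contrast * saliency + 1e-6)
    weights = np.stack(weights)
    weights /= weights.sum(axis=0, keepdims=True)   # normalize per pixel
    stack = np.stack([img.astype(np.float32) for img in images])
    fused = (weights[..., None] * stack).sum(axis=0)
    return np.clip(fused, 0, 255).astype(np.uint8)
```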
arXiv Detail & Related papers (2020-05-28T01:57:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.