Enhancing Perception and Immersion in Pre-Captured Environments through
Learning-Based Eye Height Adaptation
- URL: http://arxiv.org/abs/2308.13042v1
- Date: Thu, 24 Aug 2023 19:14:28 GMT
- Title: Enhancing Perception and Immersion in Pre-Captured Environments through
Learning-Based Eye Height Adaptation
- Authors: Qi Feng, Hubert P. H. Shum, Shigeo Morishima
- Abstract summary: We propose a learning-based approach for novel views for omnidirectional images with altered eye heights.
With the improved omnidirectional-aware layered depth image, our approach synthesizes natural and realistic visuals for eye height adaptation.
- Score: 19.959897524064353
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pre-captured immersive environments using omnidirectional cameras provide a
wide range of virtual reality applications. Previous research has shown that
manipulating the eye height in egocentric virtual environments can
significantly affect distance perception and immersion. However, the influence
of eye height in pre-captured real environments has received less attention due
to the difficulty of altering the perspective after finishing the capture
process. To explore this influence, we first conduct a pilot study in which we capture real environments at multiple eye heights and ask participants to judge egocentric distances and immersion. If a significant influence is
confirmed, an effective image-based approach to adapt pre-captured real-world
environments to the user's eye height would be desirable. Motivated by the
study, we propose a learning-based approach for synthesizing novel views for
omnidirectional images with altered eye heights. This approach employs a
multitask architecture that learns depth and semantic segmentation in two formats and produces high-quality estimates of both to facilitate the inpainting stage. With the improved omnidirectional-aware
layered depth image, our approach synthesizes natural and realistic visuals for
eye height adaptation. Quantitative and qualitative evaluation shows favorable
results against state-of-the-art methods, and an extensive user study verifies
improved perception and immersion for pre-captured real-world environments.
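To make the eye height adaptation concrete, below is a minimal numpy sketch of only the geometric re-projection involved: back-project an equirectangular panorama with per-pixel depth, translate the camera vertically, and re-project. It assumes metric depth is already available (in the paper it comes from the multitask depth/segmentation network), and it leaves disocclusions black instead of completing them with the layered depth image inpainting; `shift_eye_height` and `delta_h` are illustrative names, not the paper's API.

```python
import numpy as np

def shift_eye_height(rgb, depth, delta_h):
    """Re-project an equirectangular panorama to a viewpoint whose eye
    height differs by delta_h metres (positive = camera moves up).

    rgb:   (H, W, 3) uint8 panorama
    depth: (H, W) per-pixel metric depth (assumed given here)
    """
    H, W, _ = rgb.shape
    # Spherical angles for every pixel of the equirectangular grid.
    lon = (np.arange(W) + 0.5) / W * 2 * np.pi - np.pi   # [-pi, pi)
    lat = np.pi / 2 - (np.arange(H) + 0.5) / H * np.pi   # top row = +pi/2
    lon, lat = np.meshgrid(lon, lat)

    # Back-project to 3D points (y is the up axis).
    x = depth * np.cos(lat) * np.sin(lon)
    y = depth * np.sin(lat)
    z = depth * np.cos(lat) * np.cos(lon)

    # Raising the camera by delta_h == lowering the scene by delta_h.
    y = y - delta_h

    # Re-project to the new panorama.
    r = np.sqrt(x**2 + y**2 + z**2)
    new_lon = np.arctan2(x, z)
    new_lat = np.arcsin(np.clip(y / np.maximum(r, 1e-6), -1.0, 1.0))
    u = ((new_lon + np.pi) / (2 * np.pi) * W).astype(int) % W
    v = ((np.pi / 2 - new_lat) / np.pi * H).astype(int).clip(0, H - 1)

    # Slow but explicit z-buffered forward splat; pixels left black are
    # the disocclusions the paper's LDI-based inpainting would fill.
    out = np.zeros_like(rgb)
    zbuf = np.full((H, W), np.inf)
    for i in range(H):
        for j in range(W):
            if r[i, j] < zbuf[v[i, j], u[i, j]]:
                zbuf[v[i, j], u[i, j]] = r[i, j]
                out[v[i, j], u[i, j]] = rgb[i, j]
    return out
```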
Related papers
- Influence of field of view in visual prostheses design: Analysis with a VR system [3.9998518782208783]
We evaluate the influence of field of view with respect to spatial resolution in visual prostheses.
Twenty-four normally sighted participants were asked to find and recognize everyday objects.
Results show that the accuracy and response time decrease when the field of view is increased.
arXiv Detail & Related papers (2025-01-28T22:25:22Z)
- Towards Understanding Depth Perception in Foveated Rendering [8.442383621450247]
We present the first evaluation exploring the effects of foveated rendering on stereoscopic depth perception.
Our analysis demonstrates that stereoscopic acuity is unaffected (or even improved) by high levels of peripheral blur.
The findings indicate that foveated rendering does not impact stereoscopic depth perception, and stereoacuity remains unaffected up to 2x stronger foveation than commonly used.
arXiv Detail & Related papers (2025-01-28T16:06:29Z)
- HUPE: Heuristic Underwater Perceptual Enhancement with Semantic Collaborative Learning [62.264673293638175]
Existing underwater image enhancement methods primarily focus on improving visual quality while overlooking practical implications.
We propose an invertible network for underwater perception enhancement, dubbed HUPE, which enhances visual quality and demonstrates flexibility in handling other downstream tasks.
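This summary does not spell out HUPE's architecture; as background on what makes a network invertible by construction, here is a minimal numpy sketch of an additive coupling block, the standard building block of invertible networks. All names are illustrative and nothing here is HUPE's actual code.

```python
import numpy as np

class AdditiveCoupling:
    """One additive coupling block: y1 = x1, y2 = x2 + t(x1).
    Invertible by construction: the inverse just subtracts t(y1)."""
    def __init__(self, dim, hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0, 0.1, (dim // 2, hidden))
        self.w2 = rng.normal(0, 0.1, (hidden, dim - dim // 2))

    def _t(self, x1):  # small MLP producing the additive shift
        return np.tanh(x1 @ self.w1) @ self.w2

    def forward(self, x):
        x1, x2 = np.split(x, [x.shape[-1] // 2], axis=-1)
        return np.concatenate([x1, x2 + self._t(x1)], axis=-1)

    def inverse(self, y):
        y1, y2 = np.split(y, [y.shape[-1] // 2], axis=-1)
        return np.concatenate([y1, y2 - self._t(y1)], axis=-1)

x = np.random.randn(4, 8)
block = AdditiveCoupling(8)
assert np.allclose(block.inverse(block.forward(x)), x)  # exact inversion
```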
arXiv Detail & Related papers (2024-11-27T12:37:03Z)
- Low-Light Enhancement Effect on Classification and Detection: An Empirical Study [48.6762437869172]
We evaluate the impact of Low-Light Image Enhancement (LLIE) methods on high-level vision tasks.
Our findings suggest a disconnect between image enhancement for human visual perception and for machine analysis.
This insight is crucial for the development of LLIE techniques that align with the needs of both human and machine vision.
arXiv Detail & Related papers (2024-09-22T14:21:31Z)
- Deep Domain Adaptation: A Sim2Real Neural Approach for Improving Eye-Tracking Systems [80.62854148838359]
Eye image segmentation is a critical step in eye tracking that has great influence over the final gaze estimate.
We use dimensionality-reduction techniques to measure the overlap between the target eye images and synthetic training data.
Our methods result in robust, improved performance when tackling the discrepancy between simulation and real-world data samples.
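The specific dimensionality-reduction technique and overlap measure are not given in this summary; the sketch below shows one plausible instantiation, assuming PCA over pooled flattened images and a nearest-neighbour distance as the overlap score. `pca_overlap` is a hypothetical helper, not the paper's method.

```python
import numpy as np

def pca_overlap(real, synth, k=2):
    """Project two image sets into a shared k-D PCA space and report a
    crude overlap score: mean distance from each real sample to its
    nearest synthetic neighbour (lower = synthetic data covers real).

    real, synth: (N, D) arrays of flattened eye images.
    """
    data = np.vstack([real, synth])
    mean = data.mean(axis=0)
    # Principal axes from the pooled data.
    _, _, vt = np.linalg.svd(data - mean, full_matrices=False)
    pr = (real - mean) @ vt[:k].T
    ps = (synth - mean) @ vt[:k].T
    # Nearest-synthetic distance for every real sample.
    d = np.linalg.norm(pr[:, None, :] - ps[None, :, :], axis=-1)
    return d.min(axis=1).mean()

real = np.random.rand(100, 32 * 32)   # stand-ins for flattened eye images
synth = np.random.rand(500, 32 * 32)
print(pca_overlap(real, synth))
```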
arXiv Detail & Related papers (2024-03-23T22:32:06Z)
- Neural Point-based Volumetric Avatar: Surface-guided Neural Points for Efficient and Photorealistic Volumetric Head Avatar [62.87222308616711]
We propose Neural Point-based Volumetric Avatar, a method that adopts a neural point representation and a neural volume rendering process.
Specifically, the neural points are strategically constrained around the surface of the target expression via a high-resolution UV displacement map.
By design, our method is better equipped to handle topologically changing regions and thin structures while also ensuring accurate expression control when animating avatars.
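To make the surface-guided constraint concrete, here is a hedged numpy sketch of placing points from a UV displacement map: each texel's point is the base surface sample offset along its normal. The actual method learns the high-resolution displacement map and renders the points volumetrically; `neural_points_from_uv` is an illustrative name.

```python
import numpy as np

def neural_points_from_uv(base_pos, base_normal, disp_map):
    """Place points near a target surface using a UV displacement map,
    in the spirit of surface-guided neural points.

    base_pos, base_normal: (H, W, 3) surface positions/normals sampled
                           on the head's UV parameterisation
    disp_map:              (H, W) scalar displacement per texel
    """
    points = base_pos + disp_map[..., None] * base_normal
    return points.reshape(-1, 3)          # (H*W, 3) point cloud

# Toy example: a flat patch displaced into a bump.
H = W = 64
u, v = np.meshgrid(np.linspace(-1, 1, W), np.linspace(-1, 1, H))
base_pos = np.stack([u, v, np.zeros_like(u)], axis=-1)
base_normal = np.broadcast_to(np.array([0.0, 0.0, 1.0]), base_pos.shape)
disp = 0.1 * np.exp(-(u**2 + v**2) / 0.2)   # stand-in for a learned map
print(neural_points_from_uv(base_pos, base_normal, disp).shape)  # (4096, 3)
```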
arXiv Detail & Related papers (2023-07-11T03:40:10Z)
- Self-supervised Interest Point Detection and Description for Fisheye and Perspective Images [7.451395029642832]
Keypoint detection and matching is a fundamental task in many computer vision problems.
In this work, we focus on the case where matching performance degrades because of the geometry of the cameras used for image acquisition.
We build on a state-of-the-art approach and derive a self-supervised procedure that enables training an interest point detector and descriptor network.
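The summary leaves the procedure abstract; one common way to get self-supervision from camera geometry is to warp an image with a known distortion model so that dense correspondences come for free as training targets. The sketch below uses an assumed single-coefficient radial model purely to illustrate that idea, not the paper's exact procedure.

```python
import numpy as np

def radial_warp(img, k=0.3):
    """Apply a known fisheye-like radial distortion and return the
    warped image plus the dense correspondence map, which serves as
    free self-supervision for a detector/descriptor network.
    """
    H, W = img.shape[:2]
    yy, xx = np.mgrid[0:H, 0:W].astype(float)
    # Normalised coordinates in [-1, 1] around the image centre.
    nx, ny = (xx - W / 2) / (W / 2), (yy - H / 2) / (H / 2)
    r2 = nx**2 + ny**2
    # Inverse mapping: where each output pixel samples from.
    sx = (nx * (1 + k * r2)) * (W / 2) + W / 2
    sy = (ny * (1 + k * r2)) * (H / 2) + H / 2
    valid = (sx >= 0) & (sx < W) & (sy >= 0) & (sy < H)
    warped = np.zeros_like(img)
    warped[valid] = img[sy[valid].astype(int), sx[valid].astype(int)]
    corr = np.stack([sx, sy], axis=-1)    # ground-truth correspondences
    return warped, corr, valid

img = (np.random.rand(128, 128) * 255).astype(np.uint8)
warped, corr, valid = radial_warp(img)
# A detector/descriptor can now be trained so that a keypoint at
# corr[v, u] in `img` matches the keypoint at (u, v) in `warped`.
```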
arXiv Detail & Related papers (2023-06-02T22:39:33Z)
- Learning to Relight Portrait Images via a Virtual Light Stage and Synthetic-to-Real Adaptation [76.96499178502759]
Relighting aims to re-illuminate the person in the image as if the person appeared in an environment with the target lighting.
Recent methods rely on deep learning to achieve high-quality results.
We propose a new approach that can perform on par with the state-of-the-art (SOTA) relighting methods without requiring a light stage.
arXiv Detail & Related papers (2022-09-21T17:15:58Z)
- IllumiNet: Transferring Illumination from Planar Surfaces to Virtual Objects in Augmented Reality [38.83696624634213]
This paper presents a learning-based illumination estimation method for virtual objects in real environments.
Given a single RGB image, our method directly infers the relit virtual object by transferring the illumination features extracted from planar surfaces in the scene to the desired geometries.
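As a loose illustration of the transfer idea only, the sketch below estimates a light colour from a planar region and applies it to a virtual object's albedo. The mean-colour statistic and `transfer_plane_illumination` are crude stand-ins for the paper's learned illumination features.

```python
import numpy as np

def transfer_plane_illumination(scene, plane_mask, albedo):
    """Crude stand-in for learned illumination transfer: estimate the
    light colour/intensity from a planar region of the scene and use
    it to tint a virtual object's base colour.

    scene:      (H, W, 3) float RGB image in [0, 1]
    plane_mask: (H, W) bool mask of a detected planar surface
    albedo:     (h, w, 3) virtual object's base colour
    """
    # Mean colour of the plane approximates incident light * plane
    # albedo; the real method extracts learned features instead.
    light = scene[plane_mask].mean(axis=0)
    return np.clip(albedo * light / max(light.mean(), 1e-6), 0.0, 1.0)

scene = np.random.rand(240, 320, 3)
mask = np.zeros((240, 320), dtype=bool)
mask[150:, :] = True                      # pretend this is the floor
relit = transfer_plane_illumination(scene, mask, np.full((64, 64, 3), 0.8))
```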
arXiv Detail & Related papers (2020-07-12T13:11:14Z)
- Learning Depth With Very Sparse Supervision [57.911425589947314]
This paper explores the idea that perception gets coupled to 3D properties of the world via interaction with the environment.
We train a specialized global-local network architecture with what would be available to a robot interacting with the environment.
Experiments on several datasets show that, when ground truth is available even for just one of the image pixels, the proposed network can learn monocular dense depth estimation up to 22.5% more accurately than state-of-the-art approaches.
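Mechanically, supervising depth at only a handful of pixels reduces to masking the loss; a minimal numpy sketch of that mechanic follows, with `sparse_depth_loss` as an illustrative name (the paper additionally relies on its global-local architecture and interaction data).

```python
import numpy as np

def sparse_depth_loss(pred, gt, mask):
    """L1 depth loss evaluated only where ground truth exists; with
    very sparse supervision, `mask` may hold as little as one pixel.

    pred, gt: (H, W) predicted / ground-truth depth
    mask:     (H, W) bool, True where gt is observed
    """
    if not mask.any():
        return 0.0
    return np.abs(pred[mask] - gt[mask]).mean()

pred = np.random.rand(48, 64)
gt = np.random.rand(48, 64)
mask = np.zeros((48, 64), dtype=bool)
mask[24, 32] = True                      # a single supervised pixel
print(sparse_depth_loss(pred, gt, mask))
```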
arXiv Detail & Related papers (2020-03-02T10:44:13Z)