Keep It Real: a Window to Real Reality in Virtual Reality
- URL: http://arxiv.org/abs/2004.10313v3
- Date: Thu, 12 Nov 2020 07:02:43 GMT
- Title: Keep It Real: a Window to Real Reality in Virtual Reality
- Authors: Baihan Lin
- Abstract summary: We propose a new interaction paradigm for virtual reality (VR) environments, consisting of a virtual mirror or window projected onto a virtual surface.
This technique can be applied to videos, live-streaming apps, and augmented and virtual reality settings to provide an interactive and immersive user experience.
- Score: 13.173307471333619
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a new interaction paradigm for virtual reality (VR)
environments: a virtual mirror or window projected onto a virtual surface,
representing the correct perspective geometry of a mirror or window reflecting
the real world. The technique can be applied to videos, live-streaming apps,
and augmented and virtual reality settings to provide an interactive and
immersive user experience. To support such a perspective-accurate
representation, we implemented computer vision algorithms for feature detection
and correspondence matching. To constrain the solutions, we incorporated an
automatically tuned scaling factor on the homography transform matrix so that
each image frame transitions smoothly while keeping the user in view. The
system is a real-time rendering framework in which users can bring their
real-life presence into the virtual space.
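The abstract describes estimating a homography from matched image features and constraining it with a smoothly varying scale term. Below is a minimal sketch of that idea using OpenCV ORB features and RANSAC homography fitting; the exponential-smoothing factor `alpha`, the helper names, and the way the scale is read off the homography are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch: per-frame homography estimation with a smoothed scale term.
# Assumes OpenCV; `alpha` and the scale extraction are illustrative choices only.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def estimate_homography(prev_gray, curr_gray):
    """Detect ORB features, match them, and fit a homography with RANSAC."""
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return None
    matches = matcher.match(des1, des2)
    if len(matches) < 4:
        return None
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

def smooth_scale(H, prev_scale, alpha=0.9):
    """Blend the homography's scale component with the previous frame's value
    so the projected mirror/window transitions smoothly between frames."""
    # Approximate the scale from the upper-left 2x2 block of H.
    scale = float(np.sqrt(abs(np.linalg.det(H[:2, :2]))))
    smoothed = alpha * prev_scale + (1.0 - alpha) * scale
    H_adj = H.copy()
    H_adj[:2, :2] *= smoothed / max(scale, 1e-6)
    return H_adj, smoothed
```

In a real-time pipeline, the adjusted homography could then be used with cv2.warpPerspective to project each camera frame onto the virtual mirror or window surface.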
Related papers
- VR-GS: A Physical Dynamics-Aware Interactive Gaussian Splatting System in Virtual Reality [39.53150683721031]
Our proposed VR-GS system represents a leap forward in human-centered 3D content interaction.
The components of our Virtual Reality system are designed for high efficiency and effectiveness.
arXiv Detail & Related papers (2024-01-30T01:28:36Z)
- VisionaryVR: An Optical Simulation Tool for Evaluating and Optimizing Vision Correction Solutions in Virtual Reality [0.5492530316344587]
The tool incorporates an experiment controller, a generic eye-tracking controller, a defocus simulator, and a generic VR questionnaire loader.
It enables vision scientists to expand their research toolkit with a robust, realistic, and fast research environment.
arXiv Detail & Related papers (2023-12-01T16:18:55Z)
- ASSIST: Interactive Scene Nodes for Scalable and Realistic Indoor Simulation [17.34617771579733]
We present ASSIST, an object-wise neural radiance field as a panoptic representation for compositional and realistic simulation.
A novel scene node data structure that stores the information of each object in a unified fashion allows online interaction in both intra- and cross-scene settings.
arXiv Detail & Related papers (2023-11-10T17:56:43Z)
- FLARE: Fast Learning of Animatable and Relightable Mesh Avatars [64.48254296523977]
Our goal is to efficiently learn personalized animatable 3D head avatars from videos that are geometrically accurate, realistic, relightable, and compatible with current rendering systems.
We introduce FLARE, a technique that enables the creation of animatable and relightable avatars from a single monocular video.
arXiv Detail & Related papers (2023-10-26T16:13:00Z)
- Virtual Guidance as a Mid-level Representation for Navigation [8.712750753534532]
"Virtual Guidance" is designed to visually represent non-visual instructional signals.
We evaluate our proposed method through experiments in both simulated and real-world settings.
arXiv Detail & Related papers (2023-03-05T17:55:15Z)
- The Gesture Authoring Space: Authoring Customised Hand Gestures for Grasping Virtual Objects in Immersive Virtual Environments [81.5101473684021]
This work proposes a hand gesture authoring tool for object-specific grab gestures, allowing virtual objects to be grabbed as in the real world.
The presented solution uses template matching for gesture recognition and requires no technical knowledge to design and create custom tailored hand gestures.
The study showed that gestures created with the proposed approach are perceived by users as a more natural input modality than the others.
arXiv Detail & Related papers (2022-07-03T18:33:33Z)
- Real-time Virtual-Try-On from a Single Example Image through Deep Inverse Graphics and Learned Differentiable Renderers [13.894134334543363]
We propose a novel framework based on deep learning to build a real-time inverse graphics encoder.
Our imitator is a generative network that learns to accurately reproduce the behavior of a given non-differentiable renderer.
Our framework enables novel applications where consumers can virtually try on a novel, unknown product from an inspirational reference image.
arXiv Detail & Related papers (2022-05-12T18:44:00Z)
- Towards Scale Consistent Monocular Visual Odometry by Learning from the Virtual World [83.36195426897768]
We propose VRVO, a novel framework for retrieving the absolute scale from virtual data.
We first train a scale-aware disparity network using both monocular real images and stereo virtual data.
The resulting scale-consistent disparities are then integrated with a direct VO system.
arXiv Detail & Related papers (2022-03-11T01:51:54Z)
- VIRT: Improving Representation-based Models for Text Matching through Virtual Interaction [50.986371459817256]
We propose a novel Virtual InteRacTion mechanism, termed VIRT, to enable full and deep interaction modeling in representation-based models.
VIRT asks representation-based encoders to conduct virtual interactions that mimic the behavior of interaction-based models.
arXiv Detail & Related papers (2021-12-08T09:49:28Z)
- Evaluating Continual Learning Algorithms by Generating 3D Virtual Environments [66.83839051693695]
Continual learning refers to the ability of humans and animals to incrementally learn over time in a given environment.
We propose to leverage recent advances in 3D virtual environments in order to approach the automatic generation of potentially life-long dynamic scenes with photo-realistic appearance.
A novel element of this paper is that scenes are described in a parametric way, thus allowing the user to fully control the visual complexity of the input stream the agent perceives.
arXiv Detail & Related papers (2021-09-16T10:37:21Z)
- OpenRooms: An End-to-End Open Framework for Photorealistic Indoor Scene Datasets [103.54691385842314]
We propose a novel framework for creating large-scale photorealistic datasets of indoor scenes.
Our goal is to make the dataset creation process widely accessible.
This enables important applications in inverse rendering, scene understanding and robotics.
arXiv Detail & Related papers (2020-07-25T06:48:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the list (including all information) and is not responsible for any consequences of its use.