Augment Yourself: Mixed Reality Self-Augmentation Using Optical
See-through Head-mounted Displays and Physical Mirrors
- URL: http://arxiv.org/abs/2007.02884v1
- Date: Mon, 6 Jul 2020 16:53:47 GMT
- Title: Augment Yourself: Mixed Reality Self-Augmentation Using Optical
See-through Head-mounted Displays and Physical Mirrors
- Authors: Mathias Unberath, Kevin Yu, Roghayeh Barmaki, Alex Johnson, Nassir
Navab
- Abstract summary: Optical see-through head-mounted displays (OST HMDs) are one of the key technologies for merging virtual objects and physical scenes to provide an immersive mixed reality (MR) environment to its user.
We propose a novel concept and prototype system that combines OST HMDs and physical mirrors to enable self-augmentation and provide an immersive MR environment centered around the user.
Our system, to the best of our knowledge the first of its kind, estimates the user's pose in the virtual image generated by the mirror using an RGBD camera attached to the HMD and anchors virtual objects to the reflection rather than the user directly.
- Score: 49.49841698372575
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optical see-through head-mounted displays (OST HMDs) are one of the key
technologies for merging virtual objects and physical scenes to provide an
immersive mixed reality (MR) environment to the user. A fundamental limitation
of HMDs is that users themselves cannot be augmented conveniently: in a casual
posture, only the distal upper extremities are within the field of view of the
HMD. Consequently, most MR applications that are centered around the user, such
as virtual dressing rooms or learning of body movements, cannot be realized
with HMDs. In this paper, we propose a novel concept and prototype system that
combines OST HMDs and physical mirrors to enable self-augmentation and provide
an immersive MR environment centered around the user. Our system, to the best
of our knowledge the first of its kind, estimates the user's pose in the
virtual image generated by the mirror using an RGBD camera attached to the HMD
and anchors virtual objects to the reflection rather than the user directly. We
evaluate our system quantitatively with respect to calibration accuracy and
infrared signal degradation effects due to the mirror, and show its potential
in applications where large mirrors are already an integral part of the
facility. Particularly, we demonstrate its use for virtual fitting rooms,
gaming applications, anatomy learning, and personal fitness. In contrast to
competing devices such as LCD-equipped smart mirrors, the proposed system
consists only of an HMD with an RGBD camera and thus does not require a prepared
environment, making it very flexible and generic. In future work, we will aim to
investigate how the system can be optimally used for promising applications such
as physical rehabilitation and personal training.
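The core geometric idea, anchoring virtual content to the user's reflection rather than to the user, can be illustrated with a short sketch. The snippet below is a minimal illustration under stated assumptions, not the authors' implementation: it assumes a mirror plane fitted to a few 3D points sampled on the mirror surface (e.g., from the HMD-mounted RGBD depth map), and the helpers fit_mirror_plane, reflect_across_plane, and anchor_to_reflection as well as all numeric values are hypothetical.

```python
import numpy as np

def fit_mirror_plane(points):
    """Least-squares plane fit to 3D points sampled on the mirror surface.
    Returns (point_on_plane, unit_normal)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                                   # direction of least variance
    return centroid, normal / np.linalg.norm(normal)

def reflect_across_plane(x, plane_point, plane_normal):
    """Mirror a 3D point across the plane, mapping a tracked physical joint to
    its virtual image behind the mirror (or vice versa)."""
    n = plane_normal
    return x - 2.0 * np.dot(x - plane_point, n) * n

def anchor_to_reflection(reflected_joint, offset):
    """Place a virtual object at a fixed offset from a joint of the user's
    reflection; the OST HMD then renders it at this world-space position."""
    return reflected_joint + offset

# Hypothetical example: four points on the mirror surface and one tracked joint.
mirror_samples = np.array([[0.0, 0.0, 2.0], [1.0, 0.0, 2.0],
                           [0.0, 1.5, 2.0], [1.0, 1.5, 2.0]])
p0, n = fit_mirror_plane(mirror_samples)
head_physical = np.array([0.5, 1.6, 0.8])             # joint on the physical user
head_reflected = reflect_across_plane(head_physical, p0, n)
label_pos = anchor_to_reflection(head_reflected, np.array([0.0, 0.10, 0.0]))
```

If body joints are detected directly on the reflection in the RGBD data, as the abstract describes, the explicit reflection step is unnecessary and the detected positions can serve as anchors as-is.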
Related papers
- HMD$^2$: Environment-aware Motion Generation from Single Egocentric Head-Mounted Device [41.563572075062574]
This paper investigates the online generation of realistic full-body human motion using a single head-mounted device with an outward-facing color camera.
We introduce a novel system, HMD$^2$, designed to balance motion reconstruction and generation.
arXiv Detail & Related papers (2024-09-20T11:46:48Z)
- HMD-Poser: On-Device Real-time Human Motion Tracking from Scalable Sparse Observations [28.452132601844717]
We propose HMD-Poser, the first unified approach to recover full-body motions using scalable sparse observations from HMD and body-worn IMUs.
A lightweight temporal-spatial feature learning network is proposed in HMD-Poser to guarantee that the model runs in real-time on HMDs.
Extensive experimental results on the challenging AMASS dataset show that HMD-Poser achieves new state-of-the-art results in both accuracy and real-time performance.
arXiv Detail & Related papers (2024-03-06T09:10:36Z)
- Toward Optimized VR/AR Ergonomics: Modeling and Predicting User Neck Muscle Contraction [21.654553113159665]
We measure, model, and predict VR users' neck muscle contraction levels (MCL) while they move their heads to interact with the virtual environment.
We develop a bio-physically inspired computational model to predict neck MCL under diverse head kinematic states.
We hope this research will motivate new ergonomic-centered designs for VR/AR and interactive graphics applications.
arXiv Detail & Related papers (2023-08-28T18:58:01Z)
- Towards a Pipeline for Real-Time Visualization of Faces for VR-based Telepresence and Live Broadcasting Utilizing Neural Rendering [58.720142291102135]
Head-mounted displays (HMDs) for Virtual Reality pose a considerable obstacle for a realistic face-to-face conversation in VR.
We present an approach that focuses on low-cost hardware and can be used on a commodity gaming computer with a single GPU.
arXiv Detail & Related papers (2023-01-04T08:49:51Z)
- QuestSim: Human Motion Tracking from Sparse Sensors with Simulated Avatars [80.05743236282564]
Real-time tracking of human body motion is crucial for immersive experiences in AR/VR.
We present a reinforcement learning framework that takes in sparse signals from an HMD and two controllers.
We show that a single policy can be robust to diverse locomotion styles, different body sizes, and novel environments.
arXiv Detail & Related papers (2022-09-20T00:25:54Z)
- Attention based Occlusion Removal for Hybrid Telepresence Systems [5.006086647446482]
We propose a novel attention-enabled encoder-decoder architecture for HMD de-occlusion.
We report superior qualitative and quantitative results over state-of-the-art methods.
We also present applications of this approach to hybrid video teleconferencing using existing animation and 3D face reconstruction pipelines.
arXiv Detail & Related papers (2021-12-02T10:18:22Z)
- Robust Egocentric Photo-realistic Facial Expression Transfer for Virtual Reality [68.18446501943585]
Social presence will fuel the next generation of communication systems driven by digital humans in virtual reality (VR).
The best 3D video-realistic VR avatars that minimize the uncanny effect rely on person-specific (PS) models.
This paper makes progress in overcoming these limitations by proposing an end-to-end multi-identity architecture.
arXiv Detail & Related papers (2021-04-10T15:48:53Z)
- Unmasking Communication Partners: A Low-Cost AI Solution for Digitally Removing Head-Mounted Displays in VR-Based Telepresence [62.997667081978825]
Face-to-face conversation in Virtual Reality (VR) is a challenge when participants wear head-mounted displays (HMDs).
Past research has shown that high-fidelity face reconstruction with personal avatars in VR is possible under laboratory conditions with high-cost hardware.
We propose one of the first low-cost systems for this task which uses only open source, free software and affordable hardware.
arXiv Detail & Related papers (2020-11-06T23:17:12Z)
- Physics-Based Dexterous Manipulations with Estimated Hand Poses and Residual Reinforcement Learning [52.37106940303246]
We learn a model that maps noisy input hand poses to target virtual poses.
The agent is trained in a residual setting by using a model-free hybrid RL+IL approach.
We test our framework in two applications that use hand pose estimates for dexterous manipulations: hand-object interactions in VR and hand-object motion reconstruction in-the-wild.
arXiv Detail & Related papers (2020-08-07T17:34:28Z)