Cross-Reality Re-Rendering: Manipulating between Digital and Physical Realities
- URL: http://arxiv.org/abs/2211.08005v1
- Date: Tue, 15 Nov 2022 09:31:52 GMT
- Title: Cross-Reality Re-Rendering: Manipulating between Digital and Physical Realities
- Authors: Siddhartha Datta
- Abstract summary: We investigate the design of a system that enables users to manipulate the perception of both their physical realities and digital realities.
Users can inspect their view history from either reality, and generate interventions that can be interoperably rendered cross-reality in real-time.
- Score: 2.538209532048867
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The advent of personalized reality has arrived. Rapid development in AR/MR/VR
enables users to augment or diminish their perception of the physical world.
Robust tooling for digital interface modification enables users to change how
their software operates. As digital realities become an increasingly impactful
aspect of human lives, we investigate the design of a system that enables users
to manipulate the perception of both their physical realities and digital
realities. Users can inspect their view history from either reality, and
generate interventions that can be interoperably rendered cross-reality in
real-time. Personalized interventions can be generated with mask, text, and
model hooks. Collaboration between users scales the availability of
interventions. We verify our implementation against our design requirements
with cognitive walkthroughs, personas, and scalability tests.
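The abstract mentions that personalized interventions are generated through mask, text, and model hooks. As a generic illustration of a hook-based intervention pipeline (not the paper's actual API; the class, function, and frame names below are hypothetical), such a design might look like:

```python
# Hypothetical sketch of a hook-based intervention pipeline. The names
# (InterventionPipeline, Frame, register, render) and the dict-based frame
# representation are assumptions for illustration only.
from typing import Callable, Dict, List

Frame = Dict[str, object]          # stand-in for a rendered view of a reality
Hook = Callable[[Frame], Frame]    # an intervention transforms a frame

class InterventionPipeline:
    """Registers hooks by kind and applies them to each incoming frame."""

    def __init__(self) -> None:
        self.hooks: Dict[str, List[Hook]] = {"mask": [], "text": [], "model": []}

    def register(self, kind: str, hook: Hook) -> None:
        self.hooks[kind].append(hook)

    def render(self, frame: Frame) -> Frame:
        # Apply mask hooks first, then text, then model, in registration order.
        for kind in ("mask", "text", "model"):
            for hook in self.hooks[kind]:
                frame = hook(frame)
        return frame
```

For example, `pipeline.register("text", lambda f: {**f, "caption": "redacted"})` would attach a text intervention that every subsequent `render` call applies; collaboration could then amount to sharing registered hooks between users.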
Related papers
- Haptic Repurposing with GenAI [5.424247121310253]
Mixed Reality aims to merge the digital and physical worlds to create immersive human-computer interactions.
This paper introduces Haptic Repurposing with GenAI, an innovative approach to enhance MR interactions by transforming any physical objects into adaptive haptic interfaces for AI-generated virtual assets.
arXiv Detail & Related papers (2024-06-11T13:06:28Z)
- Tremor Reduction for Accessible Ray Based Interaction in VR Applications [0.0]
Many traditional 2D interface interaction methods have been directly converted to work in a VR space with little alteration to the input mechanism.
In this paper, we propose the use of a low-pass filter to normalize user input noise, alleviating fine motor requirements during ray-based interaction.
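The low-pass filtering described in this entry can be sketched as a first-order exponential filter applied to the pointer signal. This is a generic illustration of the technique, not the paper's implementation; the cutoff frequency and sampling rate below are assumed values.

```python
# Minimal sketch of first-order (exponential) low-pass filtering of pointer
# input, as a generic illustration of tremor smoothing. Parameter values
# (2 Hz cutoff, 90 Hz sampling) are assumptions, not taken from the paper.
import math

def lowpass_alpha(cutoff_hz: float, dt: float) -> float:
    """Smoothing factor for a first-order low-pass filter."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    return dt / (dt + rc)

def smooth(samples, cutoff_hz=2.0, dt=1.0 / 90.0):
    """Filter a stream of 1-D pointer coordinates sampled at 1/dt Hz."""
    alpha = lowpass_alpha(cutoff_hz, dt)
    out, state = [], samples[0]
    for x in samples:
        state += alpha * (x - state)  # y[n] = y[n-1] + a * (x[n] - y[n-1])
        out.append(state)
    return out
```

A lower cutoff suppresses more tremor but adds latency to intentional movement, which is the central trade-off in this kind of interaction smoothing.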
arXiv Detail & Related papers (2024-05-12T17:07:16Z)
- Practical and Rich User Digitization [7.021516368759671]
User digitization allows computers to intimately understand their users, capturing activity, pose, routine, and behavior.
Today's consumer devices offer coarse digital representations of users with metrics such as step count, heart rate, and a handful of human activities like running and biking.
My research aims to break this trend, developing sensing systems that increase user digitization fidelity to create new and powerful computing experiences.
arXiv Detail & Related papers (2024-02-29T22:09:27Z)
- Reconfigurable Data Glove for Reconstructing Physical and Virtual Grasps [100.72245315180433]
We present a reconfigurable data glove design to capture different modes of human hand-object interactions.
The glove operates in three modes for various downstream tasks with distinct features.
We evaluate the system's three modes by (i) recording hand gestures and associated forces, (ii) improving manipulation fluency in VR, and (iii) producing realistic simulation effects of various tool uses.
arXiv Detail & Related papers (2023-01-14T05:35:50Z)
- Force-Aware Interface via Electromyography for Natural VR/AR Interaction [69.1332992637271]
We design a learning-based neural interface for natural and intuitive force inputs in VR/AR.
We show that our interface can decode finger-wise forces in real-time with 3.3% mean error, and generalize to new users with little calibration.
We envision our findings to push forward research towards more realistic physicality in future VR/AR.
arXiv Detail & Related papers (2022-10-03T20:51:25Z)
- HSPACE: Synthetic Parametric Humans Animated in Complex Environments [67.8628917474705]
We build a large-scale photo-realistic dataset, Human-SPACE, of animated humans placed in complex indoor and outdoor environments.
We combine a hundred diverse individuals of varying ages, gender, proportions, and ethnicity, with hundreds of motions and scenes, in order to generate an initial dataset of over 1 million frames.
Assets are generated automatically, at scale, and are compatible with existing real time rendering and game engines.
arXiv Detail & Related papers (2021-12-23T22:27:55Z)
- Learning-based pose edition for efficient and interactive design [55.41644538483948]
In computer-aided animation, artists define the key poses of a character by manipulating its skeleton.
Character pose must respect many ill-defined constraints, and so the resulting realism greatly depends on the animator's skill and knowledge.
We describe an efficient tool for pose design, allowing users to intuitively manipulate a pose to create character animations.
arXiv Detail & Related papers (2021-07-01T12:15:02Z)
- Efficient Realistic Data Generation Framework leveraging Deep Learning-based Human Digitization [0.0]
The proposed method takes as input real background images and populates them with human figures in various poses.
A benchmarking and evaluation in the corresponding tasks shows that synthetic data can be effectively used as a supplement to real data.
arXiv Detail & Related papers (2021-06-28T08:07:31Z)
- Robust Egocentric Photo-realistic Facial Expression Transfer for Virtual Reality [68.18446501943585]
Social presence will fuel the next generation of communication systems driven by digital humans in virtual reality (VR).
The best 3D video-realistic VR avatars that minimize the uncanny effect rely on person-specific (PS) models.
This paper makes progress in overcoming these limitations by proposing an end-to-end multi-identity architecture.
arXiv Detail & Related papers (2021-04-10T15:48:53Z)
- Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and demonstrated to be suitable tools to address such a task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.