Deep Billboards towards Lossless Real2Sim in Virtual Reality
- URL: http://arxiv.org/abs/2208.08861v1
- Date: Mon, 8 Aug 2022 16:16:29 GMT
- Title: Deep Billboards towards Lossless Real2Sim in Virtual Reality
- Authors: Naruya Kondo, So Kuroki, Ryosuke Hyakuta, Yutaka Matsuo, Shixiang Shane Gu, Yoichi Ochiai
- Abstract summary: We develop Deep Billboards that model 3D objects implicitly using neural networks.
Our system, connecting a commercial VR headset with a server running neural rendering, allows real-time high-resolution simulation of detailed rigid objects.
We augment Deep Billboards with physical interaction capability, adapting classic billboards from screen-based games to immersive VR.
- Score: 20.7032774699291
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An aspirational goal for virtual reality (VR) is to bring in a rich diversity
of real world objects losslessly. Existing VR applications often convert
objects into explicit 3D models with meshes or point clouds, which allow fast
interactive rendering but also severely limit their quality and the types of
supported objects, fundamentally upper-bounding the "realism" of VR. Inspired
by the classic "billboards" technique in gaming, we develop Deep Billboards
that model 3D objects implicitly using neural networks, where only a 2D image is
rendered at a time based on the user's viewing direction. Our system,
connecting a commercial VR headset with a server running neural rendering,
allows real-time high-resolution simulation of detailed rigid objects, hairy
objects, actuated dynamic objects and more in an interactive VR world,
drastically narrowing the existing real-to-simulation (real2sim) gap.
Additionally, we augment Deep Billboards with physical interaction capability,
adapting classic billboards from screen-based games to immersive VR. At our
pavilion, the visitors can use our off-the-shelf setup for quickly capturing
their favorite objects, and within minutes, experience them in an immersive and
interactive VR world with minimal loss of reality. Our project page:
https://sites.google.com/view/deepbillboards/
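To make the pipeline concrete, here is a minimal client-side sketch of the billboard update loop the abstract describes: each frame, the client sends the current viewing direction to a server running neural rendering and textures the returned 2D image onto a quad that always faces the user. The server URL, payload schema, and function names are assumptions for illustration; the abstract does not specify the system's actual wire protocol.

```python
# Client-side billboard update sketch (illustrative assumptions throughout).
import io
import numpy as np
import requests
from PIL import Image

SERVER_URL = "http://localhost:8000/render"  # hypothetical neural-rendering server

def render_billboard(camera_position, object_position):
    """Request a view-dependent 2D image of the object from the server."""
    view_dir = np.asarray(object_position, dtype=float) - np.asarray(camera_position, dtype=float)
    view_dir /= np.linalg.norm(view_dir)
    resp = requests.post(SERVER_URL, json={
        "view_direction": view_dir.tolist(),  # which side of the object to render
        "resolution": [512, 512],
    })
    resp.raise_for_status()
    # RGBA keeps the background transparent on the billboard quad.
    return np.array(Image.open(io.BytesIO(resp.content)).convert("RGBA"))

# Each frame: fetch the image and texture it onto a quad rotated to face the
# headset: the classic billboard trick, with a neural network behind it.
texture = render_billboard(camera_position=[0.0, 1.6, 2.0],
                           object_position=[0.0, 1.0, 0.0])
```

Keeping neural rendering on the server is what lets a commodity headset display implicit representations at interactive rates: the headset itself only ever draws a textured quad.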
Related papers
- VR-Splatting: Foveated Radiance Field Rendering via 3D Gaussian Splatting and Neural Points [4.962171160815189]
High-performance demands of virtual reality systems present challenges in utilizing fast-to-render scene representations like 3DGS.
We propose foveated rendering as a promising solution to these obstacles.
Our approach leverages the sharp, detailed output of neural point rendering for the foveal region, fused with a smooth 3DGS rendering for the peripheral vision (a minimal compositing sketch follows this entry).
arXiv Detail & Related papers (2024-10-23T14:54:48Z)
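The summary suggests a simple compositing model: two renderers produce the same view at different fidelity, and a gaze-centered mask blends them. Below is a minimal sketch under that assumption, not VR-Splatting's actual implementation; the function name, mask shape, and falloff radii are invented for illustration.

```python
# Gaze-contingent compositing sketch (illustrative; not the paper's code).
import numpy as np

def foveated_composite(foveal, peripheral, gaze_xy, inner_px=120, outer_px=240):
    """Blend two HxWx3 float images: full foveal weight within inner_px of the
    gaze point, full peripheral weight beyond outer_px, smooth ramp between."""
    h, w = foveal.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])  # pixel distance from gaze
    weight = np.clip((outer_px - dist) / (outer_px - inner_px), 0.0, 1.0)
    return weight[..., None] * foveal + (1.0 - weight[..., None]) * peripheral

# Random stand-ins for the two renderers' outputs:
fovea = np.random.rand(480, 640, 3)      # e.g. sharp neural point rendering
periphery = np.random.rand(480, 640, 3)  # e.g. smooth 3D Gaussian splatting
frame = foveated_composite(fovea, periphery, gaze_xy=(320, 240))
```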
- PanoTree: Autonomous Photo-Spot Explorer in Virtual Reality Scenes [2.4140502941897544]
In social VR, photography within a VR scene is an important indicator of visitors' activities.
We propose PanoTree, an automated photo-spot explorer in VR scenes.
A deep scoring network is trained on a large dataset of photos collected by a social VR platform to determine whether humans are likely to take similar photos (a minimal scorer sketch follows this entry).
arXiv Detail & Related papers (2024-05-27T12:54:05Z)
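As a rough illustration of the scoring idea in the PanoTree summary, the sketch below defines a small convolutional network that maps a rendered view to a scalar photo-likeliness score in [0, 1]. The actual PanoTree architecture and training setup are not given in this summary; this model is a generic, hypothetical stand-in.

```python
# Hypothetical photo-spot scorer (not PanoTree's actual architecture).
import torch
import torch.nn as nn

class PhotoScorer(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling to a 32-dim descriptor
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, x):  # x: (B, 3, H, W) rendered candidate views
        return self.head(self.features(x))  # (B, 1) score in [0, 1]

scorer = PhotoScorer()
score = scorer(torch.rand(1, 3, 128, 128))  # random stand-in for a rendered view
```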
- VR-GS: A Physical Dynamics-Aware Interactive Gaussian Splatting System in Virtual Reality [39.53150683721031]
Our proposed VR-GS system represents a leap forward in human-centered 3D content interaction.
The components of our Virtual Reality system are designed for high efficiency and effectiveness.
arXiv Detail & Related papers (2024-01-30T01:28:36Z)
- VR.net: A Real-world Dataset for Virtual Reality Motion Sickness Research [33.092692299254814]
We introduce VR.net, a dataset offering approximately 12 hours of gameplay video from ten real-world games spanning ten diverse genres.
For each video frame, a rich set of motion sickness-related labels, such as camera/object movement, depth field, and motion flow, are accurately assigned.
We utilize a tool to automatically and precisely extract ground truth data from 3D engines' rendering pipelines without accessing VR games' source code.
arXiv Detail & Related papers (2023-06-06T03:43:11Z)
- Virtual Reality in Metaverse over Wireless Networks with User-centered Deep Reinforcement Learning [8.513938423514636]
We introduce a multi-user VR computation-offloading scenario over wireless communication.
In addition, we devise a novel user-centered deep reinforcement learning approach to find a near-optimal solution.
arXiv Detail & Related papers (2023-03-08T03:10:41Z)
- OmniObject3D: Large-Vocabulary 3D Object Dataset for Realistic Perception, Reconstruction and Generation [107.71752592196138]
We propose OmniObject3D, a large-vocabulary 3D object dataset with a massive number of high-quality, real-scanned 3D objects.
It comprises 6,000 scanned objects in 190 daily categories, sharing common classes with popular 2D datasets.
Each 3D object is captured with both 2D and 3D sensors, providing textured meshes, point clouds, multiview rendered images, and multiple real-captured videos.
arXiv Detail & Related papers (2023-01-18T18:14:18Z)
- Towards a Pipeline for Real-Time Visualization of Faces for VR-based Telepresence and Live Broadcasting Utilizing Neural Rendering [58.720142291102135]
Head-mounted displays (HMDs) for Virtual Reality pose a considerable obstacle to realistic face-to-face conversation in VR.
We present an approach that focuses on low-cost hardware and can be used on a commodity gaming computer with a single GPU.
arXiv Detail & Related papers (2023-01-04T08:49:51Z)
- Force-Aware Interface via Electromyography for Natural VR/AR Interaction [69.1332992637271]
We design a learning-based neural interface for natural and intuitive force inputs in VR/AR.
We show that our interface can decode finger-wise forces in real time with 3.3% mean error, and generalize to new users with little calibration.
We envision our findings pushing research towards more realistic physicality in future VR/AR (a toy decoding sketch follows this entry).
arXiv Detail & Related papers (2022-10-03T20:51:25Z)
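To illustrate the regression problem behind the force-aware interface, here is a toy sketch mapping multi-channel EMG features to per-finger forces with closed-form ridge regression. This is a deliberately simplified stand-in for the paper's learned neural interface; the channel count, synthetic data, and linear model are all assumptions.

```python
# Toy EMG-to-force regression sketch (synthetic data; not the paper's model).
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels, n_fingers = 1000, 8, 5

X = rng.normal(size=(n_samples, n_channels))            # EMG features per window
true_W = rng.normal(size=(n_channels, n_fingers))
Y = X @ true_W + 0.1 * rng.normal(size=(n_samples, n_fingers))  # finger forces

# Closed-form ridge regression: W = (X^T X + lambda * I)^{-1} X^T Y
lam = 1e-2
W = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ Y)

mean_err = np.mean(np.abs(X @ W - Y))  # analogous in spirit to the reported mean error
print(f"mean absolute error: {mean_err:.3f}")
```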
- Towards 3D VR-Sketch to 3D Shape Retrieval [128.47604316459905]
We study the use of 3D sketches as an input modality and advocate a VR scenario where retrieval is conducted.
As a first stab at this new 3D VR-sketch to 3D shape retrieval problem, we make four contributions.
arXiv Detail & Related papers (2022-09-20T22:04:31Z)
- Robust Egocentric Photo-realistic Facial Expression Transfer for Virtual Reality [68.18446501943585]
Social presence will fuel the next generation of communication systems driven by digital humans in virtual reality (VR).
The best 3D video-realistic VR avatars that minimize the uncanny effect rely on person-specific (PS) models.
This paper makes progress in overcoming these limitations by proposing an end-to-end multi-identity architecture.
arXiv Detail & Related papers (2021-04-10T15:48:53Z)
- Pixel Codec Avatars [99.36561532588831]
Pixel Codec Avatars (PiCA) is a deep generative model of 3D human faces.
On a single Oculus Quest 2 mobile VR headset, 5 avatars are rendered in real time in the same scene.
arXiv Detail & Related papers (2021-04-09T23:17:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.