HMD$^2$: Environment-aware Motion Generation from Single Egocentric Head-Mounted Device
- URL: http://arxiv.org/abs/2409.13426v1
- Date: Fri, 20 Sep 2024 11:46:48 GMT
- Title: HMD$^2$: Environment-aware Motion Generation from Single Egocentric Head-Mounted Device
- Authors: Vladimir Guzov, Yifeng Jiang, Fangzhou Hong, Gerard Pons-Moll, Richard Newcombe, C. Karen Liu, Yuting Ye, Lingni Ma
- Abstract summary: This paper investigates the online generation of realistic full-body human motion using a single head-mounted device with an outward-facing color camera.
We introduce a novel system, HMD$^2$, designed to balance between motion reconstruction and generation.
- Score: 41.563572075062574
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper investigates the online generation of realistic full-body human motion using a single head-mounted device with an outward-facing color camera and the ability to perform visual SLAM. Given the inherent ambiguity of this setup, we introduce a novel system, HMD$^2$, designed to balance between motion reconstruction and generation. From a reconstruction standpoint, our system aims to maximally utilize the camera streams to produce both analytical and learned features, including head motion, SLAM point cloud, and image embeddings. On the generative front, HMD$^2$ employs a multi-modal conditional motion Diffusion model, incorporating a time-series backbone to maintain temporal coherence in generated motions, and utilizes autoregressive in-painting to facilitate online motion inference with minimal latency (0.17 seconds). Collectively, we demonstrate that our system offers a highly effective and robust solution capable of scaling to an extensive dataset of over 200 hours collected in a wide range of complex indoor and outdoor environments using publicly available smart glasses.
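The autoregressive in-painting described in the abstract can be pictured with a short sketch. The following is a minimal, hypothetical PyTorch illustration, not the authors' released implementation: a conditional diffusion denoiser generates a short window of motion while already-emitted frames are clamped ("in-painted") to their known values at every denoising step, so each new window stays temporally coherent with the past. All names, dimensions, the toy backbone, and the crude predict-then-renoise sampler are assumptions for illustration.

```python
import torch

WINDOW = 32         # frames per diffusion window (assumed)
POSE_DIM = 132      # e.g. 22 joints x 6D rotation (assumed)
STEPS = 8           # denoising steps per window (assumed)

class ToyDenoiser(torch.nn.Module):
    """Stand-in for the multi-modal conditional time-series backbone."""
    def __init__(self, cond_dim: int):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(POSE_DIM + cond_dim + 1, 256),
            torch.nn.SiLU(),
            torch.nn.Linear(256, POSE_DIM),
        )

    def forward(self, x_t, cond, t):
        # Predict clean motion x0 from noisy motion x_t, per-frame
        # conditioning (head pose / point cloud / image features),
        # and the scalar diffusion time t.
        t_feat = t.expand(*x_t.shape[:-1], 1)
        return self.net(torch.cat([x_t, cond, t_feat], dim=-1))

@torch.no_grad()
def generate_window(model, known_prefix, cond):
    """Denoise one window; its first frames are clamped to known_prefix."""
    n_known = known_prefix.shape[0]
    x = torch.randn(WINDOW, POSE_DIM)            # start from pure noise
    for step in reversed(range(STEPS)):
        t = torch.tensor([step / STEPS])
        x = model(x, cond, t)                    # crude x0 prediction
        x[:n_known] = known_prefix               # in-paint the known past
        if step > 0:                             # re-noise for the next step
            x = x + torch.randn_like(x) * (step / STEPS)
    return x

model = ToyDenoiser(cond_dim=16)
cond = torch.zeros(WINDOW, 16)                   # placeholder conditioning
prefix = torch.zeros(WINDOW // 2, POSE_DIM)      # frames generated so far
window = generate_window(model, prefix, cond)    # new output: window[16:]
```

Because only the unknown tail of each window is synthesized and the known past is clamped, new frames become available after a fixed, small number of network evaluations, which is how a scheme of this kind keeps online latency low (the paper reports 0.17 seconds).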
Related papers
- ELMO: Enhanced Real-time LiDAR Motion Capture through Upsampling [12.832526520548855]
This paper introduces ELMO, a real-time upsampling motion capture framework designed for a single LiDAR sensor.
Modeled as a conditional autoregressive transformer-based upsampling motion generator, ELMO achieves 60 fps motion capture from a 20 fps LiDAR point cloud sequence.
arXiv Detail & Related papers (2024-10-09T15:02:08Z)
- Robust Dual Gaussian Splatting for Immersive Human-centric Volumetric Videos [44.50599475213118]
We present a novel approach, dubbed DualGS, for real-time and high-fidelity playback of complex human performance.
Our approach achieves a compression ratio of up to 120 times, requiring only approximately 350KB of storage per frame.
We demonstrate the efficacy of our representation through photo-realistic, free-view experiences on VR headsets.
arXiv Detail & Related papers (2024-09-12T18:33:13Z)
- Motion Capture from Inertial and Vision Sensors [60.5190090684795]
MINIONS is a large-scale Motion capture dataset collected from INertial and visION Sensors.
We conduct experiments on multi-modal motion capture using a monocular camera and very few IMUs.
arXiv Detail & Related papers (2024-07-23T09:41:10Z)
- EgoGaussian: Dynamic Scene Understanding from Egocentric Video with 3D Gaussian Splatting [95.44545809256473]
EgoGaussian is a method capable of simultaneously reconstructing 3D scenes and dynamically tracking 3D object motion from RGB egocentric input alone.
We show significant improvements in terms of both dynamic object and background reconstruction quality compared to the state-of-the-art.
arXiv Detail & Related papers (2024-06-28T10:39:36Z)
- LiveHPS: LiDAR-based Scene-level Human Pose and Shape Estimation in Free Environment [59.320414108383055]
We present LiveHPS, a novel single-LiDAR-based approach for scene-level human pose and shape estimation.
We propose a large-scale human motion dataset, named FreeMotion, collected in various scenarios with diverse human poses.
arXiv Detail & Related papers (2024-02-27T03:08:44Z)
- Mocap Everyone Everywhere: Lightweight Motion Capture With Smartwatches and a Head-Mounted Camera [10.055317239956423]
We present a lightweight and affordable motion capture method based on two smartwatches and a head-mounted camera.
Our method can make wearable motion capture accessible to everyone everywhere, enabling 3D full-body motion capture in diverse environments.
arXiv Detail & Related papers (2024-01-01T18:56:54Z)
- VR-NeRF: High-Fidelity Virtualized Walkable Spaces [55.51127858816994]
We present an end-to-end system for the high-fidelity capture, model reconstruction, and real-time rendering of walkable spaces in virtual reality using neural radiance fields.
arXiv Detail & Related papers (2023-11-05T02:03:14Z)
- Avatars Grow Legs: Generating Smooth Human Motion from Sparse Tracking Inputs with Diffusion Model [18.139630622759636]
We present AGRoL, a novel conditional diffusion model specifically designed to track full bodies given sparse upper-body tracking signals.
Our model is based on a simple multi-layer perceptron (MLP) architecture and a novel conditioning scheme for motion data.
Unlike common diffusion architectures, our compact architecture can run in real-time, making it suitable for online body-tracking applications.
arXiv Detail & Related papers (2023-04-17T19:35:13Z)
- Instant-NVR: Instant Neural Volumetric Rendering for Human-object Interactions from Monocular RGBD Stream [14.844982083586306]
We propose Instant-NVR, a neural approach for instant volumetric human-object tracking and rendering using a single RGBD camera.
In the tracking front-end, we adopt a robust human-object capture scheme to provide sufficient motion priors.
We also provide an on-the-fly reconstruction scheme of the dynamic/static radiance fields via efficient motion-prior searching.
arXiv Detail & Related papers (2023-04-06T16:09:51Z)
- Augment Yourself: Mixed Reality Self-Augmentation Using Optical See-through Head-mounted Displays and Physical Mirrors [49.49841698372575]
Optical see-through head-mounted displays (OST HMDs) are one of the key technologies for merging virtual objects and physical scenes to provide an immersive mixed reality (MR) environment to the user.
We propose a novel concept and prototype system that combines OST HMDs and physical mirrors to enable self-augmentation and provide an immersive MR environment centered around the user.
Our system, to the best of our knowledge the first of its kind, estimates the user's pose in the virtual image generated by the mirror using an RGBD camera attached to the HMD and anchors virtual objects to the reflection rather than the user.
arXiv Detail & Related papers (2020-07-06T16:53:47Z)