Gaze-Sensing LEDs for Head Mounted Displays
- URL: http://arxiv.org/abs/2003.08499v1
- Date: Wed, 18 Mar 2020 23:03:06 GMT
- Title: Gaze-Sensing LEDs for Head Mounted Displays
- Authors: Kaan Akşit, Jan Kautz, David Luebke
- Abstract summary: We exploit the sensing capability of LEDs to create a low-power gaze tracker for virtual reality (VR) applications.
We show that our gaze estimation method does not require complex dimension reduction techniques.
- Score: 73.88424800314634
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a new gaze tracker for Head Mounted Displays (HMDs). We modify
two off-the-shelf HMDs to be gaze-aware using Light Emitting Diodes (LEDs). Our
key contribution is to exploit the sensing capability of LEDs to create a
low-power gaze tracker for virtual reality (VR) applications. This yields a
simple approach using minimal hardware to achieve good accuracy and low latency
using lightweight supervised Gaussian Process Regression (GPR) running on a
mobile device. With our hardware, we show that a Minkowski-distance-based GPR
implementation outperforms the commonly used radial basis function-based
support vector regression (SVR) without the need to precisely determine free
parameters. We show that our gaze estimation method does not require complex
dimension reduction techniques, feature extraction, or distortion corrections
due to off-axis optical paths. We demonstrate two complete HMD prototypes with
a sample eye-tracked application, and report on a series of subjective tests
using our prototypes.
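The abstract names the regressor (supervised GPR with a Minkowski distance measure) but not its exact form, so the following is a minimal sketch of how such a gaze estimator could look. The exponential kernel, the hyperparameter values, and the 8-LED feature dimension are illustrative assumptions, not details from the paper.

```python
# Minimal sketch: gaze estimation with Gaussian Process Regression (GPR)
# over a Minkowski-distance-based kernel. Kernel form, hyperparameters,
# and the 8-LED feature size are assumptions for illustration.
import numpy as np
from scipy.spatial.distance import cdist

def minkowski_kernel(A, B, p=1.5, length_scale=1.0):
    """Exponential kernel over the Minkowski distance between LED readings."""
    D = cdist(A, B, metric="minkowski", p=p)
    return np.exp(-D / length_scale)

class MinkowskiGPR:
    def __init__(self, p=1.5, length_scale=1.0, noise=1e-3):
        self.p, self.length_scale, self.noise = p, length_scale, noise

    def fit(self, X, Y):
        # X: (n, d) LED intensity vectors; Y: (n, 2) known 2D gaze targets
        self.X = X
        K = minkowski_kernel(X, X, self.p, self.length_scale)
        K += self.noise * np.eye(len(X))
        self.alpha = np.linalg.solve(K, Y)  # cache (K + sigma^2 I)^-1 Y
        return self

    def predict(self, Xq):
        Kq = minkowski_kernel(Xq, self.X, self.p, self.length_scale)
        return Kq @ self.alpha  # posterior mean: one 2D gaze point per query

# Usage: calibrate on a handful of fixation targets, then predict continuously.
rng = np.random.default_rng(0)
X_train = rng.uniform(size=(25, 8))   # 25 calibration samples from 8 LEDs
Y_train = rng.uniform(size=(25, 2))   # corresponding 2D gaze targets
gpr = MinkowskiGPR().fit(X_train, Y_train)
gaze = gpr.predict(rng.uniform(size=(1, 8)))
```

Prediction here is one kernel evaluation against a small calibration set plus a matrix product, which is consistent with the abstract's claim that the regressor stays light enough for a mobile device.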
Related papers
- Learning to Make Keypoints Sub-Pixel Accurate [80.55676599677824]
This work addresses the challenge of sub-pixel accuracy in detecting 2D local features.
We propose a novel network that enhances any detector with sub-pixel precision by learning an offset vector for detected features (see the sketch after this entry).
arXiv Detail & Related papers (2024-07-16T12:39:56Z)
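As a rough illustration of the sub-pixel idea in the entry above: the paper learns an offset vector with a network, whereas the snippet below uses a soft-argmax over a local score patch as a simple hypothetical stand-in; the patch radius and temperature are assumptions.

```python
# Illustrative sketch of sub-pixel keypoint refinement via an offset vector.
# A soft-argmax over a local score patch stands in for the learned offset
# network described in the paper.
import numpy as np

def subpixel_refine(scores, keypoint, radius=2, temperature=0.1):
    """Shift an integer keypoint (row, col) by a soft-argmax offset."""
    r, c = keypoint
    patch = scores[r - radius:r + radius + 1, c - radius:c + radius + 1]
    w = np.exp(patch / temperature)
    w /= w.sum()
    coords = np.arange(-radius, radius + 1, dtype=float)
    dy = (w.sum(axis=1) * coords).sum()  # expected row offset in [-radius, radius]
    dx = (w.sum(axis=0) * coords).sum()  # expected column offset
    return (r + dy, c + dx)

scores = np.random.default_rng(1).random((64, 64))
print(subpixel_refine(scores, (32, 40)))  # refined (row, col) near (32, 40)
```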
- HMD-Poser: On-Device Real-time Human Motion Tracking from Scalable Sparse Observations [28.452132601844717]
We propose HMD-Poser, the first unified approach to recover full-body motions using scalable sparse observations from HMD and body-worn IMUs.
A lightweight temporal-spatial feature learning network is proposed in HMD-Poser to guarantee that the model runs in real-time on HMDs.
Extensive experimental results on the challenging AMASS dataset show that HMD-Poser achieves new state-of-the-art results in both accuracy and real-time performance.
arXiv Detail & Related papers (2024-03-06T09:10:36Z)
- GIR: 3D Gaussian Inverse Rendering for Relightable Scene Factorization [62.13932669494098]
This paper presents a 3D Gaussian Inverse Rendering (GIR) method, employing 3D Gaussian representations to factorize the scene into material properties, light, and geometry.
We compute the normal of each 3D Gaussian using the shortest eigenvector, with a directional masking scheme forcing accurate normal estimation without external supervision (see the sketch after this entry).
We adopt an efficient voxel-based indirect illumination tracing scheme that stores direction-aware outgoing radiance in each 3D Gaussian to disentangle secondary illumination for approximating multi-bounce light transport.
arXiv Detail & Related papers (2023-12-08T16:05:15Z)
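The GIR entry above states that each Gaussian's normal is the shortest eigenvector of its covariance; a minimal sketch follows. The sign flip toward the camera is an assumption standing in for the paper's directional masking scheme.

```python
# Minimal sketch: normal of a 3D Gaussian as the eigenvector of its
# covariance with the smallest eigenvalue (the shortest principal axis).
import numpy as np

def gaussian_normal(cov, view_dir):
    """Return the shortest-axis eigenvector, oriented toward the camera."""
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    n = eigvecs[:, 0]                       # shortest principal axis
    if np.dot(n, view_dir) > 0:             # flip so the normal opposes the view ray
        n = -n
    return n

# A flat, disc-like Gaussian: the normal should be close to the z-axis.
cov = np.diag([1.0, 1.0, 0.01])
print(gaussian_normal(cov, view_dir=np.array([0.0, 0.0, -1.0])))
```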
- DiffusionPoser: Real-time Human Motion Reconstruction From Arbitrary Sparse Sensors Using Autoregressive Diffusion [10.439802168557513]
Motion capture from a limited number of body-worn sensors has important applications in health, human performance, and entertainment.
Recent work has focused on accurately reconstructing whole-body motion from a specific sensor configuration using six IMUs.
We propose a single diffusion model, DiffusionPoser, which reconstructs human motion in real-time from an arbitrary combination of sensors.
arXiv Detail & Related papers (2023-08-31T12:36:50Z)
- Slippage-robust Gaze Tracking for Near-eye Display [14.038708833057534]
Slippage of head-mounted devices (HMDs) often results in higher gaze tracking errors.
We propose a slippage-robust gaze tracking method for near-eye displays based on an aspheric eyeball model.
arXiv Detail & Related papers (2022-10-20T23:47:56Z)
- Learning to Simulate Realistic LiDARs [66.7519667383175]
We introduce a pipeline for data-driven simulation of a realistic LiDAR sensor.
We show that our model can learn to encode realistic effects such as dropped points on transparent surfaces (see the sketch after this entry).
We use our technique to learn models of two distinct LiDAR sensors and use them to improve simulated LiDAR data accordingly.
arXiv Detail & Related papers (2022-09-22T13:12:54Z)
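To make the "dropped points" effect from the entry above concrete, here is a hedged sketch that thins a point cloud with a per-ray drop probability; the logistic model and its weights are illustrative assumptions, not the paper's learned network.

```python
# Hedged sketch of LiDAR "ray drop": returns vanish with a probability
# that grows with surface transparency and falls with return intensity.
# The logistic model and its weights are assumptions for illustration.
import numpy as np

def apply_ray_drop(points, intensity, transparency, rng, w=(4.0, 2.0), b=-3.0):
    """Keep each point with probability from a logistic model of its features."""
    logits = b + w[0] * transparency - w[1] * intensity
    p_drop = 1.0 / (1.0 + np.exp(-logits))
    keep = rng.random(len(points)) >= p_drop
    return points[keep], intensity[keep]

rng = np.random.default_rng(2)
pts = rng.normal(size=(1000, 3))      # simulated hit points
inten = rng.uniform(size=1000)        # return intensities
transp = rng.uniform(size=1000)       # 1.0 = fully transparent surface hit
kept_pts, kept_inten = apply_ray_drop(pts, inten, transp, rng)
```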
- LGC-Net: A Lightweight Gyroscope Calibration Network for Efficient Attitude Estimation [10.468378902106613]
We present a calibration neural network model for denoising a low-cost microelectromechanical system (MEMS) gyroscope and estimating the attitude of a robot in real time.
The key idea is to extract local and global features from a time window of inertial measurement unit (IMU) readings and dynamically regress the compensation components for the gyroscope (see the sketch after this entry).
The proposed algorithm is evaluated on the EuRoC and TUM-VI datasets and achieves state-of-the-art results on the (unseen) test sequences with a more lightweight model structure.
arXiv Detail & Related papers (2022-09-19T08:03:03Z)
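A toy sketch of the local-plus-global idea from the LGC-Net entry above: convolutions extract local features from the IMU window, window-wide pooling provides a global summary, and a linear head regresses the 3-axis gyroscope compensation. Layer sizes, pooling choices, and the window length are assumptions; the paper's actual architecture differs.

```python
# Toy sketch of LGC-Net-style gyro calibration: local (convolutional) and
# global (pooled) features of an IMU window regress per-axis compensation.
# All layer sizes and the 200-sample window are assumptions.
import torch
import torch.nn as nn

class TinyGyroCalib(nn.Module):
    def __init__(self, channels=6):
        super().__init__()
        self.local = nn.Sequential(  # local features over short time spans
            nn.Conv1d(channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=7, padding=3), nn.ReLU(),
        )
        self.head = nn.Linear(32 * 2, 3)  # regress 3-axis gyro compensation

    def forward(self, imu):               # imu: (batch, 6, window)
        f = self.local(imu)
        g_mean = f.mean(dim=2)            # global summary of the whole window
        g_max = f.max(dim=2).values
        return self.head(torch.cat([g_mean, g_max], dim=1))

model = TinyGyroCalib()
window = torch.randn(4, 6, 200)  # 4 windows: channels 0-2 accel, 3-5 gyro
corrected = window[:, 3:6, -1] + model(window)  # latest gyro reading + compensation
```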
- LiDARCap: Long-range Marker-less 3D Human Motion Capture with LiDAR Point Clouds [58.402752909624716]
Existing motion capture datasets are largely short-range and cannot yet meet the needs of long-range applications.
We propose LiDARHuman26M, a new human motion capture dataset captured by LiDAR at a much longer range to overcome this limitation.
Our dataset also includes the ground truth human motions acquired by the IMU system and the synchronous RGB images.
arXiv Detail & Related papers (2022-03-28T12:52:45Z)
- DUT-LFSaliency: Versatile Dataset and Light Field-to-RGB Saliency Detection [104.50425501764806]
We introduce a large-scale dataset to enable versatile applications for light field saliency detection.
We present an asymmetrical two-stream model consisting of the Focal stream and RGB stream.
Experiments demonstrate that our Focal stream achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-12-30T11:53:27Z)
- LE-HGR: A Lightweight and Efficient RGB-based Online Gesture Recognition Network for Embedded AR Devices [8.509059894058947]
We propose a lightweight and computationally efficient HGR framework, namely LE-HGR, to enable real-time gesture recognition on embedded devices with low computing power.
We show that the proposed method achieves high accuracy and robustness, reaching strong performance in a variety of complex interaction environments.
arXiv Detail & Related papers (2020-01-16T05:23:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.