Instant 3D Object Tracking with Applications in Augmented Reality
- URL: http://arxiv.org/abs/2006.13194v1
- Date: Tue, 23 Jun 2020 17:48:29 GMT
- Title: Instant 3D Object Tracking with Applications in Augmented Reality
- Authors: Adel Ahmadyan, Tingbo Hou, Jianing Wei, Liangkai Zhang, Artsiom
Ablavatski, Matthias Grundmann
- Abstract summary: Tracking object poses in 3D is a crucial building block for Augmented Reality applications.
We propose an instant motion tracking system that tracks an object's pose in space in real-time on mobile devices.
- Score: 4.893345190925178
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tracking object poses in 3D is a crucial building block for Augmented Reality
applications. We propose an instant motion tracking system that tracks an
object's pose in space (represented by its 3D bounding box) in real-time on
mobile devices. Our system does not require any prior sensory calibration or
initialization to function. We employ a deep neural network to detect objects
and estimate their initial 3D pose. Then the estimated pose is tracked using a
robust planar tracker. Our tracker is capable of performing relative-scale
9-DoF tracking in real-time on mobile devices. By combining the use of CPU and GPU
efficiently, we achieve 26+ FPS performance on mobile devices.
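The abstract describes a two-stage design: a deep network detects the object and estimates its initial 3D pose, and a lightweight planar tracker then propagates that pose from frame to frame. The sketch below illustrates only that control flow; the class names, the `detect_3d_pose`/`update` methods, and the re-detection interval are assumptions for illustration, not the paper's actual components.

```python
# Minimal, hypothetical sketch of the detect-then-track loop described in the
# abstract. Class and method names (Pose9DoF, detect_3d_pose, update) and the
# re-detection interval are placeholders, not the authors' API.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Pose9DoF:
    rotation: tuple      # 3 DoF orientation of the 3D bounding box
    translation: tuple   # 3 DoF position
    scale: tuple         # 3 DoF relative scale (box extents up to a global scale)


class InstantObjectTracker:
    """Runs the heavy detector sparsely and a lightweight planar tracker per frame."""

    def __init__(self, detector, planar_tracker, redetect_every: int = 30):
        self.detector = detector              # deep network: initial 3D pose (GPU)
        self.planar_tracker = planar_tracker  # planar tracker: per-frame updates (CPU)
        self.redetect_every = redetect_every  # assumed re-initialization interval
        self.pose: Optional[Pose9DoF] = None
        self.frame_idx = 0

    def process(self, frame) -> Optional[Pose9DoF]:
        need_detection = self.pose is None or self.frame_idx % self.redetect_every == 0
        if need_detection:
            # (Re)initialize the 3D bounding-box pose with the neural detector.
            self.pose = self.detector.detect_3d_pose(frame)
        else:
            # Between detections, propagate the pose with the planar tracker,
            # which supplies the relative 9-DoF (rotation, translation, scale)
            # update for each new frame.
            self.pose = self.planar_tracker.update(frame, self.pose)
        self.frame_idx += 1
        return self.pose
```

Splitting the work this way is what would let the expensive network run only occasionally while the per-frame cost stays low, consistent with the reported 26+ FPS on mobile devices.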
Related papers
- Long-Term 3D Point Tracking By Cost Volume Fusion [2.3411633024711573]
We propose the first deep learning framework for long-term point tracking in 3D that generalizes to new points and videos without requiring test-time fine-tuning.
Our model integrates multiple past appearances and motion information via a transformer architecture, significantly enhancing overall tracking performance.
arXiv Detail & Related papers (2024-07-18T09:34:47Z)
- Delving into Motion-Aware Matching for Monocular 3D Object Tracking [81.68608983602581]
We find that the motion cue of objects along different time frames is critical in 3D multi-object tracking.
We propose MoMA-M3T, a framework that mainly consists of three motion-aware components.
We conduct extensive experiments on the nuScenes and KITTI datasets to demonstrate our MoMA-M3T achieves competitive performance against state-of-the-art methods.
arXiv Detail & Related papers (2023-08-22T17:53:58Z)
- A Lightweight and Detector-free 3D Single Object Tracker on Point Clouds [50.54083964183614]
It is non-trivial to perform accurate target-specific detection since the point cloud of objects in raw LiDAR scans is usually sparse and incomplete.
We propose DMT, a Detector-free Motion prediction based 3D Tracking network that totally removes the usage of complicated 3D detectors.
arXiv Detail & Related papers (2022-03-08T17:49:07Z)
- EagerMOT: 3D Multi-Object Tracking via Sensor Fusion [68.8204255655161]
Multi-object tracking (MOT) enables mobile robots to perform well-informed motion planning and navigation by localizing surrounding objects in 3D space and time.
Existing methods rely on depth sensors (e.g., LiDAR) to detect and track targets in 3D space, but only up to a limited sensing range due to the sparsity of the signal.
We propose EagerMOT, a simple tracking formulation that integrates all available object observations from both sensor modalities to obtain a well-informed interpretation of the scene dynamics.
arXiv Detail & Related papers (2021-04-29T22:30:29Z)
- Monocular Quasi-Dense 3D Object Tracking [99.51683944057191]
A reliable and accurate 3D tracking framework is essential for predicting future locations of surrounding objects and planning the observer's actions in numerous applications such as autonomous driving.
We propose a framework that can effectively associate moving objects over time and estimate their full 3D bounding box information from a sequence of 2D images captured on a moving platform.
arXiv Detail & Related papers (2021-03-12T15:30:02Z)
- Fast and Furious: Real Time End-to-End 3D Detection, Tracking and Motion Forecasting with a Single Convolutional Net [93.51773847125014]
We propose a novel deep neural network that is able to jointly reason about 3D detection, tracking and motion forecasting given data captured by a 3D sensor.
Our approach performs 3D convolutions across space and time over a bird's eye view representation of the 3D world (a tensor-level sketch of this idea appears after this list).
arXiv Detail & Related papers (2020-12-22T22:43:35Z)
- Kinematic 3D Object Detection in Monocular Video [123.7119180923524]
We propose a novel method for monocular video-based 3D object detection which carefully leverages kinematic motion to improve precision of 3D localization.
We achieve state-of-the-art performance on monocular 3D object detection and the Bird's Eye View tasks within the KITTI self-driving dataset.
arXiv Detail & Related papers (2020-07-19T01:15:12Z)
- RUHSNet: 3D Object Detection Using Lidar Data in Real Time [0.0]
We propose a novel neural network architecture for detecting 3D objects in point cloud data.
Our work surpasses the state of the art in this domain in both average precision and speed, running at over 30 FPS.
This makes it feasible to deploy in real-time applications, including self-driving cars.
arXiv Detail & Related papers (2020-05-09T09:41:46Z)
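As a side note on the "Fast and Furious" entry above, which describes 3D convolutions across space and time over a bird's-eye-view representation, the snippet below is a minimal, assumption-based illustration of what such a spatio-temporal convolution looks like at the tensor level. It uses PyTorch with made-up grid sizes and channel counts; it is not the paper's actual network.

```python
# Assumption-based illustration: a temporal stack of bird's-eye-view (BEV)
# occupancy grids convolved with a kernel that spans both time and space.
# Grid resolution, frame count, and channel sizes are invented for the example.
import torch
import torch.nn as nn

T, H, W = 5, 200, 200                       # frames, BEV rows, BEV columns
bev_sequence = torch.zeros(1, 1, T, H, W)   # (batch, channels, time, y, x)

# A 3x3x3 kernel fuses motion cues across neighboring frames while
# aggregating local spatial context within the BEV grid.
spatiotemporal_conv = nn.Conv3d(in_channels=1, out_channels=32,
                                kernel_size=3, padding=1)

features = spatiotemporal_conv(bev_sequence)
print(features.shape)                       # torch.Size([1, 32, 5, 200, 200])
```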
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.