Movement Tracking by Optical Flow Assisted Inertial Navigation
- URL: http://arxiv.org/abs/2006.13856v1
- Date: Wed, 24 Jun 2020 16:36:13 GMT
- Title: Movement Tracking by Optical Flow Assisted Inertial Navigation
- Authors: Lassi Meronen, William J. Wilkinson, Arno Solin
- Abstract summary: We show how a learning-based optical flow model can be combined with conventional inertial navigation.
We show how ideas from probabilistic deep learning can aid the robustness of the measurement updates.
The practical applicability is demonstrated on real-world data acquired by an iPad.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robust and accurate six degree-of-freedom tracking on portable devices
remains a challenging problem, especially on small hand-held devices such as
smartphones. For improved robustness and accuracy, complementary movement
information from an IMU and a camera is often fused. Conventional
visual-inertial methods fuse information from IMUs with a sparse cloud of
feature points tracked by the device camera. We consider a visually dense
approach, where the IMU data is fused with the dense optical flow field
estimated from the camera data. Learning-based methods applied to the full
image frames can leverage visual cues and global consistency of the flow field
to improve the flow estimates. We show how a learning-based optical flow model
can be combined with conventional inertial navigation, and how ideas from
probabilistic deep learning can aid the robustness of the measurement updates.
The practical applicability is demonstrated on real-world data acquired by an
iPad in a challenging low-texture environment.
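As a rough illustration of the fusion idea described in the abstract (not the authors' implementation), a Kalman-style measurement update can treat the dense optical flow as the measurement and let a learned per-component variance, such as the predictive uncertainty of a probabilistic flow network, inflate the measurement noise. All names, shapes, and the linearized measurement model here are hypothetical:

```python
import numpy as np

def flow_measurement_update(x, P, z_flow, H, flow_var):
    """Kalman measurement update where optical flow supplies the measurement
    and a learned per-component variance (e.g. from a probabilistic flow
    network) sets the heteroscedastic noise covariance R.

    x: (n,) state mean        P: (n, n) state covariance
    z_flow: (m,) stacked flow measurements
    H: (m, n) linearized measurement model
    flow_var: (m,) predictive variance per flow component
    """
    R = np.diag(flow_var)            # uncertain flow -> large noise -> small weight
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x + K @ (z_flow - H @ x) # correct the inertial prediction
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

With this weighting, flow components the network is confident about pull the inertial state estimate strongly, while high-variance components (e.g. in low-texture regions) are largely ignored, which is one way the probabilistic treatment can add robustness.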
Related papers
- Gaussian Splatting to Real World Flight Navigation Transfer with Liquid Networks [93.38375271826202]
We present a method to improve generalization and robustness to distribution shifts in sim-to-real visual quadrotor navigation tasks.
We first build a simulator by integrating Gaussian splatting with quadrotor flight dynamics, and then, train robust navigation policies using Liquid neural networks.
In this way, we obtain a full-stack imitation learning protocol that combines advances in 3D Gaussian splatting radiance field rendering, programming of expert demonstration training data, and the task understanding capabilities of Liquid networks.
arXiv Detail & Related papers (2024-06-21T13:48:37Z)
- GelFlow: Self-supervised Learning of Optical Flow for Vision-Based Tactile Sensor Displacement Measurement [23.63445828014235]
This study proposes a self-supervised optical flow method based on deep learning to achieve high accuracy in displacement measurement for vision-based tactile sensors.
We trained the proposed self-supervised network using an open-source dataset and compared it with traditional and deep learning-based optical flow methods.
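Self-supervised optical flow methods like the one summarized above are typically trained with a photometric reconstruction loss: warp the second frame back by the predicted flow and penalize the difference from the first frame, so no ground-truth flow is needed. A minimal sketch of that loss, with hypothetical names and nearest-neighbour sampling for brevity (real pipelines use differentiable bilinear sampling plus occlusion masking):

```python
import numpy as np

def photometric_loss(img1, img2, flow):
    """Self-supervised proxy loss for optical flow: sample img2 at the
    positions each img1 pixel is predicted to move to, then take the
    mean L1 error against img1.

    img1, img2: (H, W) grayscale frames; flow: (H, W, 2) as (dx, dy).
    """
    H, W = img1.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Displaced sampling coordinates, rounded and clipped to the image.
    xs2 = np.clip(np.rint(xs + flow[..., 0]).astype(int), 0, W - 1)
    ys2 = np.clip(np.rint(ys + flow[..., 1]).astype(int), 0, H - 1)
    warped = img2[ys2, xs2]          # img2 warped into img1's frame
    return np.abs(img1 - warped).mean()
```

A correct flow field reconstructs img1 from img2 almost exactly, so minimizing this loss over the network's predictions drives them toward the true motion without any labels.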
arXiv Detail & Related papers (2023-09-13T05:48:35Z)
- FEDORA: Flying Event Dataset fOr Reactive behAvior [9.470870778715689]
Event-based sensors have emerged as low-latency and low-energy alternatives to standard frame-based cameras for capturing high-speed motion.
We present Flying Event dataset fOr Reactive behAviour (FEDORA) - a fully synthetic dataset for perception tasks.
arXiv Detail & Related papers (2023-05-22T22:59:05Z)
- EMA-VIO: Deep Visual-Inertial Odometry with External Memory Attention [5.144653418944836]
Visual-inertial odometry (VIO) algorithms exploit information from camera and inertial sensors to estimate position and orientation.
Recent deep-learning-based VIO models have attracted attention as they provide pose information in a data-driven way.
We propose a novel learning-based VIO framework with external memory attention that effectively and efficiently combines visual and inertial features for state estimation.
arXiv Detail & Related papers (2022-09-18T07:05:36Z)
- Towards Scale-Aware, Robust, and Generalizable Unsupervised Monocular Depth Estimation by Integrating IMU Motion Dynamics [74.1720528573331]
Unsupervised monocular depth and ego-motion estimation has drawn extensive research attention in recent years.
We propose DynaDepth, a novel scale-aware framework that integrates information from vision and IMU motion dynamics.
We validate the effectiveness of DynaDepth by conducting extensive experiments and simulations on the KITTI and Make3D datasets.
arXiv Detail & Related papers (2022-07-11T07:50:22Z)
- Transformer Inertial Poser: Attention-based Real-time Human Motion Reconstruction from Sparse IMUs [79.72586714047199]
We propose an attention-based deep learning method to reconstruct full-body motion from six IMU sensors in real-time.
Our method achieves new state-of-the-art results both quantitatively and qualitatively, while being simple to implement and smaller in size.
arXiv Detail & Related papers (2022-03-29T16:24:52Z)
- Towards Scale Consistent Monocular Visual Odometry by Learning from the Virtual World [83.36195426897768]
We propose VRVO, a novel framework for retrieving the absolute scale from virtual data.
We first train a scale-aware disparity network using both monocular real images and stereo virtual data.
The resulting scale-consistent disparities are then integrated with a direct VO system.
arXiv Detail & Related papers (2022-03-11T01:51:54Z)
- Optical Flow Estimation from a Single Motion-blurred Image [66.2061278123057]
Motion blur in an image encodes motion information that is of practical interest for fundamental computer vision problems.
We propose a novel framework to estimate optical flow from a single motion-blurred image in an end-to-end manner.
arXiv Detail & Related papers (2021-03-04T12:45:18Z)
- FAITH: Fast Iterative Half-plane Focus of Expansion Estimation Using Event-based Optic Flow [3.326320568999945]
This study proposes the FAst ITerative Half-plane (FAITH) method to determine the course of a micro air vehicle (MAV).
Results show that the computational efficiency of our solution outperforms state-of-the-art methods while keeping a high level of accuracy.
arXiv Detail & Related papers (2021-02-25T12:49:02Z)
- Deep Learning based Pedestrian Inertial Navigation: Methods, Dataset and On-Device Inference [49.88536971774444]
Inertial measurement units (IMUs) are small, cheap, energy-efficient, and widely employed in smart devices and mobile robots.
Exploiting inertial data for accurate and reliable pedestrian navigation is a key component of emerging Internet-of-Things applications and services.
We present and release the Oxford Inertial Odometry dataset (OxIOD), a first-of-its-kind public dataset for deep learning based inertial navigation research.
arXiv Detail & Related papers (2020-01-13T04:41:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.