Amirkabir campus dataset: Real-world challenges and scenarios of Visual
Inertial Odometry (VIO) for visually impaired people
- URL: http://arxiv.org/abs/2401.03604v1
- Date: Sun, 7 Jan 2024 23:13:51 GMT
- Title: Amirkabir campus dataset: Real-world challenges and scenarios of Visual
Inertial Odometry (VIO) for visually impaired people
- Authors: Ali Samadzadeh, Mohammad Hassan Mojab, Heydar Soudani, Seyed
Hesamoddin Mireshghollah, Ahmad Nickabadi
- Abstract summary: We introduce the Amirkabir campus dataset (AUT-VI) to address this problem and improve navigation systems.
AUT-VI is a novel, highly challenging dataset with 126 diverse sequences in 17 different locations.
In support of ongoing development efforts, we have released the Android application for data capture to the public.
- Score: 3.7998592843098336
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Visual Inertial Odometry (VIO) algorithms estimate the accurate camera
trajectory by using camera and Inertial Measurement Unit (IMU) sensors. The
applications of VIO span a diverse range, including augmented reality and
indoor navigation. VIO algorithms hold the potential to facilitate navigation
for visually impaired individuals in both indoor and outdoor settings.
Nevertheless, state-of-the-art VIO algorithms encounter substantial challenges
in dynamic environments, particularly in densely populated corridors. Existing
VIO datasets, e.g., ADVIO, typically fail to capture these
challenges. In this paper, we introduce the Amirkabir campus dataset (AUT-VI)
to address this problem and improve navigation systems. AUT-VI is
a novel, highly challenging dataset with 126 diverse sequences in 17
different locations. This dataset contains dynamic objects, challenging
loop-closure/map-reuse, different lighting conditions, reflections, and sudden
camera movements, covering a wide range of extreme navigation scenarios. Moreover, in
support of ongoing development efforts, we have released the Android
application for data capture to the public. This allows fellow researchers to
easily capture their customized VIO dataset variations. In addition, we
evaluate state-of-the-art VIO and Visual Odometry
(VO) methods on our dataset, emphasizing the essential need for this
challenging dataset.
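To make the evaluation concrete, below is a minimal sketch of the Absolute Trajectory Error (ATE) metric commonly used to score VIO/VO estimates against ground truth. This illustrates the standard metric, not the paper's exact evaluation code; the rigid-alignment step, array shapes, and function name are assumptions.

```python
import numpy as np

def ate_rmse(gt, est):
    """RMSE of 3D position error after rigidly aligning est to gt.

    gt, est: (N, 3) arrays of time-synchronized camera positions.
    The best-fit rotation is solved in closed form (Kabsch/Umeyama,
    without scale, since VIO trajectories are metric).
    """
    gt_c = gt - gt.mean(axis=0)             # remove translation offset
    est_c = est - est.mean(axis=0)
    U, _, Vt = np.linalg.svd(est_c.T @ gt_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    err = gt_c - est_c @ R.T                # residual after alignment
    return float(np.sqrt((err ** 2).sum(axis=1).mean()))
```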
Related papers
- InCrowd-VI: A Realistic Visual-Inertial Dataset for Evaluating SLAM in Indoor Pedestrian-Rich Spaces for Human Navigation [2.184775414778289]
We introduce InCrowd-VI, a novel visual-inertial dataset specifically designed for human navigation in indoor pedestrian-rich environments.
InCrowd-VI features 58 sequences totaling 5 km of trajectory and 1.5 hours of recording time, including RGB, stereo images, and IMU measurements.
Ground-truth trajectories, accurate to approximately 2 cm, are provided, derived from the Meta Aria project's machine perception SLAM service.
arXiv Detail & Related papers (2024-11-21T17:58:07Z)
- DOZE: A Dataset for Open-Vocabulary Zero-Shot Object Navigation in Dynamic Environments [28.23284296418962]
Zero-Shot Object Navigation (ZSON) requires agents to autonomously locate and approach unseen objects in unfamiliar environments.
Existing datasets for developing ZSON algorithms lack consideration of dynamic obstacles, object diversity, and scene texts.
We propose a dataset for Open-Vocabulary Zero-Shot Object Navigation in Dynamic Environments (DOZE).
DOZE comprises ten high-fidelity 3D scenes with over 18k tasks, aiming to mimic complex, dynamic real-world scenarios.
arXiv Detail & Related papers (2024-02-29T10:03:57Z)
- RD-VIO: Robust Visual-Inertial Odometry for Mobile Augmented Reality in Dynamic Environments [55.864869961717424]
It is typically challenging for visual or visual-inertial odometry systems to handle the problems of dynamic scenes and pure rotation.
We design a novel visual-inertial odometry (VIO) system called RD-VIO to handle both of these problems.
arXiv Detail & Related papers (2023-10-23T16:30:39Z)
- Improving Underwater Visual Tracking With a Large Scale Dataset and Image Enhancement [70.2429155741593]
This paper presents a new dataset and a general tracker-enhancement method for Underwater Visual Object Tracking (UVOT).
UVOT poses distinct challenges: the underwater environment exhibits non-uniform lighting conditions, low visibility, lack of sharpness, low contrast, camouflage, and reflections from suspended particles.
We propose a novel underwater image enhancement algorithm designed specifically to boost tracking quality.
The method yields a significant performance improvement of up to 5.0% AUC for state-of-the-art (SOTA) visual trackers.
arXiv Detail & Related papers (2023-08-30T07:41:26Z)
- FLSea: Underwater Visual-Inertial and Stereo-Vision Forward-Looking Datasets [8.830479021890575]
We have collected underwater forward-looking stereo-vision and visual-inertial image sets in the Mediterranean and Red Sea.
These datasets are critical for the development of several underwater applications, including obstacle avoidance, visual odometry, 3D tracking, Simultaneous Localization and Mapping (SLAM), and depth estimation.
arXiv Detail & Related papers (2023-02-24T17:39:53Z)
- EMA-VIO: Deep Visual-Inertial Odometry with External Memory Attention [5.144653418944836]
Visual-inertial odometry (VIO) algorithms exploit the information from camera and inertial sensors to estimate position and orientation.
Recent deep learning based VIO models have attracted attention as they provide pose information in a data-driven way.
We propose a novel learning based VIO framework with external memory attention that effectively and efficiently combines visual and inertial features for state estimation.
arXiv Detail & Related papers (2022-09-18T07:05:36Z)
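As a rough illustration of the attention-based fusion idea in the EMA-VIO entry above, here is a minimal PyTorch sketch in which visual tokens attend over an inertial feature sequence. The feature dimensions, the single cross-attention layer, and the 6-DoF pose head are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Fuse visual and inertial features with cross-attention."""

    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, 6)  # 3-DoF translation + 3-DoF rotation

    def forward(self, visual_feat, imu_feat):
        # visual_feat: (B, Tv, dim) image tokens; imu_feat: (B, Ti, dim).
        # Visual tokens query the higher-rate inertial sequence.
        fused, _ = self.attn(visual_feat, imu_feat, imu_feat)
        return self.head(fused.mean(dim=1))  # relative pose estimate

# Random features stand in for the CNN / IMU encoders:
model = AttentionFusion()
pose = model(torch.randn(2, 8, 256), torch.randn(2, 50, 256))  # (2, 6)
```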
- VPAIR -- Aerial Visual Place Recognition and Localization in Large-scale Outdoor Environments [49.82314641876602]
We present a new dataset named VPAIR.
The dataset was recorded on board a light aircraft flying at an altitude of more than 300 meters above ground.
The dataset covers a trajectory more than one hundred kilometers long over various types of challenging landscapes.
arXiv Detail & Related papers (2022-05-23T18:50:08Z)
- Towards Scale Consistent Monocular Visual Odometry by Learning from the Virtual World [83.36195426897768]
We propose VRVO, a novel framework for retrieving the absolute scale from virtual data.
We first train a scale-aware disparity network using both monocular real images and stereo virtual data.
The resulting scale-consistent disparities are then integrated with a direct VO system.
arXiv Detail & Related papers (2022-03-11T01:51:54Z)
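The reason stereo virtual data carries absolute scale, which VRVO above exploits, is plain stereo geometry: with a known baseline and focal length, disparity determines metric depth. A one-function sketch with illustrative (assumed) camera parameters:

```python
def depth_from_disparity(disparity_px, focal_px=720.0, baseline_m=0.12):
    """Metric depth from stereo disparity: Z = f * B / d.

    focal_px and baseline_m are illustrative values; in a virtual
    stereo rig both are known exactly, which is what makes the
    resulting disparity supervision scale-aware.
    """
    return focal_px * baseline_m / disparity_px
```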
- A Flow Base Bi-path Network for Cross-scene Video Crowd Understanding in Aerial View [93.23947591795897]
In this paper, we strive to tackle the challenges and automatically understand the crowd from the visual data collected from drones.
To alleviate the background noise generated in cross-scene testing, a double-stream crowd counting model is proposed.
To tackle the crowd density estimation problem in extremely dark environments, we introduce synthetic data generated by the game Grand Theft Auto V (GTA V).
arXiv Detail & Related papers (2020-09-29T01:48:24Z)
- Deep Learning based Pedestrian Inertial Navigation: Methods, Dataset and On-Device Inference [49.88536971774444]
Inertial measurement units (IMUs) are small, cheap, energy efficient, and widely employed in smart devices and mobile robots.
Exploiting inertial data for accurate and reliable pedestrian navigation is a key component of emerging Internet-of-Things applications and services.
We present and release the Oxford Inertial Odometry dataset (OxIOD), a first-of-its-kind public dataset for deep learning based inertial navigation research.
arXiv Detail & Related papers (2020-01-13T04:41:54Z)
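To see why purely inertial pedestrian navigation is hard, and why learned models trained on datasets like OxIOD are attractive, here is a minimal sketch of naive strapdown dead reckoning. The fixed sample rate, first-order attitude update, and prior gravity removal are simplifying assumptions; any accelerometer bias is integrated twice, so position error grows roughly quadratically with time.

```python
import numpy as np

def dead_reckon(accel, gyro, dt=0.01):
    """Integrate body-frame accel and gyro, both (N, 3), into positions.

    Assumes gravity has already been subtracted from accel. Illustrates
    the drift that learning-based inertial odometry tries to suppress.
    """
    pos, vel, Rwb = np.zeros(3), np.zeros(3), np.eye(3)
    traj = []
    for a, w in zip(accel, gyro):
        wx, wy, wz = w * dt
        dR = np.array([[1.0, -wz,  wy],    # small-angle rotation update
                       [ wz, 1.0, -wx],
                       [-wy,  wx, 1.0]])
        Rwb = Rwb @ dR                      # propagate body-to-world attitude
        vel = vel + (Rwb @ a) * dt          # integrate acceleration once
        pos = pos + vel * dt                # and again to get position
        traj.append(pos.copy())
    return np.array(traj)
```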
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.