Tightly-coupled Visual-DVL-Inertial Odometry for Robot-based Ice-water
Boundary Exploration
- URL: http://arxiv.org/abs/2303.17005v2
- Date: Wed, 9 Aug 2023 18:00:34 GMT
- Title: Tightly-coupled Visual-DVL-Inertial Odometry for Robot-based Ice-water
Boundary Exploration
- Authors: Lin Zhao, Mingxi Zhou, Brice Loose
- Abstract summary: We present a multi-sensor fusion framework to increase localization accuracy.
Visual images, a Doppler Velocity Log (DVL), an Inertial Measurement Unit (IMU), and a pressure sensor are integrated.
The proposed method is validated with a data set collected in the field under frozen ice.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Robotic underwater systems, e.g., Autonomous Underwater Vehicles (AUVs) and
Remotely Operated Vehicles (ROVs), are promising tools for collecting
biogeochemical data at the ice-water interface for scientific advancements.
However, state estimation, i.e., localization, is a well-known problem for
robotic systems, especially those that travel underwater. In this paper, we
present a tightly-coupled multi-sensor fusion framework that increases
localization accuracy and is robust to sensor failure. Visual images, a
Doppler Velocity Log (DVL), an Inertial Measurement Unit (IMU), and a pressure
sensor are integrated into the state-of-the-art Multi-State Constraint Kalman
Filter (MSCKF) for state estimation. In addition, a new keyframe-based state
clone mechanism and a new DVL-aided feature enhancement are presented to
further improve the
localization performance. The proposed method is validated on a data set
collected in the field under frozen ice, and the result is compared with six
other sensor fusion setups. Overall, the configuration with keyframes enabled
and DVL-aided feature enhancement yields the best performance, with a
root-mean-square error of less than 2 m against the ground-truth path over a
total traveling distance of about 200 m.
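To make the fusion idea concrete, below is a minimal illustrative sketch of one ingredient the abstract mentions: a scalar Kalman-filter depth correction from a pressure sensor. The linear pressure-to-depth model, the constants, and all function names here are assumptions for illustration, not the paper's implementation, which integrates these measurements inside a full MSCKF.

```python
# Illustrative sketch: correcting a depth estimate with a pressure
# measurement via a scalar Kalman update. Constants and the simple
# hydrostatic model are assumed, not taken from the paper.

RHO = 1000.0  # water density, kg/m^3 (assumed fresh water)
G = 9.81      # gravitational acceleration, m/s^2

def pressure_to_depth(p_pa: float, p_atm: float = 101325.0) -> float:
    """Convert absolute pressure (Pa) to depth (m) with a hydrostatic model."""
    return (p_pa - p_atm) / (RHO * G)

def depth_update(z_est: float, p_var: float, p_meas_pa: float,
                 r_meas: float = 0.01) -> tuple[float, float]:
    """One Kalman measurement update of a scalar depth state.

    z_est: prior depth estimate (m); p_var: prior variance (m^2);
    p_meas_pa: measured absolute pressure (Pa); r_meas: measurement variance.
    Returns the corrected depth and its reduced variance.
    """
    z_meas = pressure_to_depth(p_meas_pa)
    k = p_var / (p_var + r_meas)          # Kalman gain
    z_new = z_est + k * (z_meas - z_est)  # innovation-weighted correction
    p_new = (1.0 - k) * p_var             # posterior variance shrinks
    return z_new, p_new
```

In the paper's setting this scalar update would be one row of the MSCKF measurement model; the sketch only shows how a low-noise pressure reading pulls the depth estimate toward the hydrostatic measurement.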
Related papers
- Improving Visual Place Recognition Based Robot Navigation Through Verification of Localization Estimates [14.354164363224529]
This research introduces a novel Multi-Layer Perceptron (MLP) integrity monitor for Visual Place Recognition (VPR).
It demonstrates improved performance and generalizability over the previous state-of-the-art SVM approach.
We test our proposed system in extensive real-world experiments, where we also present two real-time integrity-based VPR verification methods.
arXiv Detail & Related papers (2024-07-11T03:47:14Z)
- OptiState: State Estimation of Legged Robots using Gated Networks with Transformer-based Vision and Kalman Filtering [42.817893456964]
State estimation for legged robots is challenging due to their highly dynamic motion and limitations imposed by sensor accuracy.
We propose a hybrid solution that combines proprioception and exteroceptive information for estimating the state of the robot's trunk.
This framework not only furnishes accurate robot state estimates, but can minimize the nonlinear errors that arise from sensor measurements and model simplifications through learning.
arXiv Detail & Related papers (2024-01-30T03:34:25Z)
- Efficient Real-time Smoke Filtration with 3D LiDAR for Search and Rescue with Autonomous Heterogeneous Robotic Systems [56.838297900091426]
Smoke and dust affect the performance of any mobile robotic platform due to their reliance on onboard perception systems.
This paper proposes a novel modular computation filtration pipeline based on intensity and spatial information.
arXiv Detail & Related papers (2023-08-14T16:48:57Z)
- UnLoc: A Universal Localization Method for Autonomous Vehicles using LiDAR, Radar and/or Camera Input [51.150605800173366]
UnLoc is a novel unified neural modeling approach for localization with multi-sensor input in all weather conditions.
Our method is extensively evaluated on Oxford Radar RobotCar, ApolloSouthBay and Perth-WA datasets.
arXiv Detail & Related papers (2023-07-03T04:10:55Z)
- Ultra-low Power Deep Learning-based Monocular Relative Localization Onboard Nano-quadrotors [64.68349896377629]
This work presents a novel autonomous end-to-end system that addresses the monocular relative localization, through deep neural networks (DNNs), of two peer nano-drones.
To cope with the ultra-constrained nano-drone platform, we propose a vertically-integrated framework, including dataset augmentation, quantization, and system optimizations.
Experimental results show that our DNN can precisely localize a 10 cm target nano-drone at distances of up to 2 m, using only low-resolution monochrome images.
arXiv Detail & Related papers (2023-03-03T14:14:08Z)
- Visual-tactile sensing for Real-time liquid Volume Estimation in Grasping [58.50342759993186]
We propose a visuo-tactile model for real-time estimation of the liquid volume inside a deformable container.
We fuse two sensory modalities, i.e., the raw visual inputs from the RGB camera and the tactile cues from our specific tactile sensor.
The robotic system is well controlled and adjusted based on the estimation model in real time.
arXiv Detail & Related papers (2022-02-23T13:38:31Z)
- Distributed Variable-Baseline Stereo SLAM from two UAVs [17.513645771137178]
In this article, we employ two UAVs equipped with one monocular camera and one IMU each, to exploit their view overlap and relative distance measurements.
To control the UAV agents autonomously, we propose a decentralized collaborative estimation scheme.
We demonstrate the effectiveness of the approach at high altitude flights of up to 160m, going significantly beyond the capabilities of state-of-the-art VIO methods.
arXiv Detail & Related papers (2020-09-10T12:16:10Z)
- Transfer Learning for Motor Imagery Based Brain-Computer Interfaces: A Complete Pipeline [54.73337667795997]
Transfer learning (TL) has been widely used in motor imagery (MI) based brain-computer interfaces (BCIs) to reduce the calibration effort for a new subject.
This paper proposes that TL could be considered in all three components (spatial filtering, feature engineering, and classification) of MI-based BCIs.
arXiv Detail & Related papers (2020-07-03T23:44:21Z)
- MIMC-VINS: A Versatile and Resilient Multi-IMU Multi-Camera Visual-Inertial Navigation System [44.76768683036822]
We propose MIMC-VINS, a real-time, consistent multi-IMU multi-camera estimator for visual-inertial navigation systems.
Within an efficient multi-state constraint filter, the proposed MIMC-VINS algorithm optimally fuses asynchronous measurements from all sensors.
The proposed MIMC-VINS is validated in both Monte-Carlo simulations and real-world experiments.
arXiv Detail & Related papers (2020-06-28T20:16:08Z)
- PRGFlow: Benchmarking SWAP-Aware Unified Deep Visual Inertial Odometry [14.077054191270213]
We present a deep learning approach for visual translation estimation and loosely fuse it with an inertial sensor for full 6-DoF odometry estimation.
We evaluate our network on the MSCOCO dataset and evaluate the VI fusion on multiple real-flight trajectories.
arXiv Detail & Related papers (2020-06-11T19:12:54Z)
- ASFD: Automatic and Scalable Face Detector [129.82350993748258]
We propose a novel Automatic and Scalable Face Detector (ASFD).
ASFD is based on a combination of neural architecture search techniques as well as a new loss design.
Our ASFD-D6 outperforms the prior strong competitors, and our lightweight ASFD-D0 runs at more than 120 FPS with Mobilenet for VGA-resolution images.
arXiv Detail & Related papers (2020-03-25T06:00:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.