Deep Learning for Inertial Sensor Alignment
- URL: http://arxiv.org/abs/2212.11120v2
- Date: Wed, 10 Apr 2024 17:15:23 GMT
- Title: Deep Learning for Inertial Sensor Alignment
- Authors: Maxim Freydin, Niv Sfaradi, Nimrod Segol, Areej Eweida, Barak Or
- Abstract summary: We propose a data-driven approach to learn the yaw mounting angle of a smartphone equipped with an inertial measurement unit (IMU) and strapped to a car.
The proposed model uses only the accelerometer and gyroscope readings from an IMU as input.
The trained model is deployed on an Android device and evaluated in real-time to test the accuracy of the estimated yaw mounting angle.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate alignment of a fixed mobile device equipped with inertial sensors inside a moving vehicle is important for navigation, activity recognition, and other applications. Accurate estimation of the device mounting angle is required to rotate the inertial measurements from the sensor frame to the moving platform frame, standardizing measurements and improving the performance of the target task. In this work, a data-driven approach using deep neural networks (DNNs) is proposed to learn the yaw mounting angle of a smartphone equipped with an inertial measurement unit (IMU) and strapped to a car. The proposed model uses only the accelerometer and gyroscope readings from an IMU as input and, in contrast to existing solutions, does not require global position inputs from global navigation satellite systems (GNSS). To train the model in a supervised manner, IMU data is collected for training and validation with the sensor mounted at a known yaw mounting angle, and ground truth labels are generated by applying random rotations, drawn from a bounded range, to the measurements. The trained model is tested on data with real rotations and shows performance similar to that obtained with synthetic rotations. The trained model is deployed on an Android device and evaluated in real time to test the accuracy of the estimated yaw mounting angle. The model is shown to estimate the mounting angle to within 8 degrees after 5 seconds, and to within 4 degrees after 27 seconds. An experiment is conducted to compare the proposed model with an existing off-the-shelf solution.
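The label-generation step described in the abstract can be sketched as follows. This is a minimal illustration only: the function name, the ±45-degree bound, and the array shapes are assumptions (the paper states only that rotations are drawn from a bounded range). A random yaw rotation about the vertical axis is applied to accelerometer and gyroscope readings, and the applied angle serves as the supervised training label.

```python
import numpy as np

def random_yaw_rotation(acc, gyro, max_angle_deg=45.0, rng=None):
    """Rotate IMU readings in the horizontal plane by a random yaw angle.

    acc, gyro: arrays of shape (N, 3) with columns (x, y, z), assumed
    aligned with the platform frame. Returns the rotated readings and
    the applied angle in radians, which serves as the training label.
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = np.deg2rad(rng.uniform(-max_angle_deg, max_angle_deg))
    c, s = np.cos(theta), np.sin(theta)
    # Yaw rotation about the z (vertical) axis; z components are unchanged.
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return acc @ R.T, gyro @ R.T, theta
```

At inference time the learned angle would be used in the opposite direction: rotating sensor-frame measurements back into the platform frame with the inverse (transpose) of the same rotation matrix.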
Related papers
- Improved LiDAR Odometry and Mapping using Deep Semantic Segmentation and Novel Outliers Detection (arXiv, 2024-03-05)
  We propose a novel framework for real-time LiDAR odometry and mapping based on the LOAM architecture for fast-moving platforms.
  Our framework utilizes semantic information produced by a deep learning model to improve point-to-line and point-to-plane matching.
  We study the effect of improving the matching process on the robustness of LiDAR odometry against high-speed motion.
- Angle Robustness Unmanned Aerial Vehicle Navigation in GNSS-Denied Scenarios (arXiv, 2024-02-04)
  We present a novel angle navigation paradigm to deal with flight deviation in point-to-point navigation tasks.
  We also propose a model that includes the Adaptive Feature Enhance Module, Cross-knowledge Attention-guided Module, and Robust Task-oriented Head Module.
- DoorINet: A Deep-Learning Inertial Framework for Door-Mounted IoT Applications (arXiv, 2024-01-24)
  We propose DoorINet, an end-to-end deep-learning framework to calculate the heading angle from door-mounted, low-cost inertial sensors without using magnetometers.
  We record a unique dataset containing 391 minutes of accelerometer and gyroscope measurements and corresponding ground-truth heading angles.
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features (arXiv, 2023-09-21)
  We propose a method to adapt 3D object detectors to new driving environments.
  Our approach enhances LiDAR-based detection models using spatially quantized historical features.
  Experiments on real-world datasets demonstrate significant improvements.
- ARS-DETR: Aspect Ratio-Sensitive Detection Transformer for Aerial Oriented Object Detection (arXiv, 2023-03-09)
  Existing oriented object detection methods commonly use the metric AP$_{50}$ to measure model performance.
  We argue that AP$_{50}$ is inherently unsuitable for oriented object detection due to its large tolerance in angle deviation.
  We propose an Aspect Ratio-Sensitive Oriented Object Detector with Transformer, termed ARS-DETR, which exhibits competitive performance.
- Support Vector Machine for Determining Euler Angles in an Inertial Navigation System (arXiv, 2022-12-07)
  The paper discusses improving the accuracy of an inertial navigation system built on MEMS sensors using machine learning (ML) methods.
  The proposed ML-based algorithm has demonstrated its ability to classify correctly in the presence of noise typical of MEMS sensors.
- Learning Car Speed Using Inertial Sensors (arXiv, 2022-05-15)
  A deep neural network (DNN) is trained to estimate the speed of a car driving in an urban area.
  Three hours of data were collected by driving through the city of Ashdod, Israel, in a car equipped with a global navigation satellite system.
  The trained model is shown to substantially improve position accuracy during a 4-minute drive without the use of position updates.
- IDOL: Inertial Deep Orientation-Estimation and Localization (arXiv, 2021-02-08)
  Many smartphone applications use inertial measurement units (IMUs) to sense movement, but the use of these sensors for pedestrian localization can be challenging.
  Recent data-driven inertial odometry approaches have demonstrated the increasing feasibility of inertial navigation.
  We present a two-stage, data-driven pipeline using a commodity smartphone that first estimates device orientation and then estimates device position.
- Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor Setups (arXiv, 2021-01-12)
  We present a method to calibrate the parameters of any pair of sensors involving LiDARs and monocular or stereo cameras.
  The proposed approach can handle devices with very different resolutions and poses, as usually found in vehicle setups.
- Accurate Alignment Inspection System for Low-resolution Automotive and Mobility LiDAR (arXiv, 2020-08-24)
  An accurate inspection system is proposed for estimating LiDAR alignment error after sensor attachment on a mobility system such as a vehicle or robot.
  The proposed method uses only a single target board at a fixed position to estimate the three orientations (roll, tilt, and yaw) and the horizontal position of the LiDAR attachment with sub-degree and millimeter-level accuracy.
- Real-Time Point Cloud Fusion of Multi-LiDAR Infrastructure Sensor Setups with Unknown Spatial Location and Orientation (arXiv, 2020-07-28)
  We present an algorithm that is completely detached from external assistance and runs fully automatically.
  Our method focuses on the high-precision fusion of LiDAR point clouds.
  Experiments in simulation as well as with real measurements have shown that our algorithm performs continuous point cloud registration of up to four 64-layer LiDARs in real time.
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.