High Speed Rotation Estimation with Dynamic Vision Sensors
- URL: http://arxiv.org/abs/2209.02205v1
- Date: Tue, 6 Sep 2022 04:00:46 GMT
- Title: High Speed Rotation Estimation with Dynamic Vision Sensors
- Authors: Guangrong Zhao, Yiran Shen, Ning Chen, Pengfei Hu, Lei Liu, Hongkai
Wen
- Abstract summary: The Relative Mean Absolute Error (RMAE) of EV-Tach is as
low as 0.03%, which is comparable to the state-of-the-art laser tachometer
under its fixed measurement mode. EV-Tach is robust to subtle movement of the
user's hand and can therefore be used as a handheld device, where the laser
tachometer fails to produce reasonable results.
- Score: 10.394670846430635
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rotational speed is an important metric to measure when
calibrating electric motors in manufacturing, monitoring engines during car
repair, and detecting faults in electrical appliances. However, existing
measurement techniques either require prohibitively expensive hardware (e.g.,
a high-speed camera) or are inconvenient to use in real-world application
scenarios. In this paper, we propose EV-Tach, an event-based tachometer built
on efficient dynamic vision sensing on mobile devices. EV-Tach is designed as
a high-fidelity and convenient tachometer that introduces the dynamic vision
sensor as a new sensing modality to capture high-speed rotation precisely
under various real-world scenarios. Through a series of signal-processing
algorithms designed specifically for dynamic vision sensing on mobile devices,
EV-Tach accurately extracts the rotational speed from the event stream
produced by dynamic vision sensing of rotary targets. According to our
extensive evaluations, the Relative Mean Absolute Error (RMAE) of EV-Tach is
as low as 0.03%, which is comparable to a state-of-the-art laser tachometer
under its fixed measurement mode. Moreover, EV-Tach is robust to subtle
movements of the user's hand and can therefore be used as a handheld device,
where the laser tachometer fails to produce reasonable results.
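The abstract quotes a Relative Mean Absolute Error (RMAE) of 0.03% but does not
spell out either the metric or the signal-processing pipeline. The sketch below
is a minimal illustration under assumptions of our own, not EV-Tach's actual
algorithm: it assumes the rotating target produces one dense event burst per
revolution and recovers the rotation frequency from inter-burst intervals. The
functions `estimate_rotation_hz` and `rmae` and all their parameters are
hypothetical names introduced here for illustration.

```python
# Illustrative sketch only: the abstract does not describe EV-Tach's actual
# pipeline, so this shows one simple way a rotational frequency could be read
# out of a DVS event stream, together with the RMAE metric quoted above.
# All function names and parameters are hypothetical.
import numpy as np

def estimate_rotation_hz(event_timestamps_s, gap_factor=20.0):
    """Estimate rotation frequency (Hz) from DVS event timestamps (seconds).

    Assumes the rotating target carries one asymmetric feature (e.g. a blade
    tip or a marker), so events arrive as one dense burst per revolution.
    """
    t = np.sort(np.asarray(event_timestamps_s, dtype=float))
    gaps = np.diff(t)
    # Gaps between bursts are much larger than gaps within a burst; use a
    # simple multiple of the median gap as the split threshold.
    burst_breaks = gaps > gap_factor * np.median(gaps)
    burst_starts = t[1:][burst_breaks]           # first event of each new burst
    period_s = np.median(np.diff(burst_starts))  # one burst per revolution
    return 1.0 / period_s

def rmae(estimates, ground_truth):
    """Relative Mean Absolute Error: mean(|estimate - truth| / truth)."""
    est = np.asarray(estimates, dtype=float)
    gt = np.asarray(ground_truth, dtype=float)
    return float(np.mean(np.abs(est - gt) / gt))

# Toy check: a 3000 RPM (50 Hz) target observed for one second, producing one
# 0.2 ms burst of 40 events per revolution.
rng = np.random.default_rng(0)
true_hz = 50.0
burst_times = np.arange(0.0, 1.0, 1.0 / true_hz)
events = np.concatenate([b + rng.uniform(0.0, 2e-4, 40) for b in burst_times])
est_hz = estimate_rotation_hz(events)
print(f"estimated {est_hz:.2f} Hz, RMAE {rmae([est_hz], [true_hz]):.4%}")
```

A realistic system would additionally have to cope with multi-blade targets,
sensor noise, and handheld camera motion, which is what the bespoke algorithms
described in the paper are designed to address.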
Related papers
- Multi-Modal Neural Radiance Field for Monocular Dense SLAM with a
Light-Weight ToF Sensor [58.305341034419136]
We present the first dense SLAM system with a monocular camera and a light-weight ToF sensor.
We propose a multi-modal implicit scene representation that supports rendering both the signals from the RGB camera and light-weight ToF sensor.
Experiments demonstrate that our system well exploits the signals of light-weight ToF sensors and achieves competitive results.
arXiv Detail & Related papers (2023-08-28T07:56:13Z)
- Predicting Surface Texture in Steel Manufacturing at Speed [81.90215579427463]
Control of the surface texture of steel strip during the galvanizing and temper rolling processes is essential to satisfy customer requirements.
We propose the use of machine learning to improve the accuracy of the transformation from inline laser reflection measurements to a prediction of surface properties.
arXiv Detail & Related papers (2023-01-20T12:11:03Z)
- Rapid-Motion-Track: Markerless Tracking of Fast Human Motion with Deeper Learning [10.086410807283746]
Small deficits in movement are often the first sign of an underlying neurological problem.
We develop a new end-to-end, deep learning-based system, Rapid-Motion-Track (RMT).
RMT can track the fastest human movement accurately when webcams or laptop cameras are used.
arXiv Detail & Related papers (2023-01-18T22:57:34Z)
- Extrinsic Camera Calibration with Semantic Segmentation [60.330549990863624]
We present an extrinsic camera calibration approach that automatizes the parameter estimation by utilizing semantic segmentation information.
Our approach relies on a coarse initial measurement of the camera pose and builds on lidar sensors mounted on a vehicle.
We evaluate our method on simulated and real-world data to demonstrate low error measurements in the calibration results.
arXiv Detail & Related papers (2022-08-08T07:25:03Z)
- StreamYOLO: Real-time Object Detection for Streaming Perception [84.2559631820007]
We endow the models with the capacity of predicting the future, significantly improving the results for streaming perception.
We consider driving scenes at multiple velocities and propose Velocity-awared streaming AP (VsAP) to jointly evaluate accuracy.
Our simple method achieves the state-of-the-art performance on Argoverse-HD dataset and improves the sAP and VsAP by 4.7% and 8.2% respectively.
arXiv Detail & Related papers (2022-07-21T12:03:02Z)
- Towards Scale-Aware, Robust, and Generalizable Unsupervised Monocular Depth Estimation by Integrating IMU Motion Dynamics [74.1720528573331]
Unsupervised monocular depth and ego-motion estimation has drawn extensive research attention in recent years.
We propose DynaDepth, a novel scale-aware framework that integrates information from vision and IMU motion dynamics.
We validate the effectiveness of DynaDepth by conducting extensive experiments and simulations on the KITTI and Make3D datasets.
arXiv Detail & Related papers (2022-07-11T07:50:22Z)
- Visual-Inertial Odometry with Online Calibration of Velocity-Control Based Kinematic Motion Models [3.42658286826597]
Visual-inertial odometry (VIO) is an important technology for autonomous robots with power and payload constraints.
We propose a novel approach for VIO with stereo cameras which integrates and calibrates the velocity-control based kinematic motion model of wheeled mobile robots online.
arXiv Detail & Related papers (2022-04-14T06:21:12Z)
- Visual-tactile sensing for Real-time liquid Volume Estimation in Grasping [58.50342759993186]
We propose a visuo-tactile model for real-time estimation of the liquid volume inside a deformable container.
We fuse two sensory modalities, i.e., the raw visual inputs from the RGB camera and the tactile cues from our specific tactile sensor.
The robotic system is well controlled and adjusted based on the estimation model in real time.
arXiv Detail & Related papers (2022-02-23T13:38:31Z)
- DVIO: Depth aided visual inertial odometry for RGBD sensors [7.745106319694523]
This paper presents a new visual inertial odometry (VIO) system, which uses measurements from an RGBD sensor and an inertial measurement unit (IMU) to estimate the motion state of the mobile device.
The resulting system is called the depth-aided VIO (DVIO) system.
arXiv Detail & Related papers (2021-10-20T22:12:01Z)
- Unified Data Collection for Visual-Inertial Calibration via Deep Reinforcement Learning [24.999540933593273]
This work presents a novel formulation to learn a motion policy to be executed on a robot arm for automatic data collection.
Our approach models the calibration process compactly using model-free deep reinforcement learning.
In simulation we are able to perform calibrations 10 times faster than hand-crafted policies, which transfers to a real-world speed up of 3 times over a human expert.
arXiv Detail & Related papers (2021-09-30T10:03:56Z)
- MIMC-VINS: A Versatile and Resilient Multi-IMU Multi-Camera Visual-Inertial Navigation System [44.76768683036822]
We propose a real-time consistent multi-IMU multi-camera (MIMC)-VINS estimator for visual-inertial navigation systems.
Within an efficient multi-state constraint filter, the proposed MIMC-VINS algorithm optimally fuses asynchronous measurements from all sensors.
The proposed MIMC-VINS is validated in both Monte-Carlo simulations and real-world experiments.
arXiv Detail & Related papers (2020-06-28T20:16:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.