CarSpeedNet: A Deep Neural Network-based Car Speed Estimation from
Smartphone Accelerometer
- URL: http://arxiv.org/abs/2401.07468v1
- Date: Mon, 15 Jan 2024 04:51:34 GMT
- Title: CarSpeedNet: A Deep Neural Network-based Car Speed Estimation from
Smartphone Accelerometer
- Authors: Barak Or
- Abstract summary: CarSpeedNet is introduced to estimate car speed using three-axis accelerometer data from smartphones.
Our trained model demonstrates exceptional accuracy in car speed estimation, achieving a precision of less than 0.72[m/s] during an extended driving test.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this study, a novel deep neural network (DNN) architecture, CarSpeedNet,
is introduced to estimate car speed using three-axis accelerometer data from
smartphones. Utilizing 13 hours of data collected from smartphones mounted in
vehicles navigating through various regions in Israel, the CarSpeedNet
effectively learns the relationship between measured smartphone acceleration
and car speed. Ground truth speed data was obtained at 1[Hz] from the GPS
receiver in the smartphones. The proposed model enables high-frequency speed
estimation, incorporating historical inputs. Our trained model demonstrates
exceptional accuracy in car speed estimation, achieving a precision of less
than 0.72[m/s] during an extended driving test, solely relying on smartphone
accelerometer data without any connectivity to the car.
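The abstract specifies the input-output setup (a window of past three-axis accelerometer samples regressed to a single speed value, with 1[Hz] GPS speed as ground truth) but not the network itself. Below is a minimal PyTorch sketch of that setup; the window length, layer sizes, and training hyperparameters are illustrative assumptions, not the CarSpeedNet design reported in the paper.

```python
# Minimal sketch: regress a window of past three-axis accelerometer samples
# to a scalar speed. Window length, layer sizes, and optimizer settings are
# illustrative assumptions, not the CarSpeedNet architecture from the paper.
import torch
import torch.nn as nn

WINDOW = 200  # assumed number of past accelerometer samples per estimate

class SpeedRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 32, kernel_size=7, padding=3),  # 3 channels: x, y, z
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, 1)  # scalar speed in m/s

    def forward(self, acc: torch.Tensor) -> torch.Tensor:
        # acc: (batch, 3, window) accelerometer window
        z = self.features(acc).squeeze(-1)
        return self.head(z).squeeze(-1)

model = SpeedRegressor()
loss_fn = nn.MSELoss()  # regression against GPS speed labels
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on random stand-in data.
acc = torch.randn(8, 3, WINDOW)      # batch of accelerometer windows
gps_speed = torch.rand(8) * 30.0     # stand-in ground-truth speed [m/s]
loss = loss_fn(model(acc), gps_speed)
opt.zero_grad()
loss.backward()
opt.step()
```

One natural way to form training pairs, assumed here rather than stated in the abstract, is to end each accelerometer window at a GPS timestamp, giving one labeled example per second of driving while still allowing higher-frequency estimation at inference time.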
Related papers
- Multi-Source Urban Traffic Flow Forecasting with Drone and Loop Detector Data [61.9426776237409]
Drone-captured data can create an accurate multi-sensor mobility observatory for large-scale urban networks.
A simple yet effective graph-based model, HiMSNet, is proposed to integrate multiple data modalities and learn spatio-temporal correlations.
arXiv Detail & Related papers (2025-01-07T03:23:28Z)
- Detecting Car Speed using Object Detection and Depth Estimation: A Deep Learning Framework [0.0]
Speeding is usually controlled with checkpoints at various parts of the road, but not all traffic police are equipped with existing speed-estimation devices such as LIDAR-based or radar-based guns.
This project addresses vehicle speed estimation with handheld devices, such as mobile phones or wearable cameras with a network connection, using deep learning frameworks.
arXiv Detail & Related papers (2024-08-08T10:47:02Z)
- EgoSpeed-Net: Forecasting Speed-Control in Driver Behavior from Egocentric Video Data [24.32406053197066]
We propose a novel graph convolutional network (GCN)-based model, EgoSpeed-Net.
We are motivated by the fact that the position changes of objects over time provide useful clues for forecasting future speed changes.
We conduct extensive experiments on the Honda Research Institute Driving dataset and demonstrate the superior performance of EgoSpeed-Net.
arXiv Detail & Related papers (2022-09-27T15:25:57Z)
- Efficient Federated Learning with Spike Neural Networks for Traffic Sign Recognition [70.306089187104]
We introduce powerful Spike Neural Networks (SNNs) into traffic sign recognition for energy-efficient and fast model training.
Numerical results indicate that the proposed federated SNN outperforms traditional federated convolutional neural networks in terms of accuracy, noise immunity, and energy efficiency.
arXiv Detail & Related papers (2022-05-28T03:11:48Z)
- Learning Car Speed Using Inertial Sensors [0.0]
A deep neural network (DNN) is trained to estimate the speed of a car driving in an urban area.
Three hours of data was collected by driving through the city of Ashdod, Israel in a car equipped with a global navigation satellite system.
The trained model is shown to substantially improve the position accuracy during a four-minute drive without the use of position updates.
arXiv Detail & Related papers (2022-05-15T17:46:59Z)
- A Machine Learning Smartphone-based Sensing for Driver Behavior Classification [1.552282932199974]
We propose to collect data from sensors available in smartphones (Accelerometer, Gyroscope, GPS) in order to classify driver behavior using speed, acceleration, direction, and the 3-axis rotation angles (Yaw, Pitch, Roll).
Then, after fusing inter-axial data from multiple sensors into a single file, we explore different machine learning algorithms for time series classification to evaluate which one yields the highest performance.
arXiv Detail & Related papers (2022-02-01T10:12:36Z)
- Real Time Monocular Vehicle Velocity Estimation using Synthetic Data [78.85123603488664]
We look at the problem of estimating the velocity of road vehicles from a camera mounted on a moving car.
We propose a two-step approach where first an off-the-shelf tracker is used to extract vehicle bounding boxes and then a small neural network is used to regress the vehicle velocity.
arXiv Detail & Related papers (2021-09-16T13:10:27Z)
- Achieving Real-Time Object Detection on Mobile Devices with Neural Pruning Search [45.20331644857981]
We propose a compiler-aware neural pruning search framework to achieve high-speed inference on autonomous vehicles for 2D and 3D object detection.
For the first time, the proposed method achieves (close-to) real-time computation, with 55 ms and 99 ms inference times for YOLOv4-based 2D object detection and PointPillars-based 3D detection, respectively.
arXiv Detail & Related papers (2021-06-28T18:59:20Z)
- Efficient and Robust LiDAR-Based End-to-End Navigation [132.52661670308606]
We present an efficient and robust LiDAR-based end-to-end navigation framework.
We propose Fast-LiDARNet, which is based on sparse convolution kernel optimization and hardware-aware model design.
We then propose Hybrid Evidential Fusion that directly estimates the uncertainty of the prediction from only a single forward pass.
arXiv Detail & Related papers (2021-05-20T17:52:37Z)
- Driver2vec: Driver Identification from Automotive Data [44.84876493736275]
Driver2vec is able to accurately identify the driver from a short 10-second interval of sensor data.
Driver2vec is trained on a dataset of 51 drivers provided by Nervtech.
arXiv Detail & Related papers (2021-02-10T03:09:13Z)
- IntentNet: Learning to Predict Intention from Raw Sensor Data [86.74403297781039]
In this paper, we develop a one-stage detector and forecaster that exploits both 3D point clouds produced by a LiDAR sensor and dynamic maps of the environment.
Our multi-task model achieves better accuracy than the respective separate modules while saving computation, which is critical to reducing reaction time in self-driving applications.
arXiv Detail & Related papers (2021-01-20T00:31:52Z)
- LaNet: Real-time Lane Identification by Learning Road Surface Characteristics from Accelerometer Data [12.334058883768977]
We develop a deep LSTM neural network model, LaNet, that determines which lane a vehicle is driving in by periodically classifying accelerometer samples.
LaNet learns lane-specific sequences of road surface events (bumps, cracks, etc.) and yields 100% lane classification accuracy with 200 meters of driving data (a minimal sketch of this setup follows the list).
arXiv Detail & Related papers (2020-04-06T17:09:50Z)
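As a rough illustration of the LaNet-style setup in the last entry (an LSTM that periodically classifies fixed-length windows of accelerometer samples into lanes), here is a minimal PyTorch sketch; the number of lanes, window length, and hidden size are assumptions for illustration, not values from that paper.

```python
# Minimal sketch of LSTM-based lane classification over accelerometer windows,
# in the spirit of the LaNet entry above. Number of lanes, window length, and
# hidden size are illustrative assumptions, not values from that paper.
import torch
import torch.nn as nn

class LaneClassifier(nn.Module):
    def __init__(self, num_lanes: int = 3, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_lanes)

    def forward(self, acc: torch.Tensor) -> torch.Tensor:
        # acc: (batch, time, 3) window of three-axis accelerometer samples
        _, (h_n, _) = self.lstm(acc)
        return self.head(h_n[-1])  # lane logits from the final hidden state

model = LaneClassifier()
logits = model(torch.randn(4, 400, 3))  # 4 windows of 400 samples each
lane = logits.argmax(dim=1)             # predicted lane index per window
```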
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.