LaNet: Real-time Lane Identification by Learning Road Surface Characteristics
from Accelerometer Data
- URL: http://arxiv.org/abs/2004.02822v1
- Date: Mon, 6 Apr 2020 17:09:50 GMT
- Title: LaNet: Real-time Lane Identification by Learning Road Surface
Characteristics from Accelerometer Data
- Authors: Madhumitha Harishankar, Jun Han, Sai Vineeth Kalluru Srinivas, Faisal
Alqarni, Shi Su, Shijia Pan, Hae Young Noh, Pei Zhang, Marco Gruteser,
Patrick Tague
- Abstract summary: We develop a deep LSTM neural network model LaNet that determines the lane vehicles are on by periodically classifying accelerometer samples.
LaNet learns lane-specific sequences of road surface events (bumps, cracks etc.) and yields 100% lane classification accuracy with 200 meters of driving data.
- Score: 12.334058883768977
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The resolution of GPS measurements, especially in urban areas, is
insufficient for identifying a vehicle's lane. In this work, we develop a deep
LSTM neural network model LaNet that determines the lane vehicles are on by
periodically classifying accelerometer samples collected by vehicles as they
drive in real time. Our key finding is that even adjacent patches of road
surfaces contain characteristics that are sufficiently unique to differentiate
between lanes, i.e., roads inherently exhibit differing bumps, cracks,
potholes, and surface unevenness. Cars can capture this road surface
information as they drive using inexpensive, easy-to-install accelerometers
that increasingly come fitted in cars and can be accessed via the CAN-bus. We
collect an aggregate of 60 km of driving data and synthesize more from it to
capture factors such as variable driving speed, vehicle suspension, and
accelerometer noise. Our formulated LSTM-based deep learning model, LaNet,
learns lane-specific sequences of road surface events (bumps, cracks etc.) and
yields 100% lane classification accuracy with 200 meters of driving data,
achieving over 90% with just 100 m (corresponding to roughly one minute of
driving). We design the LaNet model to be practical for use in real-time lane
classification and show with extensive experiments that LaNet yields high
classification accuracy even on smooth roads, on large multi-lane roads, and on
drives with frequent lane changes. Since different road surfaces have different
inherent characteristics, or entropy, we probe our trained neural network model
and discover a mechanism to easily characterize the achievable classification
accuracy on a given road over various driving distances by training the model just
once. We present LaNet as a low-cost, easily deployable and highly accurate way
to achieve fine-grained lane identification.
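The abstract describes classifying fixed stretches of road (100 m or 200 m) from accelerometer samples collected while driving. A minimal sketch of the distance-based windowing this implies is below; the function name, the 100 Hz sampling rate, and the constant-speed example are illustrative assumptions, not details from the paper.

```python
# Sketch (hypothetical parameters): segment a vertical-acceleration stream
# into fixed-distance windows, the per-segment inputs a lane classifier
# such as LaNet would consume.

def segment_by_distance(samples, speeds_mps, rate_hz=100, window_m=100.0):
    """Group accelerometer samples into windows spanning window_m metres.

    samples: vertical-acceleration readings (one per tick at rate_hz)
    speeds_mps: vehicle speed at each tick, in metres per second
    Returns a list of windows, each a list of accelerometer samples.
    """
    windows, current, travelled = [], [], 0.0
    dt = 1.0 / rate_hz  # seconds per sample
    for accel, speed in zip(samples, speeds_mps):
        current.append(accel)
        travelled += speed * dt  # distance covered this tick
        if travelled >= window_m:
            windows.append(current)
            current, travelled = [], 0.0  # start the next window
    return windows

# Example: constant 15 m/s (~54 km/h), 21 s of synthetic samples at 100 Hz.
stream = [0.0] * 2100
speeds = [15.0] * 2100
wins = segment_by_distance(stream, speeds)
print(len(wins))  # 2100 samples x 0.01 s x 15 m/s = 315 m -> 3 full windows
```

Windowing by distance rather than by time keeps each classifier input aligned to the same physical stretch of road regardless of driving speed, which matches the paper's framing of accuracy as a function of metres driven.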
Related papers
- FastRLAP: A System for Learning High-Speed Driving via Deep RL and
Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human interventions, and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas which impede the robot's motion, approaching the performance of a human driver using a similar first-person interface over the course of training.
arXiv Detail & Related papers (2023-04-19T17:33:47Z)
- NVRadarNet: Real-Time Radar Obstacle and Free Space Detection for Autonomous
Driving [57.03126447713602]
We present a deep neural network (DNN) that detects dynamic obstacles and drivable free space using automotive RADAR sensors.
The network runs faster than real time on an embedded GPU and shows good generalization across geographic regions.
arXiv Detail & Related papers (2022-09-29T01:30:34Z)
- Lane-GNN: Integrating GNN for Predicting Drivers' Lane Change Intention [5.23886447414886]
We apply graph modelling to traffic flow data generated by the popular mobility simulator SUMO at the road-segment level.
We then evaluate the performance of lane changing detection using the proposed Lane-GNN scheme.
Our experimental results show that the proposed Lane-GNN can detect drivers' lane change intention within 90 seconds with an accuracy of 99.42%.
arXiv Detail & Related papers (2022-07-02T12:53:56Z)
- Road Network Guided Fine-Grained Urban Traffic Flow Inference [108.64631590347352]
Accurate inference of fine-grained traffic flow from coarse-grained one is an emerging yet crucial problem.
We propose a novel Road-Aware Traffic Flow Magnifier (RATFM) that exploits the prior knowledge of road networks.
Our method can generate high-quality fine-grained traffic flow maps.
arXiv Detail & Related papers (2021-09-29T07:51:49Z)
- Real Time Monocular Vehicle Velocity Estimation using Synthetic Data [78.85123603488664]
We look at the problem of estimating the velocity of road vehicles from a camera mounted on a moving car.
We propose a two-step approach where first an off-the-shelf tracker is used to extract vehicle bounding boxes and then a small neural network is used to regress the vehicle velocity.
arXiv Detail & Related papers (2021-09-16T13:10:27Z)
- METEOR: A Massive Dense & Heterogeneous Behavior Dataset for Autonomous
Driving [42.69638782267657]
We present a new and complex traffic dataset, METEOR, which captures traffic patterns in unstructured scenarios in India.
METEOR consists of more than 1000 one-minute video clips, over 2 million annotated frames with ego-vehicle trajectories, and more than 13 million bounding boxes for surrounding vehicles or traffic agents.
We use our novel dataset to evaluate the performance of object detection and behavior prediction algorithms.
arXiv Detail & Related papers (2021-09-16T01:01:55Z)
- DiGNet: Learning Scalable Self-Driving Policies for Generic Traffic Scenarios
with Graph Neural Networks [26.558394047144006]
We propose a graph-based deep network to achieve scalable self-driving that can handle massive traffic scenarios.
More than 7,000 km of evaluation is conducted in a high-fidelity driving simulator.
Our method can obey the traffic rules and safely navigate the vehicle in a large variety of urban, rural, and highway environments.
arXiv Detail & Related papers (2020-11-13T06:13:28Z)
- Universal Embeddings for Spatio-Temporal Tagging of Self-Driving Logs [72.67604044776662]
We tackle the problem of spatio-temporal tagging of self-driving scenes from raw sensor data.
Our approach learns a universal embedding for all tags, enabling efficient tagging of many attributes and faster learning of new attributes with limited data.
arXiv Detail & Related papers (2020-11-12T02:18:16Z)
- Lane detection in complex scenes based on end-to-end neural network [10.955885950313103]
Lane detection is a key problem in determining drivable areas for autonomous driving.
We propose an end-to-end network for lane detection in a variety of complex scenes.
Our network was tested on the CULane database and its F1-measure with IOU threshold of 0.5 can reach 71.9%.
arXiv Detail & Related papers (2020-10-26T08:46:35Z)
- CurveLane-NAS: Unifying Lane-Sensitive Architecture Search and Adaptive Point
Blending [102.98909328368481]
CurveLane-NAS is a novel lane-sensitive architecture search framework.
It captures both long-ranged coherent and accurate short-range curve information.
It unifies both architecture search and post-processing on curve lane predictions via point blending.
arXiv Detail & Related papers (2020-07-23T17:23:26Z)
- Where can I drive? A System Approach: Deep Ego Corridor Estimation for Robust
Automated Driving [2.378161932344701]
We propose to classify specifically a drivable corridor of the ego lane at the pixel level with a deep learning approach.
Our approach is kept computationally efficient with only 0.66 million parameters, allowing its application in large-scale products.
arXiv Detail & Related papers (2020-04-16T13:04:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.