Detection of E-scooter Riders in Naturalistic Scenes
- URL: http://arxiv.org/abs/2111.14060v1
- Date: Sun, 28 Nov 2021 05:59:36 GMT
- Title: Detection of E-scooter Riders in Naturalistic Scenes
- Authors: Kumar Apurv, Renran Tian, Rini Sherony
- Abstract summary: This paper presents a novel vision-based system to differentiate between e-scooter riders and regular pedestrians.
We fine-tune MobileNetV2 over our dataset and train the model to classify e-scooter riders and pedestrians.
The classification accuracy of trained MobileNetV2 on top of YOLOv3 is over 91%, with precision and recall over 0.9.
- Score: 2.1270496914042987
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: E-scooters have become ubiquitous vehicles in major cities around the
world. The number of e-scooters keeps escalating, increasing their interactions
with cars on the road. The normal behavior of an e-scooter rider differs
enormously from that of other vulnerable road users. This situation creates new
challenges for vehicle active safety systems and automated driving
functionalities, which require the detection of e-scooter riders as the first
step. To the best of our knowledge, there is no existing computer vision model to
detect these e-scooter riders. This paper presents a novel vision-based system
to differentiate between e-scooter riders and regular pedestrians and a
benchmark data set for e-scooter riders in natural scenes. We propose an
efficient pipeline built over two existing state-of-the-art convolutional
neural networks (CNN), You Only Look Once (YOLOv3) and MobileNetV2. We
fine-tune MobileNetV2 over our dataset and train the model to classify
e-scooter riders and pedestrians. With the whole pipeline, we obtain a recall of
around 0.75 for classifying e-scooter riders on our raw test sample. Moreover, the
classification accuracy of trained MobileNetV2 on top of YOLOv3 is over 91%,
with precision and recall over 0.9.
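The two-stage pipeline described in the abstract can be sketched as follows. This is a minimal illustration only: the detector and classifier below are hypothetical stubs standing in for YOLOv3 and the fine-tuned MobileNetV2 (the paper's trained networks are not reproduced here), so the end-to-end structure can run on a toy image.

```python
# Sketch of the two-stage pipeline: a detector (YOLOv3 in the paper)
# proposes person bounding boxes, and a classifier (the fine-tuned
# MobileNetV2) labels each crop as e-scooter rider or pedestrian.
# All functions here are illustrative stand-ins, not the authors' code.

def detect_persons(image):
    """Stub for YOLOv3: return (x, y, w, h) boxes for the 'person' class."""
    return [(0, 0, 4, 4), (5, 5, 4, 4)]  # fixed boxes for illustration

def crop(image, box):
    """Cut a box out of a 2D intensity grid."""
    x, y, w, h = box
    return [row[x:x + w] for row in image[y:y + h]]

def classify_crop(patch):
    """Stub for MobileNetV2: a toy mean-intensity threshold."""
    values = [v for row in patch for v in row]
    return "e-scooter rider" if sum(values) / len(values) > 0.5 else "pedestrian"

def pipeline(image):
    """Detect every person, then classify each detected crop."""
    return [(box, classify_crop(crop(image, box)))
            for box in detect_persons(image)]

def precision_recall(preds, truth, positive="e-scooter rider"):
    """Precision and recall for the positive (rider) class,
    as in the figures quoted in the abstract."""
    tp = sum(p == positive and t == positive for p, t in zip(preds, truth))
    fp = sum(p == positive and t != positive for p, t in zip(preds, truth))
    fn = sum(p != positive and t == positive for p, t in zip(preds, truth))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def make_demo_image():
    """10x10 grid with a bright patch where the first stub box sits."""
    image = [[0] * 10 for _ in range(10)]
    for y in range(4):
        for x in range(4):
            image[y][x] = 1
    return image
```

In the real system, `detect_persons` would run YOLOv3 and keep only "person" detections, and `classify_crop` would resize each crop to MobileNetV2's input size before inference; the cascade structure is the same.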
Related papers
- Performance Evaluation of Real-Time Object Detection for Electric Scooters [9.218359701264797]
Electric scooters (e-scooters) have rapidly emerged as a popular mode of transportation in urban areas, yet they pose significant safety challenges.
This paper assesses the effectiveness and efficiency of cutting-edge object detectors designed for e-scooters.
The detection accuracy, measured in terms of mAP@0.5, ranges from 27.4% (YOLOv7-E6E) to 86.8% (YOLOv5s).
arXiv Detail & Related papers (2024-05-05T20:00:22Z)
- Risk assessment and mitigation of e-scooter crashes with naturalistic driving data [2.862606936691229]
This paper presents a naturalistic driving study with a focus on e-scooter and vehicle encounters.
The goal is to quantitatively measure the behaviors of e-scooter riders in different encounters to help facilitate crash scenario modeling.
arXiv Detail & Related papers (2022-12-24T05:28:31Z)
- A Wearable Data Collection System for Studying Micro-Level E-Scooter Behavior in Naturalistic Road Environment [3.5466525046297264]
This paper proposes a wearable data collection system for investigating micro-level e-scooter motion behavior in a naturalistic road environment.
An e-scooter-based data acquisition system has been developed by integrating LiDAR, cameras, and GPS using the Robot Operating System (ROS).
arXiv Detail & Related papers (2022-12-22T18:58:54Z)
- E-Scooter Rider Detection and Classification in Dense Urban Environments [5.606792370296115]
This research introduces a novel benchmark for partially occluded e-scooter rider detection to facilitate the objective characterization of detection models.
A novel, occlusion-aware method of e-scooter rider detection is presented that achieves a 15.93% improvement in detection performance over the current state of the art.
arXiv Detail & Related papers (2022-05-20T13:50:36Z)
- COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z)
- CODA: A Real-World Road Corner Case Dataset for Object Detection in Autonomous Driving [117.87070488537334]
We introduce a challenging dataset named CODA that exposes this critical problem of vision-based detectors.
The performance of standard object detectors trained on large-scale autonomous driving datasets significantly drops to no more than 12.8% in mAR.
We experiment with the state-of-the-art open-world object detector and find that it also fails to reliably identify the novel objects in CODA.
arXiv Detail & Related papers (2022-03-15T08:32:56Z)
- Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z)
- Micromobility in Smart Cities: A Closer Look at Shared Dockless E-Scooters via Big Social Data [6.001713653976455]
Dockless electric scooters (e-scooters) have emerged as a daily alternative to driving for short-distance commuters in large cities.
E-scooters come with challenges in city management, such as traffic rules, public safety, parking regulations, and liability issues.
This paper is the first large-scale systematic study on shared e-scooters using big social data.
arXiv Detail & Related papers (2020-10-28T19:59:45Z)
- Learning Accurate and Human-Like Driving using Semantic Maps and Attention [152.48143666881418]
This paper investigates how end-to-end driving models can be improved to drive more accurately and human-like.
We exploit semantic and visual maps from HERE Technologies and augment the existing Drive360 dataset with them.
Our models are trained and evaluated on the Drive360 + HERE dataset, which features 60 hours and 3000 km of real-world driving data.
arXiv Detail & Related papers (2020-07-10T22:25:27Z)
- VehicleNet: Learning Robust Visual Representation for Vehicle Re-identification [116.1587709521173]
We propose to build a large-scale vehicle dataset (called VehicleNet) by harnessing four public vehicle datasets.
We design a simple yet effective two-stage progressive approach to learning more robust visual representation from VehicleNet.
We achieve state-of-the-art accuracy of 86.07% mAP on the private test set of the AICity Challenge.
arXiv Detail & Related papers (2020-04-14T05:06:38Z)
- Learning by Cheating [72.9701333689606]
We show that this challenging learning problem can be simplified by decomposing it into two stages.
We use the presented approach to train a vision-based autonomous driving system that substantially outperforms the state of the art.
Our approach achieves, for the first time, 100% success rate on all tasks in the original CARLA benchmark, sets a new record on the NoCrash benchmark, and reduces the frequency of infractions by an order of magnitude compared to the prior state of the art.
arXiv Detail & Related papers (2019-12-27T18:59:04Z)
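Several of the papers above report detection quality as mAP@0.5, i.e. mean average precision where a predicted box counts as a true positive only if its intersection-over-union (IoU) with a ground-truth box is at least 0.5. A minimal sketch of the underlying IoU computation (an illustration written for this summary, not code from any of the papers) looks like this:

```python
# IoU of two axis-aligned boxes, each given as (x1, y1, x2, y2) with
# (x1, y1) the top-left and (x2, y2) the bottom-right corner.
# At mAP@0.5, a detection matches a ground-truth box when iou >= 0.5.

def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (clamped to zero if the boxes are disjoint).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    # Union = sum of areas minus the double-counted intersection.
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1)
             - inter)
    return inter / union if union else 0.0
```

For example, two identical boxes give IoU 1.0, while a box shifted by half its width against an equal-sized box gives IoU 1/3, below the 0.5 matching threshold.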
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.