AutoExp: A multidisciplinary, multi-sensor framework to evaluate human
activities in self-driving cars
- URL: http://arxiv.org/abs/2306.03115v1
- Date: Mon, 5 Jun 2023 13:13:19 GMT
- Title: AutoExp: A multidisciplinary, multi-sensor framework to evaluate human
activities in self-driving cars
- Authors: Carlos Crispim-Junior, Romain Guesdon, Christophe Jallais, Florent
Laroche, Stephanie Souche-Le Corvec, Laure Tougne Rodet
- Abstract summary: This paper proposes an experimental framework to study the activities of occupants of self-driving cars.
The framework is composed of an experimentation scenario and a data acquisition module.
We seek first to capture data about vehicle usage in conditions as close as possible to the real world, and second to create a dataset of in-cabin human activities.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The adoption of self-driving cars will certainly revolutionize our lives,
even though they may take more time to become fully autonomous than initially
predicted. The first vehicles are already present in certain cities of the
world, as part of experimental robot-taxi services. However, most existing
studies focus on the navigation part of such vehicles. We currently lack
methods, datasets, and studies to assess the in-cabin human component of the
adoption of such technology in real-world conditions. This paper proposes an
experimental framework to study the activities of occupants of self-driving
cars, particularly non-driving-related activities, using a multidisciplinary
approach (computer vision combined with human and social sciences). The
framework is composed of an experimentation scenario and a data acquisition
module. We seek first to capture data about the usage of the vehicle in
conditions as close as possible to real-world use, and second to create a
dataset of in-cabin human activities to foster the development and
evaluation of computer vision algorithms. The acquisition module records
multiple views of the front seats of the vehicle (Intel RGB-D and GoPro
cameras), in addition to survey data about the internal states and attitudes of
participants towards this type of vehicle before, during, and after the
experiment. We evaluated the proposed framework by conducting a real-world
experiment with 30 participants (1 hour each) to study the acceptance of SAE
Level 4 self-driving cars (SDCs).
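
For illustration, a minimal sketch of the kind of synchronized multi-camera recording loop such an acquisition module needs is given below. It assumes OpenCV-accessible video devices; the device indices, resolution, and session length are placeholder assumptions, and the actual module drives the Intel RealSense and GoPro hardware through their own SDKs, which are not reproduced here.

```python
# Hypothetical sketch: record several camera views with a shared timestamp
# log so the streams can be aligned offline. Device indices, resolution,
# frame rate, and duration are assumptions, not the paper's configuration.
import csv
import time

import cv2

CAMERA_IDS = [0, 1]  # placeholder device indices for the front-seat views

captures = [cv2.VideoCapture(i) for i in CAMERA_IDS]
writers = [
    cv2.VideoWriter(f"cam{i}.avi", cv2.VideoWriter_fourcc(*"MJPG"),
                    30.0, (640, 480))
    for i in CAMERA_IDS
]

with open("timestamps.csv", "w", newline="") as log:
    log_writer = csv.writer(log)
    log_writer.writerow(["time_s"] + [f"cam{i}_ok" for i in CAMERA_IDS])
    start = time.time()
    while time.time() - start < 10.0:  # short 10 s demo session
        row = [f"{time.time() - start:.3f}"]
        for cap, out in zip(captures, writers):
            ok, frame = cap.read()
            if ok:
                out.write(cv2.resize(frame, (640, 480)))
            row.append(int(ok))  # 1 if this camera delivered a frame
        log_writer.writerow(row)

for cap in captures:
    cap.release()
for out in writers:
    out.release()
```

A shared wall-clock log is the simplest way to align streams recorded by heterogeneous devices; hardware-level synchronization, and the survey-data collection, are out of scope for this sketch.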
Related papers
- Pedestrian motion prediction evaluation for urban autonomous driving [0.0]
We analyze selected publications that provide open-source solutions to determine the value of traditional motion prediction metrics.
This perspective should be valuable to any autonomous driving or robotics engineer interested in the real-world performance of existing state-of-the-art pedestrian motion prediction methods.
arXiv Detail & Related papers (2024-10-22T10:06:50Z)
- Open-sourced Data Ecosystem in Autonomous Driving: the Present and Future [130.87142103774752]
This review systematically assesses over seventy open-source autonomous driving datasets.
It offers insights into various aspects, such as the principles underlying the creation of high-quality datasets.
It also delves into the scientific and technical challenges that warrant resolution.
arXiv Detail & Related papers (2023-12-06T10:46:53Z)
- Analyze Drivers' Intervention Behavior During Autonomous Driving -- A VR-incorporated Approach [2.7532019227694344]
This work sheds light on human drivers' intervention behavior during the operation of autonomous vehicles.
Experiment environments were implemented in which virtual reality (VR) and traffic micro-simulation are integrated.
Performance indicators such as the probability of intervention and accident rates are defined and used to quantify and compare risk levels.
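As a toy illustration of these two indicators, a computation over made-up trial outcomes might look as follows; the data and names are hypothetical, not taken from the paper.

```python
# Hypothetical trial log: (driver_intervened, accident_occurred) per VR trial.
trials = [
    (True, False), (False, False), (True, True), (False, False), (True, False),
]

# Probability of intervention: fraction of trials with a driver takeover.
p_intervention = sum(intervened for intervened, _ in trials) / len(trials)
# Accident rate: fraction of trials ending in an accident.
accident_rate = sum(accident for _, accident in trials) / len(trials)

print(f"P(intervention) = {p_intervention:.2f}")  # 0.60
print(f"Accident rate   = {accident_rate:.2f}")  # 0.20
```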
arXiv Detail & Related papers (2023-12-04T06:36:57Z)
- Driving into the Future: Multiview Visual Forecasting and Planning with World Model for Autonomous Driving [56.381918362410175]
Drive-WM is the first driving world model compatible with existing end-to-end planning models.
Our model generates high-fidelity multiview videos in driving scenes.
arXiv Detail & Related papers (2023-11-29T18:59:47Z)
- Applications of Computer Vision in Autonomous Vehicles: Methods, Challenges and Future Directions [2.693342141713236]
This paper reviews publications on computer vision and autonomous driving published during the last ten years.
In particular, we first investigate the development of autonomous driving systems and summarize these systems that are developed by the major automotive manufacturers from different countries.
Then, computer vision applications for autonomous driving, such as depth estimation, object detection, lane detection, and traffic sign recognition, are comprehensively discussed.
arXiv Detail & Related papers (2023-11-15T16:41:18Z)
- Policy Pre-training for End-to-end Autonomous Driving via Self-supervised Geometric Modeling [96.31941517446859]
We propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward, fully self-supervised framework curated for policy pretraining in visuomotor driving.
We aim at learning policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos.
In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input.
In the second stage, the visual encoder learns driving policy representation by predicting the future ego-motion and optimizing with the photometric error based on current visual observation only.
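A minimal sketch of the photometric objective behind both stages is given below, assuming the differentiable warp of the neighbouring frame into the target view (produced from the depth and ego-motion predictions) is already available as `warped`; that warp, and the networks themselves, are not reproduced here.

```python
# Sketch of a self-supervised photometric objective (assumption: `warped` is
# the neighbouring frame re-projected into the target view using predicted
# depth and ego-motion; PPGeo's full pipeline is more involved than this).
import torch
import torch.nn.functional as F

def photometric_loss(target: torch.Tensor, warped: torch.Tensor) -> torch.Tensor:
    """L1 discrepancy between the target frame and its reconstruction;
    gradients flow back into whatever produced the reconstruction
    (depth/pose networks in stage 1, the visual encoder in stage 2)."""
    return F.l1_loss(warped, target)

# Toy usage with random tensors standing in for image batches (B, C, H, W).
target = torch.rand(2, 3, 64, 64)
warped = torch.rand(2, 3, 64, 64, requires_grad=True)
photometric_loss(target, warped).backward()
```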
arXiv Detail & Related papers (2023-01-03T08:52:49Z)
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning Planner that trains a neural network to predict acceleration and steering angle.
To deploy the system on board a real self-driving car, we also develop a module implemented as a tiny neural network.
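For illustration only, a policy head of the general kind described might be sketched as follows; the layer sizes, input dimensionality, and output bounding are assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of a small policy network mapping a state vector to
# two bounded commands: acceleration and steering angle.
import torch
import torch.nn as nn

class TinyPlanner(nn.Module):
    def __init__(self, state_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 2),  # [acceleration, steering angle]
            nn.Tanh(),         # bound both commands to [-1, 1]
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

planner = TinyPlanner()
accel, steer = planner(torch.rand(16)).unbind(-1)
```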
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
- COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z)
- Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z)
- The NEOLIX Open Dataset for Autonomous Driving [1.4091801425319965]
We present the NEOLIX dataset and its applications in the autonomous driving area.
Our dataset includes about 30,000 frames with point cloud labels, and more than 600k 3D bounding boxes with annotations.
arXiv Detail & Related papers (2020-11-27T02:27:39Z)
- Autonomous Driving with Deep Learning: A Survey of State-of-Art Technologies [12.775642557933908]
This is a survey of autonomous driving technologies with deep learning methods.
We investigate the major fields of self-driving systems, such as perception, mapping and localization, prediction, planning and control, simulation, V2X, and safety.
arXiv Detail & Related papers (2020-06-10T22:21:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.