Robust Perception Architecture Design for Automotive Cyber-Physical
  Systems
 - URL: http://arxiv.org/abs/2205.08067v1
 - Date: Tue, 17 May 2022 03:02:07 GMT
 - Title: Robust Perception Architecture Design for Automotive Cyber-Physical
  Systems
 - Authors: Joydeep Dey, Sudeep Pasricha
 - Abstract summary: PASTA is a framework for global co-optimization of deep learning and sensing for dependable vehicle perception.
We show how PASTA can find robust, vehicle-specific perception architecture solutions.
 - License: http://creativecommons.org/licenses/by-nc-nd/4.0/
 - Abstract:   In emerging automotive cyber-physical systems (CPS), accurate environmental
perception is critical to achieving safety and performance goals. Enabling
robust perception for vehicles requires solving multiple complex problems
related to sensor selection/placement, object detection, and sensor fusion.
Current methods address these problems in isolation, which leads to inefficient
solutions. We present PASTA, a novel framework for global co-optimization of
deep learning and sensing for dependable vehicle perception. Experimental
results with the Audi-TT and BMW-Minicooper vehicles show how PASTA can find
robust, vehicle-specific perception architecture solutions.
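A minimal sketch of what such global co-optimization could look like is given below, assuming a small joint design space (sensor suite, placement, detector backbone, fusion scheme) and a single surrogate objective; the names and the scoring model are illustrative assumptions, not PASTA's actual search procedure.

```python
import itertools
import random

# Hypothetical joint design space: sensor suite, mounting placement, detector
# backbone, and fusion scheme are searched together rather than in isolation,
# which is the co-optimization idea described in the abstract.
SENSOR_SUITES = [("camera",), ("camera", "lidar"), ("camera", "radar"),
                 ("camera", "lidar", "radar")]
PLACEMENTS = ["roof_center", "front_bumper", "roof_corners"]
BACKBONES = ["resnet18", "resnet50", "efficientnet_b0"]
FUSION = ["early", "late", "feature"]

def evaluate_perception(config, rng):
    """Placeholder fitness: a real framework would run vehicle-specific
    detection experiments and return a robustness metric (e.g. mAP)."""
    sensors, placement, backbone, fusion = config
    score = 0.1 * len(sensors) + rng.random()               # stand-in for accuracy/robustness
    cost = 0.05 * len(sensors) + (0.1 if backbone == "resnet50" else 0.0)
    return score - cost                                      # single scalar objective

def co_optimize(iterations=200, seed=0):
    """Random search over the joint space; returns the best configuration found."""
    rng = random.Random(seed)
    space = list(itertools.product(SENSOR_SUITES, PLACEMENTS, BACKBONES, FUSION))
    best, best_score = None, float("-inf")
    for _ in range(iterations):
        config = rng.choice(space)
        score = evaluate_perception(config, rng)
        if score > best_score:
            best, best_score = config, score
    return best, best_score

if __name__ == "__main__":
    print(co_optimize())
```

The point of the sketch is only that sensing and detection choices are scored jointly; the actual optimizer and evaluation pipeline in PASTA may differ substantially.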
 
       
      
Related papers
        - Research Challenges and Progress in the End-to-End V2X Cooperative   Autonomous Driving Competition [57.698383942708]
Vehicle-to-everything (V2X) communication has emerged as a key enabler for extending perception range and enhancing driving safety.
We organized the End-to-End Autonomous Driving through V2X Cooperation Challenge, which features two tracks: cooperative temporal perception and cooperative end-to-end planning.
This paper describes the design and outcomes of the challenge and highlights key research problems, including bandwidth-aware fusion, robust multi-agent planning, and heterogeneous sensor integration.
arXiv  Detail & Related papers  (2025-07-29T09:06:40Z) - VALISENS: A Validated Innovative Multi-Sensor System for Cooperative   Automated Driving [0.9527960631238174]
This paper presents VALISENS, an innovative multi-sensor system distributed across multiple agents.
It integrates onboard and roadside LiDARs, radars, thermal cameras, and RGB cameras to enhance situational awareness and support cooperative automated driving.
The proposed system demonstrates the potential of cooperative perception in real-world test environments.
arXiv  Detail & Related papers  (2025-05-11T13:41:37Z) - MSC-Bench: Benchmarking and Analyzing Multi-Sensor Corruption for   Driving Perception [9.575044300747061]
Multi-sensor fusion models play a crucial role in autonomous driving perception, particularly in tasks like 3D object detection and HD map construction.
These models provide essential and comprehensive static environmental information for autonomous driving systems.
While camera-LiDAR fusion methods have shown promising results, they often depend on complete sensor inputs.
This reliance can lead to low robustness and potential failures when sensors are corrupted or missing, raising significant safety concerns.
To tackle this challenge, we introduce the Multi-Sensor Corruption Benchmark (MSC-Bench), the first comprehensive benchmark aimed at evaluating the robustness of multi-sensor autonomous driving perception models against various sensor corruptions.
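As a rough illustration of the kind of robustness evaluation such a benchmark implies (not MSC-Bench's actual corruption taxonomy or protocol), the sketch below corrupts or drops one modality of a toy camera-LiDAR sample and reports the score gap against clean inputs; all names and the dummy model are hypothetical.

```python
import numpy as np

def drop_modality(sample, key):
    """Simulate a missing sensor by zeroing one modality of the sample."""
    corrupted = dict(sample)
    corrupted[key] = np.zeros_like(sample[key])
    return corrupted

def add_gaussian_noise(sample, key, sigma=0.1):
    """Simulate sensor corruption by adding Gaussian noise to one modality."""
    corrupted = dict(sample)
    corrupted[key] = sample[key] + np.random.normal(0.0, sigma, sample[key].shape)
    return corrupted

def robustness_gap(model, dataset, corrupt_fn, metric):
    """Score gap between clean and corrupted inputs; a larger gap means less robust."""
    clean = np.mean([metric(model(s), s["label"]) for s in dataset])
    corrupted = np.mean([metric(model(corrupt_fn(s)), s["label"]) for s in dataset])
    return float(clean - corrupted)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dataset = [{"camera": rng.normal(size=4), "lidar": rng.normal(size=4), "label": 1.0}
               for _ in range(8)]
    fusion_model = lambda s: float(np.tanh(s["camera"].sum() + s["lidar"].sum()))
    neg_abs_error = lambda pred, label: -abs(pred - label)
    print(robustness_gap(fusion_model, dataset,
                         lambda s: drop_modality(s, "lidar"), neg_abs_error))
```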
arXiv  Detail & Related papers  (2025-01-02T03:38:46Z) - Exploring the Interplay Between Video Generation and World Models in   Autonomous Driving: A Survey [61.39993881402787]
World models and video generation are pivotal technologies in the domain of autonomous driving.
This paper investigates the relationship between these two technologies.
By analyzing the interplay between video generation and world models, this survey identifies critical challenges and future research directions.
arXiv  Detail & Related papers  (2024-11-05T08:58:35Z) - Cooperative Visual-LiDAR Extrinsic Calibration Technology for   Intersection Vehicle-Infrastructure: A review [19.77659610529281]
In the typical urban intersection scenario, both vehicles and infrastructures are equipped with visual and LiDAR sensors.
This paper examines and analyzes the calibration of multi-end camera-LiDAR setups from vehicle, roadside, and vehicle-road cooperation perspectives.
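While the calibration methods themselves are beyond this summary, the result of camera-LiDAR extrinsic calibration is typically a rigid transform used to project LiDAR points into the image plane. The sketch below shows that projection step, assuming a 4x4 extrinsic matrix T_cam_lidar and a 3x3 intrinsic matrix K with illustrative values only.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project 3D LiDAR points into the image plane given a 4x4 extrinsic
    transform (LiDAR -> camera) and a 3x3 camera intrinsic matrix.
    Points behind the camera are discarded."""
    n = points_lidar.shape[0]
    homogeneous = np.hstack([points_lidar, np.ones((n, 1))])   # (n, 4)
    points_cam = (T_cam_lidar @ homogeneous.T).T[:, :3]        # (n, 3) in camera frame
    points_cam = points_cam[points_cam[:, 2] > 0]              # keep points in front
    pixels = (K @ points_cam.T).T
    return pixels[:, :2] / pixels[:, 2:3]                      # perspective divide

if __name__ == "__main__":
    K = np.array([[800.0, 0.0, 640.0],
                  [0.0, 800.0, 360.0],
                  [0.0, 0.0, 1.0]])                            # illustrative intrinsics
    T_cam_lidar = np.eye(4)
    T_cam_lidar[:3, 3] = [0.0, -0.1, 0.2]                      # illustrative extrinsics
    points = np.array([[2.0, 0.5, 10.0], [-1.0, 0.0, 5.0]])
    print(project_lidar_to_image(points, T_cam_lidar, K))
```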
arXiv  Detail & Related papers  (2024-05-16T14:29:56Z) - The RoboDrive Challenge: Drive Anytime Anywhere in Any Condition [136.32656319458158]
The 2024 RoboDrive Challenge was crafted to propel the development of driving perception technologies.
This year's challenge consisted of five distinct tracks and attracted 140 registered teams from 93 institutes across 11 countries.
The competition culminated in 15 top-performing solutions.
arXiv  Detail & Related papers  (2024-05-14T17:59:57Z) - Object Detectors in the Open Environment: Challenges, Solutions, and   Outlook [95.3317059617271]
The dynamic and intricate nature of the open environment poses novel and formidable challenges to object detectors.
This paper aims to conduct a comprehensive review and analysis of object detectors in open environments.
We propose a framework with four quadrants (i.e., out-of-domain, out-of-category, robust learning, and incremental learning) based on the dimensions of data/target changes.
arXiv  Detail & Related papers  (2024-03-24T19:32:39Z) - Framework for Quality Evaluation of Smart Roadside Infrastructure
  Sensors for Automated Driving Applications [2.0502751783060003]
We present a novel approach to perform detailed quality assessment for smart roadside infrastructure sensors.
Our framework is multimodal across different sensor types and is evaluated on the DAIR-V2X dataset.
arXiv  Detail & Related papers  (2023-04-16T10:21:07Z) - Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts,
  Datasets and Metrics [77.34726150561087]
This work studies the current landscape of camera- and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of deep learning-based detection and segmentation tasks, along with the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv  Detail & Related papers  (2023-03-08T00:48:32Z) - Vehicle-road Cooperative Simulation and 3D Visualization System [0.0]
Vehicle-road collaboration technology can overcome the limits of single-vehicle perception and improve traffic safety and efficiency.
It requires rigorous testing and verification methods to ensure the reliability and trustworthiness of the technology.
arXiv  Detail & Related papers  (2022-07-14T04:53:54Z) - Robustness Enhancement of Object Detection in Advanced Driver Assistance
  Systems (ADAS) [0.0]
The proposed system includes two main components: (1) a compact one-stage object detector expected to perform with accuracy comparable to state-of-the-art detectors, and (2) an environmental condition detector that sends a warning signal to the cloud when the self-driving car requires human action due to the severity of the situation.
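A minimal sketch of such a two-component pipeline is shown below; the detector and condition-detector bodies are placeholders, and the severity threshold is an assumed parameter rather than anything specified in the paper.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    label: str
    confidence: float
    box: Tuple[int, int, int, int]

def compact_detector(frame) -> List[Detection]:
    """Stand-in for the compact one-stage detector; a real system would run a
    lightweight detection network here."""
    return [Detection("vehicle", 0.9, (100, 120, 180, 200))]

def environmental_condition_detector(frame) -> float:
    """Stand-in severity score in [0, 1] for adverse conditions (rain, glare, ...);
    values near 1.0 mean human action is likely needed."""
    return 0.2

def process_frame(frame, severity_threshold=0.8):
    """Run both components and decide whether to warn the cloud backend."""
    detections = compact_detector(frame)
    severity = environmental_condition_detector(frame)
    warn_cloud = severity >= severity_threshold
    return detections, warn_cloud

if __name__ == "__main__":
    print(process_frame(frame=None))
```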
arXiv  Detail & Related papers  (2021-05-04T15:42:43Z) - Improving Robustness of Learning-based Autonomous Steering Using
  Adversarial Images [58.287120077778205]
We introduce a framework for analyzing the robustness of the learning algorithm with respect to varying quality of the image input for autonomous driving.
Using the results of the sensitivity analysis, we propose an algorithm to improve the overall performance of the "learning to steer" task.
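The sketch below illustrates one simple way such a sensitivity analysis could be run: degrade each input image at increasing severity and track the steering-prediction error. The degradation model, toy predictor, and severity scale are assumptions for illustration, not the paper's method.

```python
import numpy as np

def degrade(image, severity):
    """Stand-in degradation: additive Gaussian noise scaled by severity level."""
    noise = np.random.normal(0.0, 0.05 * severity, image.shape)
    return np.clip(image + noise, 0.0, 1.0)

def steering_sensitivity(model, images, labels, severities=(0, 1, 2, 3, 4)):
    """Mean absolute steering error as a function of input degradation severity."""
    results = {}
    for s in severities:
        preds = [model(degrade(img, s)) for img in images]
        results[s] = float(np.mean(np.abs(np.array(preds) - np.array(labels))))
    return results

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    images = [rng.random((32, 32)) for _ in range(4)]
    labels = [0.1, -0.2, 0.0, 0.3]
    toy_steering_model = lambda img: float(img.mean() - 0.5)   # toy steering-angle predictor
    print(steering_sensitivity(toy_steering_model, images, labels))
```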
arXiv  Detail & Related papers  (2021-02-26T02:08:07Z) - Towards robust sensing for Autonomous Vehicles: An adversarial
  perspective [82.83630604517249]
For autonomous vehicles, it is of primary importance that the resulting driving decisions are robust to perturbations.
Adversarial perturbations are purposefully crafted alterations of the environment or of the sensory measurements.
A careful evaluation of the vulnerabilities of their sensing system(s) is necessary in order to build and deploy safer systems.
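The abstract does not commit to a specific attack; as a generic illustration, the sketch below applies a fast-gradient-sign-style perturbation to a toy loss using finite-difference gradients. Both the loss and the attack parameters are assumptions, not the paper's evaluation protocol.

```python
import numpy as np

def numeric_gradient(loss_fn, x, eps=1e-4):
    """Finite-difference gradient of a scalar loss with respect to the input."""
    grad = np.zeros_like(x)
    for idx in np.ndindex(x.shape):
        xp, xm = x.copy(), x.copy()
        xp[idx] += eps
        xm[idx] -= eps
        grad[idx] = (loss_fn(xp) - loss_fn(xm)) / (2 * eps)
    return grad

def fgsm_perturbation(x, grad, epsilon=0.05):
    """Fast-gradient-sign-style step: nudge every input value by epsilon in the
    direction that locally increases the loss, then clip to a valid range."""
    return np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)

if __name__ == "__main__":
    x = np.full((4, 4), 0.5)
    toy_loss = lambda inp: float((inp.sum() - 7.0) ** 2)   # assumed toy objective
    x_adv = fgsm_perturbation(x, numeric_gradient(toy_loss, x))
    print(toy_loss(x), toy_loss(x_adv))                    # loss increases after the attack
```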
arXiv  Detail & Related papers  (2020-07-14T05:25:15Z) 
This list is automatically generated from the titles and abstracts of the papers on this site.
       
     