ICPR 2024 Competition on Safe Segmentation of Drive Scenes in Unstructured Traffic and Adverse Weather Conditions
- URL: http://arxiv.org/abs/2409.05327v1
- Date: Mon, 9 Sep 2024 04:42:57 GMT
- Title: ICPR 2024 Competition on Safe Segmentation of Drive Scenes in Unstructured Traffic and Adverse Weather Conditions
- Authors: Furqan Ahmed Shaik, Sandeep Nagar, Aiswarya Maturi, Harshit Kumar Sankhla, Dibyendu Ghosh, Anshuman Majumdar, Srikanth Vidapanakal, Kunal Chaudhary, Sunny Manchanda, Girish Varma
- Abstract summary: The ICPR 2024 Competition on Safe Segmentation of Drive Scenes in Unstructured Traffic and Adverse Weather Conditions served as a rigorous platform to evaluate and benchmark state-of-the-art semantic segmentation models.
A key aspect of the competition was the use and improvement of the Safe mean Intersection over Union (Safe mIoU) metric.
The results of the competition set new benchmarks in the domain, highlighting the critical role of safety in deploying autonomous vehicles in real-world scenarios.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The ICPR 2024 Competition on Safe Segmentation of Drive Scenes in Unstructured Traffic and Adverse Weather Conditions served as a rigorous platform to evaluate and benchmark state-of-the-art semantic segmentation models under challenging conditions for autonomous driving. Over several months, participants were provided with the IDD-AW dataset, consisting of 5000 high-quality RGB-NIR image pairs, each annotated at the pixel level and captured under adverse weather conditions such as rain, fog, low light, and snow. A key aspect of the competition was the use and improvement of the Safe mean Intersection over Union (Safe mIoU) metric, designed to penalize unsafe incorrect predictions that could be overlooked by traditional mIoU. This innovative metric emphasized the importance of safety in developing autonomous driving systems. The competition showed significant advancements in the field, with participants demonstrating models that excelled in semantic segmentation and prioritized safety and robustness in unstructured and adverse conditions. The results of the competition set new benchmarks in the domain, highlighting the critical role of safety in deploying autonomous vehicles in real-world scenarios. The contributions from this competition are expected to drive further innovation in autonomous driving technology, addressing the critical challenges of operating in diverse and unpredictable environments.
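The abstract does not reproduce the metric itself, so the following is a minimal sketch of one plausible severity-weighted formulation: per-class IoU is computed as usual, and safety-critical classes are additionally penalized for their mispredicted pixels. The penalty matrix, the critical-class set, and the weighting scheme here are assumptions; the official Safe mIoU definition follows the IDD-AW paper.

```python
# Hedged sketch of a severity-weighted "safe" mIoU; the penalty matrix and
# weighting are illustrative assumptions, not the official Safe mIoU formula.
import numpy as np

def confusion_matrix(gt, pred, num_classes):
    """Pixel-level confusion matrix; rows = ground truth, cols = prediction."""
    mask = (gt >= 0) & (gt < num_classes)
    idx = num_classes * gt[mask].astype(int) + pred[mask].astype(int)
    return np.bincount(idx, minlength=num_classes**2).reshape(num_classes, num_classes)

def safe_miou(gt, pred, num_classes, severity, critical):
    """
    severity: (C, C) matrix, severity[i, j] = cost of predicting class j where
              class i is the ground truth (0 on the diagonal).
    critical: indices of safety-critical classes (e.g. person, rider, vehicle).
    """
    cm = confusion_matrix(gt, pred, num_classes)
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    iou = tp / np.maximum(tp + fp + fn, 1)
    # Penalize critical classes by the severity-weighted share of their
    # mispredicted pixels (an assumption, not the competition definition).
    for c in critical:
        total = max(cm[c].sum(), 1)
        penalty = (severity[c] * cm[c]).sum() / total
        iou[c] = max(iou[c] - penalty, 0.0)
    return iou.mean()
```

The design intent is that a wrong prediction on a pedestrian or vehicle pixel costs more than the same error on, say, vegetation, which plain mIoU treats identically.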
Related papers
- Condition-Aware Multimodal Fusion for Robust Semantic Perception of Driving Scenes [56.52618054240197]
We propose a novel, condition-aware multimodal fusion approach for robust semantic perception of driving scenes.
Our method, CAFuser, uses an RGB camera input to classify environmental conditions and generate a Condition Token that guides the fusion of multiple sensor modalities.
We set the new state of the art with CAFuser on the MUSES dataset with 59.7 PQ for multimodal panoptic segmentation and 78.2 mIoU for semantic segmentation, ranking first on the public benchmarks.
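As a rough illustration of the idea (not the authors' implementation), the sketch below derives a Condition Token from pooled RGB features and uses it to gate a weighted sum over modality features; all module names, sizes, and the gating scheme are assumptions.

```python
# Minimal PyTorch sketch of condition-aware fusion in the spirit of CAFuser:
# an RGB-derived Condition Token gates how sensor modalities are mixed.
import torch
import torch.nn as nn

class ConditionAwareFusion(nn.Module):
    def __init__(self, dim=256, num_modalities=3, num_conditions=4):
        super().__init__()
        self.condition_head = nn.Linear(dim, num_conditions)  # e.g. rain/fog/night/clear
        self.token_proj = nn.Linear(dim, dim)                 # Condition Token
        self.gate = nn.Linear(dim, num_modalities)            # per-modality weights

    def forward(self, rgb_feat, modality_feats):
        # rgb_feat: (B, dim) pooled RGB features
        # modality_feats: (B, M, dim) features from RGB, lidar, event camera, ...
        cond_logits = self.condition_head(rgb_feat)           # auxiliary condition loss
        token = self.token_proj(rgb_feat)                     # (B, dim)
        weights = torch.softmax(self.gate(token), dim=-1)     # (B, M)
        fused = (weights.unsqueeze(-1) * modality_feats).sum(dim=1)
        return fused, cond_logits
```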
arXiv Detail & Related papers (2024-10-14T17:56:20Z)
- A Computer Vision Approach for Autonomous Cars to Drive Safe at Construction Zone [0.0]
A car equipped with an autonomous driving system (ADS) comes with various cutting-edge functionalities such as adaptive cruise control, collision alerts, automated parking, and more.
This paper presents an innovative and highly accurate road obstacle detection model utilizing computer vision technology that can be activated in construction zones and functions under diverse drift conditions.
arXiv Detail & Related papers (2024-09-24T07:11:00Z)
- Autonomous Vehicle Decision-Making Framework for Considering Malicious Behavior at Unsignalized Intersections [7.245712580297489]
In Autonomous Vehicles, reward signals are usually set as fixed combinations of feedback factors such as safety and efficiency.
In this paper, safety gains are modulated by variable weighting parameters to ensure that safety can be emphasized more in emergency situations.
The decision framework enables Autonomous Vehicles to make informed decisions when encountering vehicles with potentially malicious behaviors at unsignalized intersections.
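A minimal sketch of such variable weighting, assuming time-to-collision as the emergency signal and a linear ramp for the safety weight (both are illustrative choices, not the paper's formulation):

```python
# Hedged sketch of variable safety weighting: the safety term's weight grows
# as the situation becomes more dangerous, here indexed by time-to-collision.
def shaped_reward(r_safety, r_efficiency, time_to_collision,
                  w_eff=1.0, w_safe_base=1.0, w_safe_max=5.0, ttc_critical=3.0):
    # Emergency factor in [0, 1]: 0 when TTC is large, 1 as TTC -> 0.
    emergency = max(0.0, min(1.0, 1.0 - time_to_collision / ttc_critical))
    w_safe = w_safe_base + (w_safe_max - w_safe_base) * emergency
    return w_safe * r_safety + w_eff * r_efficiency
```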
arXiv Detail & Related papers (2024-09-11T03:57:44Z)
- Real-Time Environment Condition Classification for Autonomous Vehicles [3.8514288339458718]
We introduce an improved taxonomy and label hierarchy for a state-of-the-art adverse-weather dataset.
We train RECNet, a deep learning model that classifies outdoor weather and dangerous road conditions from a single RGB frame.
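A minimal single-frame classifier sketch in this spirit, assuming a ResNet-18 backbone and a two-head label hierarchy (weather and road danger); the backbone choice and class sets are assumptions, not the RECNet architecture:

```python
# Hedged sketch of a single-frame environment-condition classifier.
import torch.nn as nn
from torchvision.models import resnet18

class ConditionClassifier(nn.Module):
    def __init__(self, num_weather=5, num_road=3):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()                      # expose 512-d features
        self.backbone = backbone
        self.weather_head = nn.Linear(512, num_weather)  # e.g. clear/rain/fog/snow/night
        self.road_head = nn.Linear(512, num_road)        # e.g. dry/wet/icy

    def forward(self, x):
        feat = self.backbone(x)
        return self.weather_head(feat), self.road_head(feat)
```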
arXiv Detail & Related papers (2024-05-29T17:29:55Z)
- The RoboDrive Challenge: Drive Anytime Anywhere in Any Condition [136.32656319458158]
The 2024 RoboDrive Challenge was crafted to propel the development of driving perception technologies.
This year's challenge consisted of five distinct tracks and attracted 140 registered teams from 93 institutes across 11 countries.
The competition culminated in 15 top-performing solutions.
arXiv Detail & Related papers (2024-05-14T17:59:57Z)
- SAFE-SIM: Safety-Critical Closed-Loop Traffic Simulation with Diffusion-Controllable Adversaries [94.84458417662407]
We introduce SAFE-SIM, a controllable closed-loop safety-critical simulation framework.
Our approach yields two distinct advantages: 1) generating realistic long-tail safety-critical scenarios that closely reflect real-world conditions, and 2) providing controllable adversarial behavior for more comprehensive and interactive evaluations.
We validate our framework empirically using the nuScenes and nuPlan datasets across multiple planners, demonstrating improvements in both realism and controllability.
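As a hedged sketch of diffusion-controllable adversarial behavior (not SAFE-SIM's actual algorithm), one reverse-diffusion step can be biased by the gradient of a user-chosen cost that pulls an adversary's trajectory toward the ego vehicle; the denoiser interface and cost function are assumptions.

```python
# Hedged sketch of cost-guided denoising for a safety-critical adversary.
import torch

def guided_denoise_step(denoiser, traj_t, t, ego_traj, guidance_scale=0.5):
    # traj_t: (B, T, 2) noisy adversary trajectory at diffusion step t
    traj_t = traj_t.detach().requires_grad_(True)
    traj_prev = denoiser(traj_t, t)              # model's denoised proposal
    # Cost: mean distance to the ego trajectory; descending its gradient
    # steers the adversary closer, creating a safety-critical interaction.
    cost = (traj_prev - ego_traj).norm(dim=-1).mean()
    grad = torch.autograd.grad(cost, traj_t)[0]
    return traj_prev - guidance_scale * grad
```

The guidance scale is the controllability knob: higher values produce more aggressive adversaries, zero recovers the unconditioned traffic model.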
arXiv Detail & Related papers (2023-12-31T04:14:43Z)
- Cognitive Accident Prediction in Driving Scenes: A Multimodality Benchmark [77.54411007883962]
We propose a Cognitive Accident Prediction (CAP) method that explicitly leverages human-inspired cognition of text description on the visual observation and the driver attention to facilitate model training.
CAP is formulated by an attentive text-to-vision shift fusion module, an attentive scene context transfer module, and the driver attention guided accident prediction module.
We construct a new large-scale benchmark consisting of 11,727 in-the-wild accident videos with over 2.19 million frames.
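A minimal sketch of the text-to-vision fusion idea, assuming cross-attention from description tokens to driver-attention-weighted visual tokens; the module layout and dimensions are assumptions, not the CAP implementation.

```python
# Hedged sketch: text embeddings cross-attend to frame features, with a
# driver-attention map reweighting the visual tokens first.
import torch.nn as nn

class TextVisionFusion(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, text_tokens, vis_tokens, driver_attn):
        # text_tokens: (B, Lt, dim), vis_tokens: (B, Lv, dim)
        # driver_attn: (B, Lv) saliency over visual tokens, e.g. from gaze prediction
        vis_tokens = vis_tokens * driver_attn.unsqueeze(-1)  # attention-guided
        fused, _ = self.cross_attn(query=text_tokens, key=vis_tokens, value=vis_tokens)
        return fused  # feeds the accident-prediction head
```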
arXiv Detail & Related papers (2022-12-19T11:43:02Z)
- Differentiable Control Barrier Functions for Vision-based End-to-End Autonomous Driving [100.57791628642624]
We introduce a safety guaranteed learning framework for vision-based end-to-end autonomous driving.
We design a learning system equipped with differentiable control barrier functions (dCBFs) that is trained end-to-end by gradient descent.
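A hedged sketch of how a discrete-time CBF condition can become a differentiable training penalty; the barrier choice and decay rate are assumptions, not the paper's exact formulation.

```python
# Hedged sketch of a differentiable discrete-time CBF penalty: the barrier h
# (e.g. predicted clearance to the nearest obstacle) must decay no faster
# than the rate gamma allows, i.e. h_next >= (1 - gamma) * h_curr.
import torch

def dcbf_loss(h_curr, h_next, gamma=0.1):
    # Violations of the CBF condition contribute a hinge penalty that
    # backpropagates through the whole vision-to-control pipeline.
    return torch.relu((1.0 - gamma) * h_curr - h_next).mean()
```

Added to the driving loss, this hinge term lets safety constraints shape the policy end-to-end by gradient descent rather than being enforced only at runtime.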
arXiv Detail & Related papers (2022-03-04T16:14:33Z)
- Vision in adverse weather: Augmentation using CycleGANs with various object detectors for robust perception in autonomous racing [70.16043883381677]
In autonomous racing, the weather can change abruptly, causing significant degradation in perception, resulting in ineffective manoeuvres.
In order to improve detection in adverse weather, deep-learning-based models typically require extensive datasets captured in such conditions.
We introduce an approach of using synthesised adverse condition datasets in autonomous racing (generated using CycleGAN) to improve the performance of four out of five state-of-the-art detectors.
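A sketch of the offline augmentation step, assuming an already-trained clear-to-adverse CycleGAN generator; the checkpoint, file layout, and normalization convention are assumptions.

```python
# Hedged sketch: translate clear-weather frames to a synthetic adverse
# condition with a trained CycleGAN generator, for detector training data.
import torch
from pathlib import Path
from PIL import Image
from torchvision import transforms
from torchvision.utils import save_image

to_tensor = transforms.Compose([transforms.Resize((256, 256)), transforms.ToTensor()])

@torch.no_grad()
def augment_folder(generator, src_dir, dst_dir, device="cpu"):
    generator.eval().to(device)
    Path(dst_dir).mkdir(parents=True, exist_ok=True)
    for img_path in sorted(Path(src_dir).glob("*.png")):
        x = to_tensor(Image.open(img_path).convert("RGB")).unsqueeze(0).to(device)
        x = x * 2 - 1                    # CycleGANs are typically trained on [-1, 1]
        y = generator(x)                 # clear -> e.g. rain or fog
        save_image((y + 1) / 2, str(Path(dst_dir) / img_path.name))
```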
arXiv Detail & Related papers (2022-01-10T10:02:40Z)
- ISSAFE: Improving Semantic Segmentation in Accidents by Fusing Event-based Data [34.36975697486129]
We present a rarely addressed task, semantic segmentation in accident scenarios, along with an accident dataset, DADA-seg.
We propose a novel event-based multi-modal segmentation architecture ISSAFE.
Our approach achieves +8.2% mIoU performance gain on the proposed evaluation set, exceeding more than 10 state-of-the-art segmentation methods.
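A minimal sketch of event/RGB fusion in this spirit, assuming events are pre-binned into a voxel grid; the channel counts and concatenation-based fusion are assumptions, not the ISSAFE architecture.

```python
# Hedged sketch: encode an event voxel grid and an RGB frame separately,
# then fuse by concatenation for per-pixel segmentation logits.
import torch
import torch.nn as nn

class EventRGBFusion(nn.Module):
    def __init__(self, event_bins=5, dim=64, num_classes=19):
        super().__init__()
        self.rgb_enc = nn.Conv2d(3, dim, 3, padding=1)
        self.event_enc = nn.Conv2d(event_bins, dim, 3, padding=1)
        self.head = nn.Conv2d(2 * dim, num_classes, 1)

    def forward(self, rgb, event_voxels):
        # rgb: (B, 3, H, W); event_voxels: (B, bins, H, W) time-binned polarity sums
        f = torch.cat([self.rgb_enc(rgb), self.event_enc(event_voxels)], dim=1)
        return self.head(f)  # per-pixel class logits
```

Events fire on brightness change, so they stay informative during the motion blur and extreme dynamics of accident scenes where pure RGB segmentation degrades.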
arXiv Detail & Related papers (2020-08-20T14:03:34Z)