ECCV 2024 W-CODA: 1st Workshop on Multimodal Perception and Comprehension of Corner Cases in Autonomous Driving
- URL: http://arxiv.org/abs/2507.01735v1
- Date: Wed, 02 Jul 2025 14:10:25 GMT
- Title: ECCV 2024 W-CODA: 1st Workshop on Multimodal Perception and Comprehension of Corner Cases in Autonomous Driving
- Authors: Kai Chen, Ruiyuan Gao, Lanqing Hong, Hang Xu, Xu Jia, Holger Caesar, Dengxin Dai, Bingbing Liu, Dzmitry Tsishkou, Songcen Xu, Chunjing Xu, Qiang Xu, Huchuan Lu, Dit-Yan Yeung
- Abstract summary: We present details of the 1st W-CODA workshop, held in conjunction with ECCV 2024. W-CODA aims to explore next-generation solutions for autonomous driving corner cases, empowered by state-of-the-art multimodal perception and comprehension techniques.
- Score: 142.17164272038445
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we present details of the 1st W-CODA workshop, held in conjunction with ECCV 2024. W-CODA aims to explore next-generation solutions for autonomous driving corner cases, empowered by state-of-the-art multimodal perception and comprehension techniques. Five speakers from both academia and industry are invited to share their latest progress and opinions. We collect research papers and hold a dual-track challenge, covering both corner-case scene understanding and generation. As a pioneering effort, we will continue to bridge the gap between frontier autonomous driving techniques and fully intelligent, reliable self-driving agents that are robust to corner cases.
Related papers
- Research Challenges and Progress in the End-to-End V2X Cooperative Autonomous Driving Competition [57.698383942708]
Vehicle-to-everything (V2X) communication has emerged as a key enabler for extending perception range and enhancing driving safety.
We organized the End-to-End Autonomous Driving through V2X Cooperation Challenge, which features two tracks: cooperative temporal perception and cooperative end-to-end planning.
This paper describes the design and outcomes of the challenge and highlights key research problems, including bandwidth-aware fusion, robust multi-agent planning, and heterogeneous sensor integration.
arXiv Detail & Related papers (2025-07-29T09:06:40Z)
- Out-of-Distribution Segmentation in Autonomous Driving: Problems and State of the Art [1.3654846342364308]
We review the state of the art in Out-of-Distribution (OoD) segmentation, with a focus on road obstacle detection in automated driving as a real-world application.
We analyse the performance of existing methods on two widely used benchmarks, SegmentMeIfYouCan Obstacle Track and LostAndFound-NoKnown.
arXiv Detail & Related papers (2025-03-04T22:52:38Z)
- The Role of World Models in Shaping Autonomous Driving: A Comprehensive Survey [50.62538723793247]
Driving World Model (DWM) focuses on predicting scene evolution during the driving process.
DWM methods enable autonomous driving systems to better perceive, understand, and interact with dynamic driving environments.
arXiv Detail & Related papers (2025-02-14T18:43:15Z)
- Driving with InternVL: Outstanding Champion in the Track on Driving with Language of the Autonomous Grand Challenge at CVPR 2024 [23.193095382776725]
This report describes the methods we employed for the Driving with Language track of the CVPR 2024 Autonomous Grand Challenge.
We utilized a powerful open-source multimodal model, InternVL-1.5, and conducted a full-fledged fine-tuning on the competition dataset, DriveLM-nuScenes.
Our single model achieved a score of 0.6002 on the final leaderboard.
arXiv Detail & Related papers (2024-12-10T07:13:39Z)
- Towards Interactive and Learnable Cooperative Driving Automation: a Large Language Model-Driven Decision-Making Framework [79.088116316919]
Connected Autonomous Vehicles (CAVs) have begun open-road testing around the world, but their safety and efficiency in complex scenarios remain unsatisfactory.
This paper proposes CoDrivingLLM, an interactive and learnable LLM-driven cooperative driving framework.
arXiv Detail & Related papers (2024-09-19T14:36:00Z)
- Learn-to-Race Challenge 2022: Benchmarking Safe Learning and Cross-domain Generalisation in Autonomous Racing [12.50944966521162]
We present the results of our autonomous racing virtual challenge, based on the newly-released Learn-to-Race (L2R) simulation framework.
In this paper, we describe the new L2R Task 2.0 benchmark, with refined metrics and baseline approaches.
We also provide an overview of deployment, evaluation, and rankings for the inaugural instance of the L2R Autonomous Racing Virtual Challenge.
arXiv Detail & Related papers (2022-05-05T22:31:19Z)
- COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z)
- Indy Autonomous Challenge -- Autonomous Race Cars at the Handling Limits [81.22616193933021]
The team TUM Autonomous Motorsports will participate in the Indy Autonomous Challenge in October 2021.
It will benchmark its self-driving software stack by racing one of ten autonomous Dallara AV-21 racecars at the Indianapolis Motor Speedway.
It is an ideal testing ground for the development of autonomous driving algorithms capable of mastering the most challenging and rare situations.
arXiv Detail & Related papers (2022-02-08T11:55:05Z)
- A Software Architecture for Autonomous Vehicles: Team LRM-B Entry in the First CARLA Autonomous Driving Challenge [49.976633450740145]
This paper presents the architecture design for the navigation of an autonomous vehicle in a simulated urban environment.
Our architecture was designed to meet the requirements of the CARLA Autonomous Driving Challenge.
arXiv Detail & Related papers (2020-10-23T18:07:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.