Panoptic Perception for Autonomous Driving: A Survey
- URL: http://arxiv.org/abs/2408.15388v1
- Date: Tue, 27 Aug 2024 20:14:42 GMT
- Title: Panoptic Perception for Autonomous Driving: A Survey
- Authors: Yunge Li, Lanyu Xu
- Abstract summary: This survey reviews typical panoptic perception models and compares them in terms of performance, responsiveness, and resource utilization.
It also delves into the prevailing challenges faced in panoptic perception and explores potential trajectories for future research.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Panoptic perception represents a forefront advancement in autonomous driving technology, unifying multiple perception tasks into a singular, cohesive framework to facilitate a thorough understanding of the vehicle's surroundings. This survey reviews typical panoptic perception models for their unique inputs and architectures and compares them in terms of performance, responsiveness, and resource utilization. It also delves into the prevailing challenges faced in panoptic perception and explores potential trajectories for future research. Our goal is to furnish researchers in autonomous driving with a detailed synopsis of panoptic perception, positioning this survey as a pivotal reference in the ever-evolving landscape of autonomous driving technologies.
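Comparisons of panoptic models on "performance" typically rest on the panoptic quality (PQ) metric of Kirillov et al. (2019), which jointly scores segmentation overlap and recognition. The sketch below illustrates how PQ is computed from class-labelled boolean segment masks; it is a minimal illustration, not code from the survey, and it aggregates over all segments where the full metric averages per-class PQ values.

```python
import numpy as np

def iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection-over-union of two boolean segment masks."""
    union = np.logical_or(mask_a, mask_b).sum()
    return np.logical_and(mask_a, mask_b).sum() / union if union else 0.0

def panoptic_quality(pred_segments, gt_segments, iou_threshold=0.5):
    """PQ = (sum of matched IoUs) / (|TP| + 0.5 * |FP| + 0.5 * |FN|).

    Each argument is a list of (class_id, boolean_mask) pairs. A prediction
    matches a ground-truth segment of the same class when IoU exceeds the
    threshold; with a threshold of 0.5 the match is provably unique, so a
    greedy scan suffices.
    """
    matched, iou_sum, tp = set(), 0.0, 0
    for cls_p, mask_p in pred_segments:
        for j, (cls_g, mask_g) in enumerate(gt_segments):
            if j in matched or cls_p != cls_g:
                continue
            overlap = iou(mask_p, mask_g)
            if overlap > iou_threshold:
                matched.add(j)
                iou_sum += overlap
                tp += 1
                break
    fp = len(pred_segments) - tp          # unmatched predictions
    fn = len(gt_segments) - tp            # missed ground-truth segments
    denom = tp + 0.5 * fp + 0.5 * fn
    return iou_sum / denom if denom else 0.0
```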
Related papers
- Advancing Autonomous Driving Perception: Analysis of Sensor Fusion and Computer Vision Techniques
This project focuses on enhancing the understanding and navigation capabilities of self-driving robots.
It explores how existing detection and tracking algorithms can enable better navigation in an unknown 2D map.
arXiv Detail & Related papers (2024-11-15T19:11:58Z) - ZOPP: A Framework of Zero-shot Offboard Panoptic Perception for Autonomous Driving
Offboard perception aims to automatically generate high-quality 3D labels for autonomous driving scenes.
We propose a novel Zero-shot Offboard Panoptic Perception (ZOPP) framework for autonomous driving scenes.
ZOPP integrates the powerful zero-shot recognition capabilities of vision foundation models and 3D representations derived from point clouds.
arXiv Detail & Related papers (2024-11-08T03:52:32Z) - Exploring the Interplay Between Video Generation and World Models in Autonomous Driving: A Survey
World models and video generation are pivotal technologies in the domain of autonomous driving.
This paper investigates the relationship between these two technologies.
By analyzing the interplay between video generation and world models, this survey identifies critical challenges and future research directions.
arXiv Detail & Related papers (2024-11-05T08:58:35Z) - Exploring the Causality of End-to-End Autonomous Driving
We propose a comprehensive approach to explore and analyze the causality of end-to-end autonomous driving.
Our work is the first to shed light on the inner workings of end-to-end autonomous driving, turning the black box into a white box.
arXiv Detail & Related papers (2024-07-09T04:56:11Z) - Applications of Computer Vision in Autonomous Vehicles: Methods, Challenges and Future Directions
This paper reviews publications on computer vision and autonomous driving published over the last ten years.
In particular, we first investigate the development of autonomous driving systems and summarize the systems developed by major automotive manufacturers in different countries.
Then, computer vision applications for autonomous driving, such as depth estimation, object detection, lane detection, and traffic sign recognition, are comprehensively discussed.
arXiv Detail & Related papers (2023-11-15T16:41:18Z) - Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics
This work studies the current state of camera- and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z) - Exploring Contextual Representation and Multi-Modality for End-to-End Autonomous Driving
Recent perception systems enhance spatial understanding with sensor fusion but often lack full environmental context.
We introduce a framework that integrates three cameras to emulate the human field of view, coupled with top-down bird's-eye-view semantic data to enhance contextual representation.
Our method achieves a displacement error of 0.67 m in open-loop settings, surpassing current methods by 6.9% on the nuScenes dataset.
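The open-loop displacement error quoted above is conventionally the average L2 distance between predicted and ground-truth trajectory waypoints (ADE). Below is a minimal sketch under that assumption; the paper's exact evaluation protocol may differ.

```python
import numpy as np

def average_displacement_error(pred: np.ndarray, gt: np.ndarray) -> float:
    """ADE over a horizon: pred and gt are (T, 2) arrays of (x, y) waypoints."""
    return float(np.linalg.norm(pred - gt, axis=1).mean())

# A prediction offset by 0.5 m laterally at every future step.
pred = np.array([[1.0, 0.5], [2.0, 0.5], [3.0, 0.5]])
gt   = np.array([[1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
print(average_displacement_error(pred, gt))  # 0.5
```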
arXiv Detail & Related papers (2022-10-13T05:56:20Z) - Audiovisual Affect Assessment and Autonomous Automobiles: Applications
This contribution anticipates the corresponding challenges and outlines potential avenues towards affect modelling in a multimodal "audiovisual plus x" on-the-road context.
From the technical end, this concerns holistic passenger modelling and reliable diarisation of the individuals in a vehicle.
In conclusion, automated affect analysis has only recently matured to the point of applicability to autonomous vehicles in selected first use cases.
arXiv Detail & Related papers (2022-03-14T20:39:02Z) - Human-Vehicle Cooperative Visual Perception for Shared Autonomous Driving
This paper proposes a human-vehicle cooperative visual perception method to enhance the visual perception ability of shared autonomous driving.
Based on transfer learning, the mAP of object detection reaches 75.52%, laying a solid foundation for visual fusion.
This study pioneers a cooperative visual perception solution for shared autonomous driving, with experiments in complex real-world traffic-conflict scenarios.
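For context, the mAP figure above is the mean over object classes of average precision (AP), the area under each class's precision-recall curve. The sketch below computes AP for one class from confidence-scored detections, assuming each detection has already been matched against ground truth; real evaluators such as Pascal VOC or COCO add IoU-based matching and interpolation details omitted here.

```python
import numpy as np

def average_precision(scores, is_match, num_gt):
    """AP for one class.

    scores: detection confidences; is_match: True where a detection hit a
    previously unmatched ground-truth box; num_gt: ground-truth box count.
    """
    order = np.argsort(scores)[::-1]        # rank detections by confidence
    hits = np.asarray(is_match)[order]
    tp = np.cumsum(hits)                    # true positives up to each rank
    fp = np.cumsum(~hits)                   # false positives up to each rank
    recall = tp / num_gt
    precision = tp / (tp + fp)
    ap, prev_recall = 0.0, 0.0
    for p, r in zip(precision, recall):     # step-wise area under the PR curve
        ap += p * (r - prev_recall)
        prev_recall = r
    return ap

# Three detections, two ground-truth boxes: AP = 1.0*0.5 + (2/3)*0.5 ≈ 0.833.
print(average_precision([0.9, 0.8, 0.6], [True, False, True], num_gt=2))
```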
arXiv Detail & Related papers (2021-12-17T03:17:05Z) - Studying Person-Specific Pointing and Gaze Behavior for Multimodal Referencing of Outside Objects from a Moving Vehicle
Hand pointing and eye gaze have been extensively investigated in automotive applications for object selection and referencing.
Existing outside-the-vehicle referencing methods focus on a static situation, whereas the situation in a moving vehicle is highly dynamic and subject to safety-critical constraints.
We investigate the specific characteristics of each modality and the interaction between them when used in the task of referencing outside objects.
arXiv Detail & Related papers (2020-09-23T14:56:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.