Advancing Autonomous Driving Perception: Analysis of Sensor Fusion and Computer Vision Techniques
- URL: http://arxiv.org/abs/2411.10535v1
- Date: Fri, 15 Nov 2024 19:11:58 GMT
- Title: Advancing Autonomous Driving Perception: Analysis of Sensor Fusion and Computer Vision Techniques
- Authors: Urvishkumar Bharti, Vikram Shahapur
- Abstract summary: This project focuses on enhancing the understanding and navigation capabilities of self-driving robots.
It explores how existing detection and tracking algorithms can improve navigation in an unknown 2D map.
- Abstract: In autonomous driving, perception systems are pivotal as they interpret sensory data to understand the environment, which is essential for decision-making and planning. Ensuring the safety of these perception systems is fundamental to achieving high-level autonomy, allowing us to confidently delegate driving and monitoring tasks to machines. This report aims to enhance the safety of perception systems by examining and summarizing the latest advancements in vision-based systems and metrics for perception tasks in autonomous driving. The report also underscores significant achievements and recognized challenges in current research in this field. This project focuses on enhancing the understanding and navigation capabilities of self-driving robots through depth-based perception and computer vision techniques. Specifically, it explores how existing detection and tracking algorithms can improve navigation in an unknown 2D map, and how depth-based perception can further enhance the navigation capabilities of wheel-based robots to improve autonomous driving perception.
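The depth-based 2D navigation idea in the abstract can be sketched as projecting depth-image pixels into a top-down occupancy grid that a planner can then search. The following is a minimal illustration, not the authors' implementation: the function name, the pinhole intrinsics (`fx`, `cx`), and all grid parameters are assumptions introduced for this sketch.

```python
import numpy as np

def depth_to_occupancy_grid(depth, fx, cx, cell_size=0.25, grid_dim=20, max_range=5.0):
    """Project a depth image into a top-down 2D occupancy grid (hypothetical sketch).

    depth: (H, W) array of depths in meters (0 = invalid reading).
    fx, cx: horizontal focal length and principal point of a pinhole camera.
    Returns a (grid_dim, grid_dim) uint8 grid where 1 marks an occupied cell;
    the robot sits at row 0, column grid_dim // 2, facing the +z (forward) axis.
    """
    grid = np.zeros((grid_dim, grid_dim), dtype=np.uint8)
    us = np.arange(depth.shape[1])
    for row in depth:                         # each image row contributes 3D points
        valid = (row > 0) & (row < max_range)
        z = row[valid]                        # forward distance in meters
        x = (us[valid] - cx) * z / fx         # lateral offset via the pinhole model
        gz = (z / cell_size).astype(int)                   # forward cell index
        gx = (x / cell_size).astype(int) + grid_dim // 2   # lateral cell index
        keep = (gz >= 0) & (gz < grid_dim) & (gx >= 0) & (gx < grid_dim)
        grid[gz[keep], gx[keep]] = 1          # mark the cell as an obstacle
    return grid

# Usage: a flat wall 2 m ahead maps to a line of occupied cells in one forward row.
depth = np.full((4, 8), 2.0)
grid = depth_to_occupancy_grid(depth, fx=4.0, cx=4.0)
```

A grid like this is the usual bridge between depth perception and 2D planners: once obstacles live in grid cells, standard search (A*, Dijkstra) over free cells yields a path, which is the navigation-into-an-unknown-2D-map setting the abstract describes.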
Related papers
- A Comprehensive Review of 3D Object Detection in Autonomous Driving: Technological Advances and Future Directions [11.071271817366739]
3D object perception has become a crucial component in the development of autonomous driving systems.
This review extensively summarizes traditional 3D object detection methods, focusing on camera-based, LiDAR-based, and fusion detection techniques.
We discuss future directions, including methods to improve accuracy such as temporal perception, occupancy grids, and end-to-end learning frameworks.
arXiv Detail & Related papers (2024-08-28T01:08:33Z) - Panoptic Perception for Autonomous Driving: A Survey [0.0]
This survey reviews typical panoptic perception models and compares them in terms of performance, responsiveness, and resource utilization.
It also delves into the prevailing challenges faced in panoptic perception and explores potential trajectories for future research.
arXiv Detail & Related papers (2024-08-27T20:14:42Z) - Research on the Application of Computer Vision Based on Deep Learning in Autonomous Driving Technology [9.52658065214428]
This article analyzes in detail the application of deep learning in image recognition, real-time target tracking and classification, environment perception and decision support, and path planning and navigation.
The proposed system has an accuracy of over 98% in image recognition, target tracking and classification, and also demonstrates efficient performance and practicality.
arXiv Detail & Related papers (2024-06-01T16:41:24Z) - Floor extraction and door detection for visually impaired guidance [78.94595951597344]
Finding obstacle-free paths in unknown environments is a big navigation issue for visually impaired people and autonomous robots.
New devices based on computer vision systems can help impaired people to overcome the difficulties of navigating in unknown environments in safe conditions.
This work proposes a combination of sensors and algorithms that can lead to a navigation system for visually impaired people.
arXiv Detail & Related papers (2024-01-30T14:38:43Z) - Applications of Computer Vision in Autonomous Vehicles: Methods, Challenges and Future Directions [2.693342141713236]
This paper reviews publications on computer vision and autonomous driving that are published during the last ten years.
In particular, we first investigate the development of autonomous driving systems and summarize these systems that are developed by the major automotive manufacturers from different countries.
Then, computer vision applications for autonomous driving, such as depth estimation, object detection, lane detection, and traffic sign recognition, are comprehensively reviewed.
arXiv Detail & Related papers (2023-11-15T16:41:18Z) - Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work aims to carry out a study on the current scenario of camera and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z) - Learning Deep Sensorimotor Policies for Vision-based Autonomous Drone Racing [52.50284630866713]
Existing systems often require hand-engineered components for state estimation, planning, and control.
This paper tackles the vision-based autonomous-drone-racing problem by learning deep sensorimotor policies.
arXiv Detail & Related papers (2022-10-26T19:03:17Z) - Exploring Contextual Representation and Multi-Modality for End-to-End Autonomous Driving [58.879758550901364]
Recent perception systems enhance spatial understanding with sensor fusion but often lack full environmental context.
We introduce a framework that integrates three cameras to emulate the human field of view, coupled with top-down bird's-eye-view semantic data to enhance contextual representation.
Our method achieves a displacement error of 0.67 m in open-loop settings, surpassing current methods by 6.9% on the nuScenes dataset.
arXiv Detail & Related papers (2022-10-13T05:56:20Z) - Deep Learning for Embodied Vision Navigation: A Survey [108.13766213265069]
"Embodied visual navigation" problem requires an agent to navigate in a 3D environment mainly rely on its first-person observation.
This paper attempts to establish an outline of the current works in the field of embodied visual navigation by providing a comprehensive literature survey.
arXiv Detail & Related papers (2021-07-07T12:09:04Z) - Improving Robustness of Learning-based Autonomous Steering Using Adversarial Images [58.287120077778205]
We introduce a framework for analyzing the robustness of the learning algorithm with respect to varying image-input quality in autonomous driving.
Using the results of sensitivity analysis, we propose an algorithm to improve the overall performance of the task of "learning to steer".
arXiv Detail & Related papers (2021-02-26T02:08:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.