BCSSN: Bi-direction Compact Spatial Separable Network for Collision
Avoidance in Autonomous Driving
- URL: http://arxiv.org/abs/2303.06714v1
- Date: Sun, 12 Mar 2023 17:35:57 GMT
- Title: BCSSN: Bi-direction Compact Spatial Separable Network for Collision
Avoidance in Autonomous Driving
- Authors: Haichuan Li, Liguo Zhou, Alois Knoll
- Abstract summary: Rule-based systems, decision trees, Markov decision processes, and Bayesian networks have been some of the popular methods used to tackle the complexities of traffic conditions and avoid collisions.
With the emergence of deep learning, many researchers have turned towards CNN-based methods to improve the performance of collision avoidance.
We propose a CNN-based method that overcomes the limitation by establishing feature correlations between regions in sequential images using variants of attention.
- Score: 4.392212820170972
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Autonomous driving has been an active area of research and development, with
various strategies being explored for decision-making in autonomous vehicles.
Rule-based systems, decision trees, Markov decision processes, and Bayesian
networks have been some of the popular methods used to tackle the complexities
of traffic conditions and avoid collisions. However, with the emergence of deep
learning, many researchers have turned towards CNN-based methods to improve the
performance of collision avoidance. Despite the promising results achieved by
some CNN-based methods, the failure to establish correlations between
sequential images often leads to more collisions. In this paper, we propose a
CNN-based method that overcomes the limitation by establishing feature
correlations between regions in sequential images using variants of attention.
Our method combines the advantages of CNN in capturing regional features with a
bi-directional LSTM to enhance the relationship between different local areas.
Additionally, we use an encoder to improve computational efficiency. Our method
takes "Bird's Eye View" graphs generated from camera and LiDAR sensors as
input, simulates the position (x, y) and head offset angle (Yaw) to generate
future trajectories. Experiment results demonstrate that our proposed method
outperforms existing vision-based strategies, achieving an average of only 3.7
collisions per 1000 miles of driving distance on the L5kit test set. This
significantly improves the success rate of collision avoidance and provides a
promising solution for autonomous driving.
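The pipeline described in the abstract can be sketched at toy scale. The following is a minimal, illustrative NumPy sketch, not the authors' implementation: average pooling over a grid stands in for the CNN's regional feature extraction, a plain tanh RNN run in both directions (with shared weights) stands in for the bi-directional LSTM, scaled dot-product self-attention correlates regions, and a linear head regresses (x, y, yaw) over a short horizon. All layer shapes, names, and the single-frame input are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def regional_features(bev, grid=4):
    """Average-pool a BEV frame (H, W, C) into grid*grid regional vectors
    (a stand-in for the CNN backbone's regional features)."""
    h, w, c = bev.shape
    gh, gw = h // grid, w // grid
    regions = [bev[i*gh:(i+1)*gh, j*gw:(j+1)*gw].mean(axis=(0, 1))
               for i in range(grid) for j in range(grid)]
    return np.stack(regions)                      # (grid*grid, C)

def bidirectional_rnn(x, w_in, w_rec):
    """Plain tanh RNN over the region sequence, run forwards and backwards
    (a simplified, weight-shared stand-in for a bi-directional LSTM)."""
    def run(seq):
        h, out = np.zeros(w_rec.shape[0]), []
        for step in seq:
            h = np.tanh(step @ w_in + h @ w_rec)
            out.append(h)
        return np.stack(out)
    # Concatenate forward and (re-reversed) backward hidden states.
    return np.concatenate([run(x), run(x[::-1])[::-1]], axis=-1)

def self_attention(h):
    """Scaled dot-product self-attention across regional features."""
    scores = h @ h.T / np.sqrt(h.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    return (w / w.sum(axis=-1, keepdims=True)) @ h

feat_dim, hidden, horizon = 3, 8, 5
bev = rng.random((64, 64, feat_dim))              # fake single BEV frame
feats = regional_features(bev)                    # (16, 3)
w_in = rng.standard_normal((feat_dim, hidden)) * 0.1
w_rec = rng.standard_normal((hidden, hidden)) * 0.1
ctx = self_attention(bidirectional_rnn(feats, w_in, w_rec))   # (16, 16)
w_head = rng.standard_normal((ctx.shape[-1], horizon * 3)) * 0.1
trajectory = (ctx.mean(axis=0) @ w_head).reshape(horizon, 3)  # (x, y, yaw) per step
print(trajectory.shape)                           # (5, 3)
```

With untrained random weights the predicted trajectory is meaningless; the sketch only shows how the stages compose and what shapes flow between them.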
Related papers
- ContourCraft: Learning to Resolve Intersections in Neural Multi-Garment Simulations [70.38866232749886]
We present ContourCraft, a learning-based solution for handling intersections in neural cloth simulations.
ContourCraft robustly recovers from intersections introduced through missed collisions, self-penetrating bodies, or errors in manually designed multi-layer outfits.
arXiv Detail & Related papers (2024-05-15T17:25:59Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- Sequential Spatial Network for Collision Avoidance in Autonomous Driving [5.108647313751154]
We develop an algorithm that takes into account the advantages of CNN in capturing regional features while establishing feature correlation between regions using variants of attention.
The average number of collisions is 19.4 per 10000 frames of driving distance, which greatly improves the success rate of collision avoidance.
arXiv Detail & Related papers (2023-03-12T17:43:32Z)
- NeurIPS 2022 Competition: Driving SMARTS [60.948652154552136]
Driving SMARTS is a regular competition designed to tackle problems caused by the distribution shift in dynamic interaction contexts.
The proposed competition supports methodologically diverse solutions, such as reinforcement learning (RL) and offline learning methods.
arXiv Detail & Related papers (2022-11-14T17:10:53Z)
- Monocular Vision-based Prediction of Cut-in Maneuvers with LSTM Networks [0.0]
This study proposes a method to predict potentially dangerous cut-in maneuvers happening in the ego lane.
We follow a computer vision-based approach that only employs a single in-vehicle RGB camera.
Our algorithm consists of a CNN-based vehicle detection and tracking step and an LSTM-based maneuver classification step.
arXiv Detail & Related papers (2022-03-21T02:30:36Z)
- Active Learning of Neural Collision Handler for Complex 3D Mesh Deformations [68.0524382279567]
We present a robust learning algorithm to detect and handle collisions in 3D deforming meshes.
Our approach outperforms supervised learning methods and achieves 93.8-98.1% accuracy.
arXiv Detail & Related papers (2021-10-08T04:08:31Z)
- Safe Deep Q-Network for Autonomous Vehicles at Unsignalized Intersection [4.94950858749529]
We propose a safe DRL approach for navigation through crowds of pedestrians while making a left turn at an unsignalized intersection.
Our method uses two long short-term memory (LSTM) models that are trained to generate the perceived state of the environment and the future trajectories of pedestrians.
A future collision prediction algorithm based on the future trajectories of the ego vehicle and pedestrians is used to mask unsafe actions if the system predicts a collision.
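The action-masking step summarized above can be illustrated with a toy sketch (not the authors' implementation): constant-velocity rollouts stand in for the LSTM trajectory predictors, and any action whose predicted ego path comes within an assumed safety radius of the pedestrian's predicted path has its Q-value masked out before the greedy argmax. The names, safety radius, and dynamics are all illustrative assumptions.

```python
import numpy as np

SAFETY_RADIUS = 2.0   # metres (assumed threshold)

def rollout(pos, vel, steps=10, dt=0.1):
    """Constant-velocity trajectory prediction over `steps` timesteps
    (a stand-in for the learned trajectory predictors)."""
    return pos + vel * dt * np.arange(1, steps + 1)[:, None]   # (steps, 2)

def masked_greedy_action(q_values, ego_pos, action_vels, ped_pos, ped_vel):
    """Mask Q-values of actions whose predicted ego path passes too close
    to the pedestrian's predicted path, then pick the best safe action."""
    ped_traj = rollout(ped_pos, ped_vel)
    safe_q = q_values.astype(float).copy()
    for a, vel in enumerate(action_vels):
        ego_traj = rollout(ego_pos, vel)
        min_dist = np.linalg.norm(ego_traj - ped_traj, axis=1).min()
        if min_dist < SAFETY_RADIUS:
            safe_q[a] = -np.inf            # predicted collision: never selected
    return int(np.argmax(safe_q))

# Ego at the origin; stationary pedestrian 3 m ahead; action 0 drives into it.
q = np.array([1.0, 0.5, 0.2])
vels = [np.array([5.0, 0.0]),    # accelerate straight (collides here)
        np.array([0.0, 0.0]),    # stop
        np.array([2.0, 2.0])]    # slow diagonal turn
a = masked_greedy_action(q, np.zeros(2), vels, np.array([3.0, 0.0]), np.zeros(2))
print(a)  # action 0 has the highest Q but is masked; the best safe action wins
```

Even though action 0 has the highest Q-value, its rollout intersects the pedestrian's predicted position, so the mask forces the policy onto the highest-valued safe action instead.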
arXiv Detail & Related papers (2021-06-08T17:48:56Z)
- Pedestrian Collision Avoidance for Autonomous Vehicles at Unsignalized Intersection Using Deep Q-Network [4.94950858749529]
This paper explores Autonomous Vehicle (AV) navigation in crowded, unsignalized intersections.
We compare the performance of different deep reinforcement learning methods trained on our reward function and state representation.
arXiv Detail & Related papers (2021-05-01T03:02:21Z)
- Divide-and-Conquer for Lane-Aware Diverse Trajectory Prediction [71.97877759413272]
Trajectory prediction is a safety-critical tool for autonomous vehicles to plan and execute actions.
Recent methods have achieved strong performances using Multi-Choice Learning objectives like winner-takes-all (WTA) or best-of-many.
Our work addresses two key challenges in trajectory prediction: learning diverse outputs, and improving predictions by imposing constraints using driving knowledge.
arXiv Detail & Related papers (2021-04-16T17:58:56Z)
- Anchor-free Small-scale Multispectral Pedestrian Detection [88.7497134369344]
We propose a method for effective and efficient multispectral fusion of the two modalities in an adapted single-stage anchor-free base architecture.
We aim at learning pedestrian representations based on object center and scale rather than direct bounding box predictions.
Results show our method's effectiveness in detecting small-scaled pedestrians.
arXiv Detail & Related papers (2020-08-19T13:13:01Z)
- VTGNet: A Vision-based Trajectory Generation Network for Autonomous Vehicles in Urban Environments [26.558394047144006]
We develop an uncertainty-aware end-to-end trajectory generation method based on imitation learning.
Under various weather and lighting conditions, our network can reliably generate trajectories in different urban environments.
The proposed method achieves better cross-scene/platform driving results than the state-of-the-art (SOTA) end-to-end control method.
arXiv Detail & Related papers (2020-04-27T06:17:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.