Depth and Image Fusion for Road Obstacle Detection Using Stereo Camera
- URL: http://arxiv.org/abs/2501.07245v1
- Date: Mon, 13 Jan 2025 11:54:26 GMT
- Title: Depth and Image Fusion for Road Obstacle Detection Using Stereo Camera
- Authors: Oleg Perezyabov, Mikhail Gavrilenkov, Ilya Afanasyev
- Abstract summary: This paper is devoted to the detection of objects on a road, performed with a combination of two methods.
Since neither the time at which an object appears on the road nor its size and shape are known in advance, ML/DL-based approaches are not applicable.
To solve this problem, we developed a depth and image fusion method that complements an RGB-based search for small low-contrast objects with stereo-image-based obstacle detection using SLIC superpixel segmentation.
- Abstract: This paper is devoted to the detection of objects on a road, performed with a combination of two methods based on both the use of depth information and video analysis of data from a stereo camera. Since neither the time at which an object appears on the road nor its size and shape are known in advance, ML/DL-based approaches are not applicable. The task is further complicated by variations in artificial illumination, inhomogeneous road surface texture, and the unknown character and features of the object. To solve this problem, we developed a depth and image fusion method that complements an RGB-based search for small low-contrast objects with stereo-image-based obstacle detection using SLIC superpixel segmentation. We conducted experiments with static and low-speed obstacles in an underground parking lot and demonstrated that the developed technique successfully detects and even tracks small objects, such as parking infrastructure elements, items left on the road, wheels, and dropped boxes.
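The fusion idea in the abstract can be sketched as the union of two complementary binary masks: one from an RGB contrast search and one from deviations between the measured stereo depth and the expected flat-road depth. The function names and tolerance threshold below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def depth_obstacle_mask(depth, ground_depth, tol=0.05):
    """Flag pixels whose measured stereo depth deviates from the
    expected (flat-road) depth by more than a tolerance fraction."""
    return np.abs(depth - ground_depth) > tol * ground_depth

def fuse_obstacle_masks(rgb_contrast_mask, depth_mask):
    """Union of the two complementary detectors: small low-contrast
    objects found in RGB, larger obstacles found from stereo depth."""
    return np.logical_or(rgb_contrast_mask, depth_mask)
```

In the paper's pipeline the depth branch additionally groups pixels with SLIC superpixels; here a plain per-pixel threshold stands in for that step.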
Related papers
- Retrieval Robust to Object Motion Blur [54.34823913494456]
We propose a method for object retrieval in images that are affected by motion blur.
We present the first large-scale datasets for blurred object retrieval.
Our method outperforms state-of-the-art retrieval methods on the new blur-retrieval datasets.
arXiv Detail & Related papers (2024-04-27T23:22:39Z)
- MOSE: Boosting Vision-based Roadside 3D Object Detection with Scene Cues [12.508548561872553]
We propose a novel framework, namely MOSE, for MOnocular 3D object detection with Scene cuEs.
A scene cue bank is designed to aggregate scene cues from multiple frames of the same scene.
A transformer-based decoder lifts the aggregated scene cues as well as the 3D position embeddings for 3D object location.
arXiv Detail & Related papers (2024-04-08T08:11:56Z)
- SalienDet: A Saliency-based Feature Enhancement Algorithm for Object Detection for Autonomous Driving [160.57870373052577]
We propose a saliency-based OD algorithm (SalienDet) to detect unknown objects.
Our SalienDet utilizes a saliency-based algorithm to enhance image features for object proposal generation.
We design a dataset relabeling approach to differentiate unknown objects from all objects in the training sample set to achieve open-world detection.
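The summary does not specify which saliency algorithm SalienDet builds on; as a rough, hedged illustration of the kind of saliency map usable for object-proposal enhancement, a minimal spectral-residual saliency (Hou & Zhang) can be written in plain NumPy:

```python
import numpy as np

def spectral_residual_saliency(gray):
    """Spectral-residual saliency: subtract the smoothed log-amplitude
    spectrum from the original; what survives corresponds to
    statistically 'unexpected' image content, a cue for unknown objects."""
    F = np.fft.fft2(gray)
    log_amp = np.log(np.abs(F) + 1e-8)
    phase = np.angle(F)
    # 3x3 box filter of the log-amplitude via circular shifts
    avg = sum(
        np.roll(np.roll(log_amp, i, axis=0), j, axis=1)
        for i in (-1, 0, 1) for j in (-1, 0, 1)
    ) / 9.0
    residual = log_amp - avg
    # reconstruct with residual amplitude and original phase
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / (sal.max() + 1e-12)
```

A lone bright pixel on a uniform background, for example, receives high saliency because it contributes flat spectral content the smoothed spectrum cannot explain.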
arXiv Detail & Related papers (2023-05-11T16:19:44Z)
- Spatio-Temporal Context Modeling for Road Obstacle Detection [12.464149169670735]
A data-driven spatio-temporal context model of the driving scene is constructed from the layouts of the training data.
Obstacles are detected via state-of-the-art object detection algorithms, and the results are combined with the generated scene.
arXiv Detail & Related papers (2023-01-19T07:06:35Z)
- DETR4D: Direct Multi-View 3D Object Detection with Sparse Attention [50.11672196146829]
3D object detection with surround-view images is an essential task for autonomous driving.
We propose DETR4D, a Transformer-based framework that explores sparse attention and direct feature query for 3D object detection in multi-view images.
arXiv Detail & Related papers (2022-12-15T14:18:47Z)
- Perspective Aware Road Obstacle Detection [104.57322421897769]
We show that road obstacle detection techniques ignore the fact that, in practice, the apparent size of the obstacles decreases as their distance to the vehicle increases.
We leverage this by computing a scale map encoding the apparent size of a hypothetical object at every image location.
We then leverage this perspective map to generate training data by injecting onto the road synthetic objects whose size corresponds to the perspective foreshortening.
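A scale map of the kind described follows from flat-road pinhole geometry: a pixel v rows below the horizon sees the road at depth Z = f·h_cam/v, so a hypothetical object of height h_obj standing there spans f·h_obj/Z = (h_obj/h_cam)·v pixels, and the focal length cancels. The camera and object heights below are made-up example values, not parameters from the paper:

```python
import numpy as np

def scale_map(height, width, horizon_row, cam_height=1.5, obj_height=0.3):
    """Apparent pixel size of a hypothetical obj_height-metre object
    standing on a flat road, for every image location (constant per row).
    Under the flat-road pinhole model:
    apparent size = (obj_height / cam_height) * rows_below_horizon."""
    rows = np.arange(height, dtype=float)
    below = np.clip(rows - horizon_row, 0.0, None)  # 0 at/above horizon
    apparent = (obj_height / cam_height) * below
    return np.repeat(apparent[:, None], width, axis=1)
```

The map captures exactly the foreshortening the paper exploits: the same object appears larger near the bottom of the image (close to the vehicle) and shrinks toward the horizon.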
arXiv Detail & Related papers (2022-10-04T17:48:42Z)
- CrossDTR: Cross-view and Depth-guided Transformers for 3D Object Detection [10.696619570924778]
We propose Cross-view and Depth-guided Transformers for 3D Object Detection, CrossDTR.
Our method surpasses existing multi-camera methods by about 10 percent in pedestrian detection and about 3 percent in overall mAP and NDS metrics.
arXiv Detail & Related papers (2022-09-27T16:23:12Z) - You Better Look Twice: a new perspective for designing accurate
detectors with reduced computations [56.34005280792013]
BLT-net is a new low-computation two-stage object detection architecture.
It reduces computations by separating objects from the background using a lightweight first stage.
The resulting image proposals are then processed in the second stage by a highly accurate model.
arXiv Detail & Related papers (2021-07-21T12:39:51Z) - CFTrack: Center-based Radar and Camera Fusion for 3D Multi-Object
Tracking [9.62721286522053]
We propose an end-to-end network for joint object detection and tracking based on radar and camera sensor fusion.
Our proposed method uses a center-based radar-camera fusion algorithm for object detection and utilizes a greedy algorithm for object association.
We evaluate our method on the challenging nuScenes dataset, where it achieves 20.0 AMOTA and outperforms all vision-based 3D tracking methods in the benchmark.
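The greedy association step can be sketched as matching detections to existing tracks in ascending order of center distance, each track and detection used at most once. CFTrack's actual cost and gating are not given here, so the Euclidean distance and `max_dist` gate below are illustrative assumptions:

```python
import numpy as np

def greedy_associate(track_centers, det_centers, max_dist=2.0):
    """Greedily pair tracks with detections by ascending center
    distance; stop once the best remaining pair exceeds max_dist."""
    pairs = []
    if len(track_centers) and len(det_centers):
        # pairwise distance matrix, tracks x detections
        d = np.linalg.norm(
            track_centers[:, None, :] - det_centers[None, :, :], axis=-1
        )
        order = np.dstack(np.unravel_index(np.argsort(d, axis=None), d.shape))[0]
        used_t, used_d = set(), set()
        for t, j in order:
            if d[t, j] > max_dist:
                break  # all remaining pairs are even farther apart
            if t not in used_t and j not in used_d:
                pairs.append((int(t), int(j)))
                used_t.add(t)
                used_d.add(j)
    return pairs
```

Greedy matching is a common lightweight alternative to Hungarian assignment in trackers: it is O(n² log n) in the number of pairs and usually adequate when detections are well separated.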
arXiv Detail & Related papers (2021-07-11T23:56:53Z) - Moving object detection for visual odometry in a dynamic environment
based on occlusion accumulation [31.143322364794894]
We propose a moving object detection algorithm that uses RGB-D images.
The proposed algorithm does not require estimating a background model.
We use dense visual odometry (DVO) as a VO method with a bi-square regression weight.
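The bi-square (Tukey biweight) regression weight mentioned above down-weights large photometric residuals to zero, so pixels on moving objects contribute little to the odometry estimate. A minimal sketch, using the conventional tuning constant c = 4.685 (the paper's own constant and residual scaling may differ):

```python
import numpy as np

def bisquare_weight(residual, c=4.685):
    """Tukey's bisquare weight: 1 at zero residual, decaying smoothly
    to exactly 0 for |residual| >= c, which rejects outliers outright."""
    r = np.abs(np.asarray(residual, dtype=float)) / c
    return np.where(r < 1.0, (1.0 - r ** 2) ** 2, 0.0)
```

Unlike a Huber weight, which merely shrinks the influence of outliers, the bisquare cuts it to zero, which is what makes residuals from moving objects drop out of the dense-VO least-squares problem entirely.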
arXiv Detail & Related papers (2020-09-18T11:01:46Z) - Drone-based RGB-Infrared Cross-Modality Vehicle Detection via
Uncertainty-Aware Learning [59.19469551774703]
Drone-based vehicle detection aims at finding the vehicle locations and categories in an aerial image.
We construct a large-scale drone-based RGB-Infrared vehicle detection dataset, termed DroneVehicle.
Our DroneVehicle collects 28,439 RGB-Infrared image pairs, covering urban roads, residential areas, parking lots, and other scenarios from day to night.
arXiv Detail & Related papers (2020-03-05T05:29:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.