Detecting and Mapping Trees in Unstructured Environments with a Stereo
Camera and Pseudo-Lidar
- URL: http://arxiv.org/abs/2103.15967v1
- Date: Mon, 29 Mar 2021 21:46:57 GMT
- Title: Detecting and Mapping Trees in Unstructured Environments with a Stereo
Camera and Pseudo-Lidar
- Authors: Brian H. Wang, Carlos Diaz-Ruiz, Jacopo Banfi, and Mark Campbell
- Abstract summary: We present a method for detecting and mapping trees in noisy stereo camera point clouds.
Inspired by recent advancements in 3-D object detection, we train a PointRCNN detector to recognize trees in forest-like environments.
Results demonstrate robust tree recognition in noisy stereo data at ranges of up to 7 meters, on 720p resolution images from a Stereolabs ZED 2 camera.
- Score: 3.9243546740194586
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a method for detecting and mapping trees in noisy stereo camera
point clouds, using a learned 3-D object detector. Inspired by recent
advancements in 3-D object detection using a pseudo-lidar representation for
stereo data, we train a PointRCNN detector to recognize trees in forest-like
environments. We generate detector training data with a novel automatic
labeling process that clusters a fused global point cloud. This process
annotates large stereo point cloud training data sets with minimal user
supervision, and unlike previous pseudo-lidar detection pipelines, requires no
3-D ground truth from other sensors such as lidar. Our mapping system
additionally uses a Kalman filter to associate detections and consistently
estimate the positions and sizes of trees. We collect a data set for tree
detection consisting of 8680 stereo point clouds, and validate our method on an
outdoor test sequence. Our results demonstrate robust tree recognition in
noisy stereo data at ranges of up to 7 meters, on 720p resolution images from a
Stereolabs ZED 2 camera. Code and data are available at
https://github.com/brian-h-wang/pseudolidar-tree-detection.
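The pseudo-lidar representation named in the title is the standard back-projection of a stereo depth map into a 3-D point cloud through the camera intrinsics. The sketch below illustrates that conversion only; it is not taken from the authors' released code, and the intrinsics and depth map are placeholder values.

```python
import numpy as np

def depth_to_pseudolidar(depth, fx, fy, cx, cy):
    """Back-project a dense depth map (H x W, meters) into an N x 3 point cloud.
    fx, fy, cx, cy are pinhole intrinsics; the values below are illustrative,
    not the ZED 2 calibration used in the paper."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx                       # right of the optical axis
    y = (v - cy) * z / fy                       # below the optical axis
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    valid = np.isfinite(points).all(axis=1) & (points[:, 2] > 0)
    return points[valid]

# Example: a synthetic 720p depth map of a flat surface 5 m away.
depth = np.full((720, 1280), 5.0)
cloud = depth_to_pseudolidar(depth, fx=700.0, fy=700.0, cx=640.0, cy=360.0)
print(cloud.shape)                              # (921600, 3)
```

A 3-D detector such as PointRCNN can then consume this point cloud much as it would a lidar sweep, which is the premise of the pseudo-lidar line of work.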
Related papers
- Lightweight Multi-Drone Detection and 3D-Localization via YOLO [1.284647943889634]
We present and evaluate a method to perform real-time multiple drone detection and three-dimensional localization.
We use the state-of-the-art tiny-YOLOv4 object detection algorithm together with stereo triangulation.
Our computer vision approach eliminates the need for computationally expensive stereo matching algorithms.
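The 3-D localization step can be read as standard stereo triangulation: depth follows from Z = f·B/d, where f is the focal length, B the stereo baseline, and d the horizontal disparity between the matched detections in the left and right images. The sketch below is a generic illustration under that assumption, not the authors' implementation, and all camera parameters are placeholders.

```python
def triangulate_detection(u_left, u_right, v, fx, fy, cx, cy, baseline):
    """Estimate the 3-D position of a detection from the horizontal pixel
    coordinates of its bounding-box center in the left and right images.
    Intrinsics and baseline are placeholder values, not a real calibration."""
    disparity = u_left - u_right        # pixels; assumes rectified images
    z = fx * baseline / disparity       # depth along the optical axis (m)
    x = (u_left - cx) * z / fx
    y = (v - cy) * z / fy
    return x, y, z

# Example: a drone detected at u = 700 px (left) and u = 660 px (right).
print(triangulate_detection(700, 660, 300, fx=800.0, fy=800.0,
                            cx=640.0, cy=360.0, baseline=0.12))
```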
arXiv Detail & Related papers (2022-02-18T09:41:23Z)
- LIGA-Stereo: Learning LiDAR Geometry Aware Representations for Stereo-based 3D Detector [80.7563981951707]
We propose LIGA-Stereo to learn stereo-based 3D detectors under the guidance of high-level geometry-aware representations of LiDAR-based detection models.
Compared with the state-of-the-art stereo detector, our method improves the 3D detection performance for cars, pedestrians, and cyclists by 10.44%, 5.69%, and 5.97% mAP, respectively.
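The guidance described here is a form of cross-modal distillation: intermediate features of the stereo detector are trained to imitate those of a frozen LiDAR-based teacher, on top of the usual detection losses. A minimal sketch of such a feature-imitation term follows; it is a generic illustration, not LIGA-Stereo's exact loss, and the shapes and foreground mask are assumptions.

```python
import numpy as np

def feature_imitation_loss(stereo_feat, lidar_feat, fg_mask):
    """Mean squared error between student (stereo) and frozen teacher (lidar)
    BEV feature maps, restricted to foreground cells given by a 0/1 mask.
    Shapes: (C, H, W) features, (H, W) mask. Generic illustration only."""
    sq_err = (stereo_feat - lidar_feat) ** 2 * fg_mask       # broadcast over C
    return sq_err.sum() / (fg_mask.sum() * stereo_feat.shape[0] + 1e-6)

student = np.random.randn(64, 128, 128)                  # stereo BEV features
teacher = np.random.randn(64, 128, 128)                  # lidar BEV features
mask = (np.random.rand(128, 128) > 0.9).astype(float)    # sparse foreground
print(feature_imitation_loss(student, teacher, mask))
```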
arXiv Detail & Related papers (2021-08-18T17:24:40Z)
- Anchor-free 3D Single Stage Detector with Mask-Guided Attention for Point Cloud [79.39041453836793]
We develop a novel single-stage 3D detector for point clouds in an anchor-free manner.
The voxel-based sparse 3D feature volumes are converted into sparse 2D feature maps.
We propose an IoU-based detection confidence re-calibration scheme to improve the correlation between the detection confidence score and the accuracy of the bounding box regression.
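The re-calibration idea is that classification confidence alone does not reflect localization quality, so the network also predicts an IoU for each box and the two scores are fused before non-maximum suppression. One common form of such a fusion is sketched below; the exponent and function name are illustrative and not taken from this paper.

```python
import numpy as np

def recalibrate_scores(cls_scores, pred_ious, beta=0.5):
    """Fuse classification confidence with a predicted box IoU so the final
    score also reflects localization quality. beta = 0 keeps the raw class
    score; beta = 1 trusts the IoU prediction entirely. Generic sketch only."""
    return cls_scores ** (1.0 - beta) * np.clip(pred_ious, 0.0, 1.0) ** beta

# Two boxes with the same class confidence but different predicted IoUs:
# the poorly localized box is ranked lower after re-calibration.
print(recalibrate_scores(np.array([0.9, 0.9]), np.array([0.95, 0.4])))
```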
arXiv Detail & Related papers (2021-08-08T13:42:13Z)
- ST3D: Self-training for Unsupervised Domain Adaptation on 3D Object Detection [78.71826145162092]
We present a new domain adaptive self-training pipeline, named ST3D, for unsupervised domain adaptation on 3D object detection from point clouds.
Our ST3D achieves state-of-the-art performance on all evaluated datasets and even surpasses fully supervised results on KITTI 3D object detection benchmark.
arXiv Detail & Related papers (2021-03-09T10:51:24Z)
- Dynamic Edge Weights in Graph Neural Networks for 3D Object Detection [0.0]
We propose an attention-based feature aggregation technique in a graph neural network (GNN) for detecting objects in LiDAR scans.
In each layer of the GNN, in addition to the linear transformation that maps per-node input features to higher-level features, a per-node masked attention is also performed.
The experiments on KITTI dataset show that our method yields comparable results for 3D object detection.
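The per-node masked attention described here can be sketched as a GAT-style update: node features are linearly transformed, attention coefficients are computed only over each node's graph neighbours (the mask), and neighbour features are aggregated with those weights. The numpy sketch below follows that reading and is not the paper's exact architecture; the chain graph and dimensions are arbitrary.

```python
import numpy as np

def masked_attention_layer(x, adj, w, a):
    """One GAT-style layer. x: (N, F) node features, adj: (N, N) 0/1 adjacency
    mask, w: (F, F') linear map, a: (2*F',) attention vector. Attention weights
    are computed only over neighbours allowed by the mask."""
    h = x @ w                                            # linear transformation
    n = h.shape[0]
    # Pairwise logits e[i, j] = a . [h_i || h_j]
    e = (np.concatenate([np.repeat(h, n, axis=0),
                         np.tile(h, (n, 1))], axis=1) @ a).reshape(n, n)
    e = np.where(adj > 0, e, -1e9)                       # mask non-neighbours
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)     # per-node softmax
    return alpha @ h                                     # weighted aggregation

x = np.random.randn(4, 8)                                # 4 nodes, 8-D features
adj = np.eye(4) + np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)  # chain graph
out = masked_attention_layer(x, adj, np.random.randn(8, 16), np.random.randn(32))
print(out.shape)                                         # (4, 16)
```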
arXiv Detail & Related papers (2020-09-17T12:56:17Z)
- Expandable YOLO: 3D Object Detection from RGB-D Images [64.14512458954344]
This paper aims at constructing a lightweight object detector that takes a depth image and a color image from a stereo camera as input.
By extending the network architecture of YOLOv3 to 3D in the middle of the network, the detector can also produce outputs in the depth direction.
Intersection over Union (IoU) in 3D space is introduced to evaluate the accuracy of region extraction results.
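For axis-aligned boxes, the 3-D IoU reduces to intersection volume over union volume; the minimal sketch below assumes boxes given as (xmin, ymin, zmin, xmax, ymax, zmax) and does not handle the rotated boxes used by most lidar detectors.

```python
def iou_3d(box_a, box_b):
    """Axis-aligned 3-D IoU. Boxes are (xmin, ymin, zmin, xmax, ymax, zmax).
    Rotated boxes would require polygon clipping and are not covered here."""
    inter = 1.0
    for i in range(3):                       # overlap along x, y, z
        lo = max(box_a[i], box_b[i])
        hi = min(box_a[i + 3], box_b[i + 3])
        inter *= max(0.0, hi - lo)
    vol = lambda b: (b[3] - b[0]) * (b[4] - b[1]) * (b[5] - b[2])
    union = vol(box_a) + vol(box_b) - inter
    return inter / union if union > 0 else 0.0

print(iou_3d((0, 0, 0, 2, 2, 2), (1, 1, 1, 3, 3, 3)))    # 1 / 15 ≈ 0.067
```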
arXiv Detail & Related papers (2020-06-26T07:32:30Z)
- Stereo RGB and Deeper LIDAR Based Network for 3D Object Detection [40.34710686994996]
3D object detection is an emerging task in autonomous driving scenarios.
Previous works process 3D point clouds using either projection-based or voxel-based models.
We propose the Stereo RGB and Deeper LIDAR framework which can utilize semantic and spatial information simultaneously.
arXiv Detail & Related papers (2020-06-09T11:19:24Z)
- 3D Object Detection Method Based on YOLO and K-Means for Image and Point Clouds [1.9458156037869139]
Lidar-based 3D object detection and classification are essential tasks for autonomous driving.
This paper proposes a 3D object detection method based on point clouds and images.
arXiv Detail & Related papers (2020-04-21T04:32:36Z)
- DOPS: Learning to Detect 3D Objects and Predict their 3D Shapes [54.239416488865565]
We propose a fast single-stage 3D object detection method for LIDAR data.
The core novelty of our method is a fast, single-pass architecture that both detects objects in 3D and estimates their shapes.
We find that our proposed method outperforms the previous state of the art by about 5% on object detection in ScanNet scenes, and leads by 3.4% on the Open dataset.
arXiv Detail & Related papers (2020-04-02T17:48:50Z)
- DSGN: Deep Stereo Geometry Network for 3D Object Detection [79.16397166985706]
There is a large performance gap between image-based and LiDAR-based 3D object detectors.
Our method, called Deep Stereo Geometry Network (DSGN), significantly reduces this gap.
For the first time, we provide a simple and effective one-stage stereo-based 3D detection pipeline.
arXiv Detail & Related papers (2020-01-10T11:44:37Z)