FastOcc: Accelerating 3D Occupancy Prediction by Fusing the 2D
Bird's-Eye View and Perspective View
- URL: http://arxiv.org/abs/2403.02710v1
- Date: Tue, 5 Mar 2024 07:01:53 GMT
- Title: FastOcc: Accelerating 3D Occupancy Prediction by Fusing the 2D
Bird's-Eye View and Perspective View
- Authors: Jiawei Hou, Xiaoyan Li, Wenhao Guan, Gang Zhang, Di Feng, Yuheng Du,
Xiangyang Xue, Jian Pu
- Abstract summary: In autonomous driving, 3D occupancy prediction outputs voxel-wise status and semantic labels for a more comprehensive understanding of 3D scenes.
Recent researchers have extensively explored various aspects of this task, including view transformation techniques, ground-truth label generation, and elaborate network design.
A new method, dubbed FastOcc, is proposed to accelerate the model while keeping its accuracy.
Experiments on the Occ3D-nuScenes benchmark demonstrate that our FastOcc achieves a fast inference speed.
- Score: 46.81548000021799
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In autonomous driving, 3D occupancy prediction outputs voxel-wise status and
semantic labels for a more comprehensive understanding of 3D scenes compared
with traditional perception tasks, such as 3D object detection and bird's-eye
view (BEV) semantic segmentation. Researchers have recently explored
various aspects of this task, including view transformation techniques,
ground-truth label generation, and elaborate network design, aiming to achieve
superior performance. However, inference speed, which is crucial for deployment
on an autonomous vehicle, has been neglected. To this end, a new method, dubbed
FastOcc, is proposed. By carefully analyzing the accuracy and latency
contributions of four components, namely the input image resolution, image
backbone, view transformation, and occupancy prediction head, it is found that
the occupancy prediction head holds considerable potential for accelerating the
model while keeping its accuracy. To improve this component, the time-consuming
3D convolutional network is replaced with a novel residual-like architecture,
in which features are mainly processed by a lightweight 2D BEV convolutional
network and then compensated by integrating 3D voxel features interpolated from
the original image features. Experiments on the Occ3D-nuScenes benchmark
demonstrate that our FastOcc achieves state-of-the-art results with a fast
inference speed.
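
To make the core idea concrete, the following is a minimal PyTorch sketch of a residual-like occupancy head in the spirit of the abstract: a lightweight 2D BEV convolution does most of the feature processing, its output is lifted back to a voxel grid, and 3D voxel features interpolated from the image features are added as a residual before per-voxel classification. All class names, tensor shapes, and the channel-lifting scheme here are illustrative assumptions, not the authors' reference implementation.

    import torch
    import torch.nn as nn

    class ResidualOccHead(nn.Module):
        """Hypothetical FastOcc-style occupancy head: a cheap 2D BEV
        branch replaces heavy 3D convolutions, and interpolated voxel
        features are added back as a residual to recover height detail."""

        def __init__(self, bev_channels, voxel_channels, num_classes, z_bins):
            super().__init__()
            self.z_bins = z_bins
            self.voxel_channels = voxel_channels
            # Lightweight 2D branch over the flattened BEV features.
            self.bev_conv = nn.Sequential(
                nn.Conv2d(bev_channels, bev_channels, 3, padding=1),
                nn.BatchNorm2d(bev_channels),
                nn.ReLU(inplace=True),
                # Predict z_bins * voxel_channels channels per BEV cell,
                # i.e. lift the 2D features back onto a 3D voxel grid.
                nn.Conv2d(bev_channels, voxel_channels * z_bins, 1),
            )
            self.classifier = nn.Conv3d(voxel_channels, num_classes, 1)

        def forward(self, bev_feat, interp_voxel_feat):
            # bev_feat:          (B, C_bev, H, W), height-collapsed BEV map
            # interp_voxel_feat: (B, C_vox, Z, H, W), voxel features sampled
            #                    from the original image features
            b, _, h, w = bev_feat.shape
            lifted = self.bev_conv(bev_feat).view(
                b, self.voxel_channels, self.z_bins, h, w)
            fused = lifted + interp_voxel_feat  # residual compensation
            return self.classifier(fused)       # per-voxel semantic logits

    # Example with assumed shapes: a 200x200 BEV grid and 16 height bins.
    head = ResidualOccHead(bev_channels=256, voxel_channels=64,
                           num_classes=18, z_bins=16)
    logits = head(torch.randn(1, 256, 200, 200),
                  torch.randn(1, 64, 16, 200, 200))  # -> (1, 18, 16, 200, 200)

Replacing 3D convolutions with a single 2D pass over the height-collapsed grid is where the speed-up would come from; the interpolated voxel features restore the height information that the collapse discards.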
Related papers
- HENet: Hybrid Encoding for End-to-end Multi-task 3D Perception from Multi-view Cameras [45.739224968302565]
We present an end-to-end framework named HENet for multi-task 3D perception.
Specifically, we propose a hybrid image encoding network, using a large image encoder for short-term frames and a small image encoder for long-term temporal frames.
According to the characteristics of each perception task, we utilize BEV features of different grid sizes, independent BEV encoders, and task decoders for different tasks.
arXiv Detail & Related papers (2024-04-03T07:10:18Z) - Unified Spatio-Temporal Tri-Perspective View Representation for 3D Semantic Occupancy Prediction [6.527178779672975]
This study introduces S2TPVFormer for temporally coherent 3D semantic occupancy prediction.
We enrich the prior process by including temporal cues using a novel temporal cross-view hybrid attention mechanism.
Experimental evaluations demonstrate a substantial 4.1% improvement in mean Intersection over Union for 3D semantic occupancy prediction.
arXiv Detail & Related papers (2024-01-24T20:06:59Z) - 3DiffTection: 3D Object Detection with Geometry-Aware Diffusion Features [70.50665869806188]
3DiffTection is a state-of-the-art method for 3D object detection from single images.
We fine-tune a diffusion model to perform novel view synthesis conditioned on a single image.
We further train the model on target data with detection supervision.
arXiv Detail & Related papers (2023-11-07T23:46:41Z) - SOGDet: Semantic-Occupancy Guided Multi-view 3D Object Detection [19.75965521357068]
We propose a novel approach called SOGDet (Semantic-Occupancy Guided Multi-view 3D Object Detection) to improve the accuracy of 3D object detection.
Our results show that SOGDet consistently enhances the performance of three baseline methods in terms of nuScenes Detection Score (NDS) and mean Average Precision (mAP).
This indicates that combining 3D object detection with 3D semantic occupancy leads to a more comprehensive perception of the 3D environment, thereby helping to build more robust autonomous driving systems.
arXiv Detail & Related papers (2023-08-26T07:38:21Z) - DETR4D: Direct Multi-View 3D Object Detection with Sparse Attention [50.11672196146829]
3D object detection with surround-view images is an essential task for autonomous driving.
We propose DETR4D, a Transformer-based framework that explores sparse attention and direct feature query for 3D object detection in multi-view images.
arXiv Detail & Related papers (2022-12-15T14:18:47Z) - AGO-Net: Association-Guided 3D Point Cloud Object Detection Network [86.10213302724085]
We propose a novel 3D detection framework that associates intact features for objects via domain adaptation.
We achieve new state-of-the-art performance on the KITTI 3D detection benchmark in both accuracy and speed.
arXiv Detail & Related papers (2022-08-24T16:54:38Z) - CVFNet: Real-time 3D Object Detection by Learning Cross View Features [11.402076835949824]
We present CVFNet, a real-time view-based single-stage 3D object detector.
We first propose a novel Point-Range feature fusion module that deeply integrates point and range view features in multiple stages.
Then, a special Slice Pillar is designed to preserve 3D geometry when transforming the deep point-view features into the bird's-eye view.
arXiv Detail & Related papers (2022-03-13T06:23:18Z) - Improving 3D Object Detection with Channel-wise Transformer [58.668922561622466]
We propose a two-stage 3D object detection framework (CT3D) with minimal hand-crafted design.
CT3D simultaneously performs proposal-aware embedding and channel-wise context aggregation.
It achieves an AP of 81.77% in the moderate car category on the KITTI test 3D detection benchmark.
arXiv Detail & Related papers (2021-08-23T02:03:40Z) - Fast and Furious: Real Time End-to-End 3D Detection, Tracking and Motion
Forecasting with a Single Convolutional Net [93.51773847125014]
We propose a novel deep neural network that is able to jointly reason about 3D detection, tracking and motion forecasting given data captured by a 3D sensor.
Our approach performs 3D convolutions across space and time over a bird's eye view representation of the 3D world.
arXiv Detail & Related papers (2020-12-22T22:43:35Z) - Searching Efficient 3D Architectures with Sparse Point-Voxel Convolution [34.713667358316286]
Self-driving cars need to understand 3D scenes efficiently and accurately in order to drive safely.
Existing 3D perception models are not able to recognize small instances very well due to the low-resolution voxelization and aggressive downsampling.
We propose Sparse Point-Voxel Convolution (SPVConv), a lightweight 3D module that equips the vanilla Sparse Convolution with a high-resolution point-based branch; a minimal sketch of the idea follows this list.
arXiv Detail & Related papers (2020-07-31T14:27:27Z)
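
To complement the SPVConv entry above, here is a minimal PyTorch sketch of the point-voxel idea: a coarse voxel branch supplies neighborhood context while a per-point MLP keeps high-resolution detail. For brevity this stand-in uses a dense 3D convolution and nearest-voxel gathering in place of true sparse convolution and trilinear devoxelization; all names, shapes, and the grid size are assumptions.

    import torch
    import torch.nn as nn

    class PointVoxelBlock(nn.Module):
        """Illustrative two-branch block in the spirit of SPVConv: a
        coarse voxel branch (dense Conv3d standing in for sparse
        convolution) fused with a high-resolution point-wise MLP."""

        def __init__(self, channels, grid):
            super().__init__()
            self.grid = grid
            self.voxel_branch = nn.Conv3d(channels, channels, 3, padding=1)
            self.point_branch = nn.Sequential(
                nn.Linear(channels, channels), nn.ReLU(inplace=True),
                nn.Linear(channels, channels))

        def forward(self, coords, feats):
            # coords: (N, 3) point coordinates normalized to [0, 1)
            # feats:  (N, C) per-point features
            n, c = feats.shape
            g = self.grid
            idx = (coords * g).long().clamp_(0, g - 1)
            flat = (idx[:, 0] * g + idx[:, 1]) * g + idx[:, 2]
            # Voxelize: average the features of all points in each cell.
            vox = feats.new_zeros(g * g * g, c)
            cnt = feats.new_zeros(g * g * g, 1)
            vox.index_add_(0, flat, feats)
            cnt.index_add_(0, flat, feats.new_ones(n, 1))
            vox = (vox / cnt.clamp(min=1)).t().reshape(1, c, g, g, g)
            vox = self.voxel_branch(vox)  # coarse neighborhood context
            # Devoxelize (nearest-voxel gather) and fuse with point branch.
            vox_feats = vox.reshape(c, -1).t()[flat]
            return vox_feats + self.point_branch(feats)

    # Example: 4096 random points with 32-dim features on a 16^3 grid.
    block = PointVoxelBlock(channels=32, grid=16)
    out = block(torch.rand(4096, 3), torch.randn(4096, 32))  # -> (4096, 32)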