OccFormer: Dual-path Transformer for Vision-based 3D Semantic Occupancy Prediction
- URL: http://arxiv.org/abs/2304.05316v1
- Date: Tue, 11 Apr 2023 16:15:50 GMT
- Title: OccFormer: Dual-path Transformer for Vision-based 3D Semantic Occupancy Prediction
- Authors: Yunpeng Zhang, Zheng Zhu, Dalong Du
- Abstract summary: OccFormer is a dual-path transformer network to process the 3D volume for semantic occupancy prediction.
It achieves a long-range, dynamic, and efficient encoding of the camera-generated 3D voxel features.
- Score: 16.66987810790077
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The vision-based perception for autonomous driving has undergone a
transformation from the bird's-eye-view (BEV) representations to the 3D semantic
occupancy. Compared with the BEV planes, the 3D semantic occupancy further
provides structural information along the vertical direction. This paper
presents OccFormer, a dual-path transformer network to effectively process the
3D volume for semantic occupancy prediction. OccFormer achieves a long-range,
dynamic, and efficient encoding of the camera-generated 3D voxel features. It
is obtained by decomposing the heavy 3D processing into the local and global
transformer pathways along the horizontal plane. For the occupancy decoder, we
adapt the vanilla Mask2Former for 3D semantic occupancy by proposing
preserve-pooling and class-guided sampling, which notably mitigate the sparsity
and class imbalance. Experimental results demonstrate that OccFormer
significantly outperforms existing methods for semantic scene completion on
the SemanticKITTI dataset and for LiDAR semantic segmentation on the nuScenes dataset.
Code is available at https://github.com/zhangyp15/OccFormer.
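The dual-path decomposition described in the abstract can be illustrated with a minimal PyTorch-style sketch: the camera-generated voxel volume is encoded along the horizontal plane by a local pathway (per-slice self-attention) and a global pathway (self-attention over height-collapsed BEV features), and the two outputs are fused. The module names, the average-pooling over height, and the residual fusion are illustrative assumptions rather than the authors' exact design; see the repository above for the actual implementation.

```python
# Minimal sketch of a dual-path block, assuming voxel features of shape (B, C, Z, H, W).
# All design details here (fusion by addition, mean-pooling over height) are assumptions.
import torch
import torch.nn as nn


class DualPathBlock(nn.Module):
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # Local pathway: attention within each horizontal slice of the volume.
        self.local_attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        # Global pathway: attention over the BEV plane after collapsing height.
        self.global_attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, voxel_feats: torch.Tensor) -> torch.Tensor:
        b, c, z, h, w = voxel_feats.shape

        # Local path: treat each height slice as a separate sequence of H*W tokens.
        local = voxel_feats.permute(0, 2, 3, 4, 1).reshape(b * z, h * w, c)
        local, _ = self.local_attn(local, local, local)
        local = local.reshape(b, z, h, w, c)

        # Global path: collapse height to BEV tokens, attend, then broadcast back.
        bev = voxel_feats.mean(dim=2).permute(0, 2, 3, 1).reshape(b, h * w, c)
        bev, _ = self.global_attn(bev, bev, bev)
        bev = bev.reshape(b, 1, h, w, c)

        # Fuse the two pathways with a simple residual addition.
        fused = self.norm(local + bev + voxel_feats.permute(0, 2, 3, 4, 1))
        return fused.permute(0, 4, 1, 2, 3)  # back to (B, C, Z, H, W)


if __name__ == "__main__":
    x = torch.randn(1, 64, 16, 32, 32)   # toy (B, C, Z, H, W) volume
    print(DualPathBlock(64)(x).shape)     # torch.Size([1, 64, 16, 32, 32])
```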
Related papers
- SliceOcc: Indoor 3D Semantic Occupancy Prediction with Vertical Slice Representation [50.420711084672966]
We present SliceOcc, an RGB camera-based model specifically tailored for indoor 3D semantic occupancy prediction.
Experimental results on the EmbodiedScan dataset demonstrate that SliceOcc achieves a mIoU of 15.45% across 81 indoor categories.
arXiv Detail & Related papers (2025-01-28T03:41:24Z)
- Bootstraping Clustering of Gaussians for View-consistent 3D Scene Understanding [59.51535163599723]
FreeGS is an unsupervised semantic-embedded 3DGS framework that achieves view-consistent 3D scene understanding without the need for 2D labels.
We show that FreeGS performs comparably to state-of-the-art methods while avoiding the complex data preprocessing workload.
arXiv Detail & Related papers (2024-11-29T08:52:32Z)
- ALOcc: Adaptive Lifting-based 3D Semantic Occupancy and Cost Volume-based Flow Prediction [89.89610257714006]
Existing methods prioritize higher accuracy to cater to the demands of these tasks.
We introduce a series of targeted improvements for 3D semantic occupancy prediction and flow estimation.
Our proposed architecture, named ALOcc, achieves an optimal trade-off between speed and accuracy.
arXiv Detail & Related papers (2024-11-12T11:32:56Z)
- GEOcc: Geometrically Enhanced 3D Occupancy Network with Implicit-Explicit Depth Fusion and Contextual Self-Supervision [49.839374549646884]
This paper presents GEOcc, a Geometric-Enhanced Occupancy network tailored for vision-only surround-view perception.
Our approach achieves State-Of-The-Art performance on the Occ3D-nuScenes dataset with the lowest required image resolution and the most lightweight image backbone.
arXiv Detail & Related papers (2024-05-17T07:31:20Z)
- Volumetric Environment Representation for Vision-Language Navigation [66.04379819772764]
Vision-language navigation (VLN) requires an agent to navigate through a 3D environment based on visual observations and natural language instructions.
We introduce a Volumetric Environment Representation (VER), which voxelizes the physical world into structured 3D cells.
VER predicts 3D occupancy, 3D room layout, and 3D bounding boxes jointly.
arXiv Detail & Related papers (2024-03-21T06:14:46Z)
- Large Generative Model Assisted 3D Semantic Communication [51.17527319441436]
We propose a Generative AI Model assisted 3D SC (GAM-3DSC) system.
First, we introduce a 3D Semantic Extractor (3DSE) to extract key semantics from a 3D scenario based on user requirements.
We then present an Adaptive Semantic Compression Model (ASCM) for encoding these multi-perspective images.
Finally, we design a conditional Generative Adversarial Network and Diffusion Model-aided Channel Estimation (GDCE) to estimate and refine the Channel State Information (CSI) of physical channels.
arXiv Detail & Related papers (2024-03-09T03:33:07Z)
- FastOcc: Accelerating 3D Occupancy Prediction by Fusing the 2D Bird's-Eye View and Perspective View [46.81548000021799]
In autonomous driving, 3D occupancy prediction outputs voxel-wise status and semantic labels for more comprehensive understandings of 3D scenes.
Recent researchers have extensively explored various aspects of this task, including view transformation techniques, ground-truth label generation, and elaborate network design.
A new method, dubbed FastOcc, is proposed to accelerate the model while keeping its accuracy.
Experiments on the Occ3D-nuScenes benchmark demonstrate that our FastOcc achieves a fast inference speed.
arXiv Detail & Related papers (2024-03-05T07:01:53Z)
- InverseMatrixVT3D: An Efficient Projection Matrix-Based Approach for 3D Occupancy Prediction [11.33083039877258]
InverseMatrixVT3D is an efficient method for transforming multi-view image features into 3D feature volumes for semantic occupancy prediction.
We introduce a sparse matrix handling technique for the projection matrices to optimize GPU memory usage.
Our approach achieves the top performance in detecting vulnerable road users (VRU), crucial for autonomous driving and road safety.
arXiv Detail & Related papers (2024-01-23T01:11:10Z)
- MeT: A Graph Transformer for Semantic Segmentation of 3D Meshes [10.667492516216887]
We propose a transformer-based method for semantic segmentation of 3D meshes.
We perform positional encoding by means of the Laplacian eigenvectors of the adjacency matrix (a minimal sketch follows after this list).
We show how the proposed approach yields state-of-the-art performance on semantic segmentation of 3D meshes.
arXiv Detail & Related papers (2023-07-03T15:45:14Z)
- Occ3D: A Large-Scale 3D Occupancy Prediction Benchmark for Autonomous Driving [34.368848580725576]
We develop a label generation pipeline that produces dense, visibility-aware labels for any given scene.
This pipeline comprises three stages: voxel densification, reasoning, and image-guided voxel refinement.
We propose a new model, dubbed Coarse-to-Fine Occupancy (CTF-Occ) network, which demonstrates superior performance on the Occ3D benchmarks.
arXiv Detail & Related papers (2023-04-27T17:40:08Z)
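For the Laplacian positional encoding mentioned in the MeT entry above, here is a minimal sketch that builds a graph Laplacian from a mesh adjacency matrix and uses its low-frequency eigenvectors as per-node positional codes. The choice of the combinatorial (unnormalized) Laplacian, the number of eigenvectors k, and the toy cycle graph are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of Laplacian-eigenvector positional encoding for a mesh/graph.
import numpy as np


def laplacian_positional_encoding(adjacency: np.ndarray, k: int = 4) -> np.ndarray:
    """Return k non-trivial Laplacian eigenvectors as per-node positional features."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency                # combinatorial graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(laplacian)  # eigenvalues in ascending order
    # Skip the first (constant) eigenvector and keep the next k as encodings.
    return eigvecs[:, 1:k + 1]


if __name__ == "__main__":
    # Toy 4-node cycle graph standing in for mesh connectivity.
    A = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
    pe = laplacian_positional_encoding(A, k=2)
    print(pe.shape)  # (4, 2) -> one 2-dim positional code per node
```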
This list is automatically generated from the titles and abstracts of the papers on this site.