Anisotropic Convolutional Networks for 3D Semantic Scene Completion
- URL: http://arxiv.org/abs/2004.02122v1
- Date: Sun, 5 Apr 2020 07:57:02 GMT
- Title: Anisotropic Convolutional Networks for 3D Semantic Scene Completion
- Authors: Jie Li, Kai Han, Peng Wang, Yu Liu, Xia Yuan
- Abstract summary: Semantic scene completion (SSC) tries to simultaneously infer the occupancy and semantic labels for a scene from a single depth and/or RGB image.
We propose a novel module called anisotropic convolution, which offers flexibility and power unattainable by competing methods.
In contrast to the standard 3D convolution, which is limited to a fixed 3D receptive field, our module is capable of modeling dimensional anisotropy voxel-wise.
- Score: 24.9671648682339
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: As a voxel-wise labeling task, semantic scene completion (SSC) tries to
simultaneously infer the occupancy and semantic labels for a scene from a
single depth and/or RGB image. The key challenge for SSC is how to effectively
take advantage of the 3D context to model various objects or stuff with severe
variations in shape, layout and visibility. To handle such variations, we
propose a novel module called anisotropic convolution, which offers flexibility
and power unattainable by competing methods such as the standard 3D convolution
and some of its variants. In contrast to the standard 3D convolution, which is
limited to a fixed 3D receptive field, our module is capable of modeling
dimensional anisotropy voxel-wise. The basic idea is to enable an anisotropic
3D receptive field by decomposing a 3D convolution into three consecutive 1D
convolutions, with the kernel size of each such 1D convolution adaptively
determined on the fly. By stacking multiple such anisotropic convolution
modules, the voxel-wise modeling capability can be further enhanced while
keeping the number of model parameters under control.
Extensive experiments on two SSC benchmarks, NYU-Depth-v2 and NYUCAD, show the
superior performance of the proposed method. Our code is available at
https://waterljwant.github.io/SSC/
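Below is a minimal sketch of the mechanism the abstract describes, assuming a PyTorch-style implementation: a 3D convolution is decomposed into three consecutive 1D convolutions (one per axis), and for each axis the receptive field is chosen voxel-wise by softly weighting the outputs of several candidate kernel sizes. All names (AxisAdaptiveConv, AnisotropicConvModule, candidate_ks) and details such as the softmax selector and the residual connection are illustrative assumptions, not the authors' released code (see the link above for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AxisAdaptiveConv(nn.Module):
    """1D convolution along one axis of a 5D tensor (N, C, D, H, W) whose
    receptive field is selected per voxel from a set of candidate kernel sizes.
    This is a sketch of the idea, not the paper's exact module."""

    def __init__(self, channels, axis, candidate_ks=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList()
        for k in candidate_ks:
            # kernel and padding are 1D along the chosen axis, 1 elsewhere
            kernel, pad = [1, 1, 1], [0, 0, 0]
            kernel[axis - 2] = k          # axis: 2 = D, 3 = H, 4 = W
            pad[axis - 2] = k // 2
            self.branches.append(
                nn.Conv3d(channels, channels, tuple(kernel), padding=tuple(pad))
            )
        # predicts, per voxel, a soft selection over the candidate kernel sizes
        self.selector = nn.Conv3d(channels, len(candidate_ks), kernel_size=1)

    def forward(self, x):
        weights = F.softmax(self.selector(x), dim=1)           # (N, K, D, H, W)
        outs = torch.stack([b(x) for b in self.branches], 1)   # (N, K, C, D, H, W)
        return (weights.unsqueeze(2) * outs).sum(dim=1)        # (N, C, D, H, W)


class AnisotropicConvModule(nn.Module):
    """Three consecutive axis-wise adaptive 1D convolutions (D, then H, then W)."""

    def __init__(self, channels):
        super().__init__()
        self.convs = nn.Sequential(
            AxisAdaptiveConv(channels, axis=2),
            AxisAdaptiveConv(channels, axis=3),
            AxisAdaptiveConv(channels, axis=4),
        )

    def forward(self, x):
        # residual connection (an assumption here) keeps stacking stable
        return x + self.convs(x)


if __name__ == "__main__":
    feat = torch.randn(1, 16, 20, 30, 30)          # toy voxel feature volume
    print(AnisotropicConvModule(16)(feat).shape)   # torch.Size([1, 16, 20, 30, 30])
```

Stacking several such modules, as the abstract suggests, grows the effective receptive field anisotropically, while each module only adds 1D kernels and a lightweight 1x1x1 selector rather than full 3D kernels.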
Related papers
- Any2Point: Empowering Any-modality Large Models for Efficient 3D Understanding [83.63231467746598]
We introduce Any2Point, a parameter-efficient method to empower any-modality large models (vision, language, audio) for 3D understanding.
We propose a 3D-to-any (1D or 2D) virtual projection strategy that correlates the input 3D points to the original 1D or 2D positions within the source modality.
arXiv Detail & Related papers (2024-04-11T17:59:45Z)
- PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm [114.47216525866435]
We introduce a novel universal 3D pre-training framework designed to facilitate the acquisition of efficient 3D representation.
For the first time, PonderV2 achieves state-of-the-art performance on 11 indoor and outdoor benchmarks, implying its effectiveness.
arXiv Detail & Related papers (2023-10-12T17:59:57Z)
- NDC-Scene: Boost Monocular 3D Semantic Scene Completion in Normalized Device Coordinates Space [77.6067460464962]
Monocular 3D Semantic Scene Completion (SSC) has garnered significant attention in recent years due to its potential to predict complex semantics and geometry shapes from a single image, requiring no 3D inputs.
We identify several critical issues in current state-of-the-art methods, including the Feature Ambiguity of projected 2D features in the ray to the 3D space, the Pose Ambiguity of the 3D convolution, and the Imbalance in the 3D convolution across different depth levels.
We devise a novel Normalized Device Coordinates scene completion network (NDC-Scene) that directly extends the 2
arXiv Detail & Related papers (2023-09-26T02:09:52Z)
- MoDA: Modeling Deformable 3D Objects from Casual Videos [84.29654142118018]
We propose neural dual quaternion blend skinning (NeuDBS) to achieve 3D point deformation without skin-collapsing artifacts.
In the endeavor to register 2D pixels across different frames, we establish a correspondence between canonical feature embeddings that encodes 3D points within the canonical space.
Our approach can reconstruct 3D models for humans and animals with better qualitative and quantitative performance than state-of-the-art methods.
arXiv Detail & Related papers (2023-04-17T13:49:04Z)
- Group Shift Pointwise Convolution for Volumetric Medical Image Segmentation [31.72090839643412]
We introduce a novel Group Shift Pointwise Convolution (GSP-Conv) to improve the effectiveness and efficiency of 3D convolutions.
GSP-Conv simplifies 3D convolutions into pointwise ones with 1x1x1 kernels, which dramatically reduces the number of model parameters and FLOPs.
Results show that our method, with substantially decreased model complexity, achieves comparable or even better performance than models employing 3D convolutions.
arXiv Detail & Related papers (2021-09-26T15:27:33Z)
- Cylindrical and Asymmetrical 3D Convolution Networks for LiDAR-based Perception [122.53774221136193]
State-of-the-art methods for driving-scene LiDAR-based perception often project the point clouds to 2D space and then process them via 2D convolution.
A natural remedy is to utilize the 3D voxelization and 3D convolution network.
We propose a new framework for the outdoor LiDAR segmentation, where cylindrical partition and asymmetrical 3D convolution networks are designed to explore the 3D geometric pattern.
arXiv Detail & Related papers (2021-09-12T06:25:11Z)
- Learning Local Neighboring Structure for Robust 3D Shape Representation [143.15904669246697]
Representation learning for 3D meshes is important in many computer vision and graphics applications.
We propose a local structure-aware anisotropic convolutional operation (LSA-Conv).
Our model produces significant improvement in 3D shape reconstruction compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-04-21T13:40:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.