CasFusionNet: A Cascaded Network for Point Cloud Semantic Scene
Completion by Dense Feature Fusion
- URL: http://arxiv.org/abs/2211.13702v1
- Date: Thu, 24 Nov 2022 16:36:42 GMT
- Title: CasFusionNet: A Cascaded Network for Point Cloud Semantic Scene
Completion by Dense Feature Fusion
- Authors: Jinfeng Xu, Xianzhi Li, Yuan Tang, Qiao Yu, Yixue Hao, Long Hu, Min
Chen
- Abstract summary: We present CasFusionNet, a novel cascaded network for point cloud semantic scene completion by dense feature fusion.
We organize the above three modules via dense feature fusion within each level and cascade a total of four levels, also employing feature fusion between levels to fully exploit the available information.
- Score: 14.34344002500153
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semantic scene completion (SSC) aims to complete a partial 3D scene and
predict its semantics simultaneously. Most existing works adopt voxel
representations and thus suffer from growing memory and computation costs as
the voxel resolution increases. Though a few works attempt to solve SSC from
the perspective of 3D point clouds, they have not fully exploited the
correlation and complementarity between the two tasks of scene completion and
semantic segmentation. In our work, we present CasFusionNet, a novel cascaded
network for point cloud semantic scene completion by dense feature fusion.
Specifically, we design (i) a global completion module (GCM) to produce an
upsampled and completed but coarse point set, (ii) a semantic segmentation
module (SSM) to predict the per-point semantic labels of the completed points
generated by GCM, and (iii) a local refinement module (LRM) to further refine
the coarse completed points and the associated labels from a local perspective.
We organize the above three modules via dense feature fusion within each level
and cascade a total of four levels, also employing feature fusion between
levels to fully exploit the available information. Both quantitative and
qualitative results on the two point-based datasets we compiled validate the
effectiveness and superiority of our CasFusionNet over state-of-the-art methods
in terms of both scene completion and semantic segmentation. The code and
datasets are available at: https://github.com/JinfengX/CasFusionNet.
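To make the cascaded design concrete, below is a minimal PyTorch-style sketch of how one cascade level (GCM -> SSM -> LRM with dense feature fusion) might be organized. Only the module names and their roles come from the abstract; the class name `CascadeLevel`, the feature widths, the upsampling ratio, and the concatenation-based fusion are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of one CasFusionNet-style cascade level.
# Assumptions: point-wise shared MLPs, concatenation-based dense fusion,
# and a fixed upsampling ratio; none of this is taken from the released code.
import torch
import torch.nn as nn


def shared_mlp(dims):
    """Point-wise shared MLP over (B, C, N) tensors; no activation after the last layer."""
    layers = []
    for i in range(len(dims) - 1):
        layers.append(nn.Conv1d(dims[i], dims[i + 1], 1))
        if i < len(dims) - 2:
            layers += [nn.BatchNorm1d(dims[i + 1]), nn.ReLU()]
    return nn.Sequential(*layers)


class CascadeLevel(nn.Module):
    """One level: GCM completes, SSM labels, LRM refines; their outputs are fused densely."""

    def __init__(self, feat_dim=128, num_classes=12, up_ratio=2):
        super().__init__()
        self.up_ratio = up_ratio
        self.gcm = shared_mlp([feat_dim, feat_dim, up_ratio * 3])          # coarse completion offsets
        self.ssm = shared_mlp([feat_dim + 3, feat_dim, num_classes])       # per-point semantic logits
        self.lrm = shared_mlp([feat_dim + 3 + num_classes, feat_dim, 3])   # local coordinate refinement
        self.fuse = shared_mlp([feat_dim + 3 + num_classes, feat_dim])     # features passed to the next level

    def forward(self, xyz, feat):
        # xyz: (B, 3, N) input points of this level; feat: (B, C, N) per-point features.
        B, _, N = xyz.shape
        # (i) GCM: upsample and complete, producing a coarse but denser point set.
        offsets = self.gcm(feat).reshape(B, 3, self.up_ratio * N)
        coarse = xyz.repeat(1, 1, self.up_ratio) + offsets
        feat_up = feat.repeat(1, 1, self.up_ratio)
        # (ii) SSM: predict semantics of the completed points, reusing GCM features.
        logits = self.ssm(torch.cat([feat_up, coarse], dim=1))
        # (iii) LRM: refine coordinates locally, conditioned on geometry and semantics.
        refined = coarse + self.lrm(torch.cat([feat_up, coarse, logits], dim=1))
        # Dense fusion: combine the three modules' outputs into the next level's features.
        next_feat = self.fuse(torch.cat([feat_up, refined, logits], dim=1))
        return refined, logits, next_feat
```

Cascading four such levels, with each level's refined points and fused features fed into the next, would mirror the four-level design and inter-level fusion described in the abstract.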
Related papers
- PointSSC: A Cooperative Vehicle-Infrastructure Point Cloud Benchmark for
Semantic Scene Completion [4.564209472726044]
Semantic Scene Completion aims to jointly generate space occupancies and semantic labels for complex 3D scenes.
PointSSC is the first cooperative vehicle-infrastructure point cloud benchmark for semantic scene completion.
arXiv Detail & Related papers (2023-09-22T08:39:16Z)
- Variational Relational Point Completion Network for Robust 3D
Classification [59.80993960827833]
Existing point cloud completion methods tend to generate global shape skeletons and hence lack fine local details.
This paper proposes a variational framework, the Variational Relational point Completion Network (VRCNet), with two appealing properties.
VRCNet shows great generalizability and robustness on real-world point cloud scans.
arXiv Detail & Related papers (2023-04-18T17:03:20Z)
- GFNet: Geometric Flow Network for 3D Point Cloud Semantic Segmentation [91.15865862160088]
We introduce a geometric flow network (GFNet) to explore the geometric correspondence between different views in an align-before-fuse manner.
Specifically, we devise a novel geometric flow module (GFM) to bidirectionally align and propagate the complementary information across different views.
arXiv Detail & Related papers (2022-07-06T11:48:08Z)
- SemAffiNet: Semantic-Affine Transformation for Point Cloud Segmentation [94.11915008006483]
We propose SemAffiNet for point cloud semantic segmentation.
We conduct extensive experiments on the ScanNetV2 and NYUv2 datasets.
arXiv Detail & Related papers (2022-05-26T17:00:23Z)
- Completing Partial Point Clouds with Outliers by Collaborative
Completion and Segmentation [22.521376982725517]
We propose an end-to-end network, named CS-Net, to complete point clouds contaminated by noise or containing outliers.
Our comprehensive experiments and comparisons against state-of-the-art completion methods demonstrate our superiority.
arXiv Detail & Related papers (2022-03-18T07:31:41Z)
- SCSS-Net: Superpoint Constrained Semi-supervised Segmentation Network
for 3D Indoor Scenes [6.3364439467281315]
We propose a superpoint-constrained semi-supervised segmentation network for 3D point clouds, named SCSS-Net.
Specifically, we use the pseudo labels predicted from unlabeled point clouds for self-training, and the superpoints produced by geometry-based and color-based Region Growing algorithms are combined to modify and delete pseudo labels with low confidence.
arXiv Detail & Related papers (2021-07-08T04:43:21Z)
- Similarity-Aware Fusion Network for 3D Semantic Segmentation [87.51314162700315]
We propose a similarity-aware fusion network (SAFNet) to adaptively fuse 2D images and 3D point clouds for 3D semantic segmentation.
We employ a late fusion strategy where we first learn the geometric and contextual similarities between the input and back-projected (from 2D pixels) point clouds.
We show that SAFNet significantly outperforms existing state-of-the-art fusion-based approaches across various levels of data integrity.
arXiv Detail & Related papers (2021-07-04T09:28:18Z)
- Semantic Segmentation for Real Point Cloud Scenes via Bilateral
Augmentation and Adaptive Fusion [38.05362492645094]
Real point cloud scenes can intuitively capture complex surroundings in the real world, but due to the raw nature of 3D data, they are very challenging for machine perception.
We concentrate on the essential visual task, semantic segmentation, for large-scale point cloud data collected in reality.
By comparing with state-of-the-art networks on three different benchmarks, we demonstrate the effectiveness of our network.
arXiv Detail & Related papers (2021-03-12T04:13:20Z)
- Cascaded Refinement Network for Point Cloud Completion [74.80746431691938]
We propose a cascaded refinement network together with a coarse-to-fine strategy to synthesize the detailed object shapes.
By considering the local details of the partial input together with the global shape information, we can preserve the existing details in the incomplete point set.
We also design a patch discriminator that guarantees every local area has the same pattern as the ground truth, so as to learn the complicated point distribution.
arXiv Detail & Related papers (2020-04-07T13:03:29Z)
- PointGroup: Dual-Set Point Grouping for 3D Instance Segmentation [111.7241018610573]
We present PointGroup, a new end-to-end bottom-up architecture for instance segmentation.
We design a two-branch network to extract point features and predict semantic labels and offsets, for shifting each point towards its respective instance centroid.
A clustering component is followed to utilize both the original and offset-shifted point coordinate sets, taking advantage of their complementary strength.
We conduct extensive experiments on two challenging datasets, ScanNet v2 and S3DIS, on which our method achieves the highest performance, 63.6% and 64.0%, compared to 54.9% and 54.4% achieved by the former best solutions.
arXiv Detail & Related papers (2020-04-03T16:26:37Z)
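The dual-set grouping step summarized for PointGroup above can be illustrated with a short sketch: cluster the points of each semantic class in both the original and the offset-shifted coordinate space and keep both proposal sets. DBSCAN, the radius, and the minimum cluster size are stand-in assumptions here; the original method uses its own clustering procedure and a proposal scoring network rather than scikit-learn.

```python
# Illustrative sketch of dual-set grouping with predicted semantics and offsets.
# DBSCAN and the thresholds below are stand-ins, not PointGroup's actual clustering.
import numpy as np
from sklearn.cluster import DBSCAN


def dual_set_grouping(xyz, offsets, labels, radius=0.03, min_points=50):
    """xyz, offsets: (N, 3) arrays; labels: (N,) predicted semantic class per point.
    Returns a list of instance proposals, each an array of point indices."""
    shifted = xyz + offsets                  # points shifted toward their predicted instance centroids
    proposals = []
    for coords in (xyz, shifted):            # dual coordinate sets: original and offset-shifted
        for cls in np.unique(labels):        # group points within each semantic class
            idx = np.where(labels == cls)[0]
            if idx.size < min_points:
                continue
            cluster_ids = DBSCAN(eps=radius, min_samples=min_points).fit_predict(coords[idx])
            for cid in np.unique(cluster_ids):
                if cid == -1:                # DBSCAN noise label; such points form no proposal
                    continue
                proposals.append(idx[cluster_ids == cid])
    return proposals
```

As the summary notes, the two coordinate sets are complementary: shifted coordinates pull points of the same instance together, helping to separate adjacent same-class objects, while the original coordinates are unaffected by errors in the predicted offsets.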