PSSNet: Planarity-sensible Semantic Segmentation of Large-scale Urban
Meshes
- URL: http://arxiv.org/abs/2202.03209v2
- Date: Wed, 9 Feb 2022 09:22:46 GMT
- Title: PSSNet: Planarity-sensible Semantic Segmentation of Large-scale Urban
Meshes
- Authors: Weixiao Gao, Liangliang Nan, Bas Boom, Hugo Ledoux
- Abstract summary: We introduce a novel deep learning-based framework to interpret 3D urban scenes represented as textured meshes.
Our framework achieves semantic segmentation in two steps: planarity-sensible over-segmentation followed by semantic classification.
Our approach outperforms the state-of-the-art methods in terms of boundary quality and mean IoU (intersection over union).
- Score: 3.058685580689605
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We introduce a novel deep learning-based framework to interpret 3D urban
scenes represented as textured meshes. Based on the observation that object
boundaries typically align with the boundaries of planar regions, our framework
achieves semantic segmentation in two steps: planarity-sensible
over-segmentation followed by semantic classification. The over-segmentation
step generates an initial set of mesh segments that capture the planar and
non-planar regions of urban scenes. In the subsequent classification step, we
construct a graph that encodes geometric and photometric features of the
segments in its nodes and multi-scale contextual features in its edges. The
final semantic segmentation is obtained by classifying the segments using a
graph convolutional network. Experiments and comparisons on a large semantic
urban mesh benchmark demonstrate that our approach outperforms the
state-of-the-art methods in terms of boundary quality and mean IoU
(intersection over union). In addition, we introduce several new metrics for
evaluating mesh over-segmentation methods dedicated to semantic segmentation,
and our proposed over-segmentation approach outperforms state-of-the-art
methods on all metrics. Our source code will be released when the paper is
accepted.
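To make the classification step concrete, below is a minimal sketch of a two-layer graph convolutional network that labels pre-computed segment features over a segment-adjacency graph. The class name, feature dimensions, and the placeholder adjacency are all hypothetical; this is not the authors' implementation, which additionally encodes multi-scale contextual features on the graph edges.

```python
# Minimal sketch (hypothetical, not the authors' code): classify mesh segments
# with a two-layer GCN. Assumes per-segment geometric/photometric features and
# a normalized segment-adjacency matrix were already extracted from the mesh.
import torch
import torch.nn as nn

class SegmentGCN(nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int, num_classes: int):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, num_classes)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim) segment features; adj: (N, N) adjacency, normalized
        # and with self-loops. Each layer mixes neighboring segment features,
        # then applies a learned linear map.
        h = torch.relu(self.fc1(adj @ x))
        return self.fc2(adj @ h)  # (N, num_classes) per-segment logits

# Toy usage: 6 segments, 32-dim features, 5 semantic classes.
feats = torch.randn(6, 32)
adj = torch.eye(6)  # placeholder: self-loops only
labels = SegmentGCN(32, 64, 5)(feats, adj).argmax(dim=1)
```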
Related papers
- SPIN: Hierarchical Segmentation with Subpart Granularity in Natural Images [17.98848062686217]
We introduce the first hierarchical semantic segmentation dataset with subpart annotations for natural images.
We also introduce two novel evaluation metrics to evaluate how well algorithms capture spatial and semantic relationships across hierarchical levels.
arXiv Detail & Related papers (2024-07-12T21:08:00Z)
- Parsing Line Segments of Floor Plan Images Using Graph Neural Networks [0.0]
We use a junction heatmap to predict line segments' endpoints, and graph neural networks to extract line segments and their categories (a rough sketch of the heatmap step follows this entry).
Our proposed method outputs vectorized line segments and requires fewer post-processing steps for practical use.
arXiv Detail & Related papers (2023-03-07T12:32:19Z)
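As a rough illustration of the junction-heatmap step above, the snippet below recovers endpoint coordinates as thresholded local maxima of a predicted heatmap. The function name and threshold are assumptions for illustration; the paper's actual post-processing may differ.

```python
# Hypothetical sketch: extract junction (endpoint) coordinates from a predicted
# heatmap as local maxima in a 3x3 window above a confidence threshold.
import numpy as np

def extract_junctions(heatmap: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    h, w = heatmap.shape
    padded = np.pad(heatmap, 1, mode="constant", constant_values=-np.inf)
    # Maximum over each pixel's 3x3 neighborhood, built from 9 shifted views.
    neighborhood_max = np.max(
        [padded[i:i + h, j:j + w] for i in range(3) for j in range(3)], axis=0
    )
    peaks = (heatmap >= threshold) & (heatmap == neighborhood_max)
    return np.argwhere(peaks)  # (K, 2) array of (row, col) junction positions
```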
- Open-world Semantic Segmentation via Contrasting and Clustering Vision-Language Embedding [95.78002228538841]
We propose a new open-world semantic segmentation pipeline that makes the first attempt to learn to segment semantic objects of various open-world categories without any dense annotation effort.
On three benchmark datasets, our method directly segments objects of arbitrary categories, outperforming zero-shot segmentation methods that require data labeling.
arXiv Detail & Related papers (2022-07-18T09:20:04Z)
- SemAffiNet: Semantic-Affine Transformation for Point Cloud Segmentation [94.11915008006483]
We propose SemAffiNet for point cloud semantic segmentation.
We conduct extensive experiments on the ScanNetV2 and NYUv2 datasets.
arXiv Detail & Related papers (2022-05-26T17:00:23Z)
- Scaling up Multi-domain Semantic Segmentation with Sentence Embeddings [81.09026586111811]
We propose an approach to semantic segmentation that achieves state-of-the-art supervised performance when applied in a zero-shot setting.
This is achieved by replacing each class label with a vector-valued embedding of a short paragraph that describes the class (sketched after this entry).
The resulting merged semantic segmentation dataset of over 2 million images enables training a model that matches state-of-the-art supervised methods on 7 benchmark datasets.
arXiv Detail & Related papers (2022-02-04T07:19:09Z)
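A rough sketch of that label-embedding idea, with an assumed interface (not the paper's code): score each pixel feature against the embedding of every class description and take the best match.

```python
# Hypothetical sketch: classify per-pixel features against sentence embeddings
# of class descriptions rather than fixed label indices.
import torch
import torch.nn.functional as F

def classify_by_embedding(pixel_feats: torch.Tensor,
                          class_embeds: torch.Tensor) -> torch.Tensor:
    # pixel_feats: (N, D) image features; class_embeds: (C, D), one embedding
    # per class description. Predict the class with highest cosine similarity.
    p = F.normalize(pixel_feats, dim=-1)
    c = F.normalize(class_embeds, dim=-1)
    return (p @ c.T).argmax(dim=-1)  # (N,) predicted class index per pixel
```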
- TransFGU: A Top-down Approach to Fine-Grained Unsupervised Semantic Segmentation [44.75300205362518]
Unsupervised semantic segmentation aims to obtain high-level semantic representations from low-level visual features without manual annotations.
We propose the first top-down unsupervised semantic segmentation framework for fine-grained segmentation in extremely complicated scenarios.
Our results show that our top-down unsupervised segmentation is robust to both object-centric and scene-centric datasets.
arXiv Detail & Related papers (2021-12-02T18:59:03Z)
- Robust 3D Scene Segmentation through Hierarchical and Learnable Part-Fusion [9.275156524109438]
3D semantic segmentation is a fundamental building block for several scene understanding applications such as autonomous driving, robotics and AR/VR.
Previous methods have utilized hierarchical, iterative methods to fuse semantic and instance information, but they lack learnability in context fusion.
This paper presents Segment-Fusion, a novel attention-based method for hierarchical fusion of semantic and instance information.
arXiv Detail & Related papers (2021-11-16T13:14:47Z)
- Attention-based fusion of semantic boundary and non-boundary information to improve semantic segmentation [9.518010235273783]
This paper introduces a method for image semantic segmentation based on a novel fusion scheme.
The main goal of our proposal is to explore object boundary information to improve the overall segmentation performance.
Our proposed model achieved the best mIoU on the Cityscapes, CamVid, and Pascal Context datasets, and the second best on Mapillary Vistas.
arXiv Detail & Related papers (2021-08-05T20:46:53Z)
- Three Ways to Improve Semantic Segmentation with Self-Supervised Depth Estimation [90.87105131054419]
We present a framework for semi-supervised semantic segmentation, which is enhanced by self-supervised monocular depth estimation from unlabeled image sequences.
We validate the proposed model on the Cityscapes dataset, where all three modules demonstrate significant performance gains.
arXiv Detail & Related papers (2020-12-19T21:18:03Z)
- Improving Semantic Segmentation via Decoupled Body and Edge Supervision [89.57847958016981]
Existing semantic segmentation approaches either aim to improve the object's inner consistency by modeling the global context, or refine object details along their boundaries by multi-scale feature fusion.
In this paper, a new paradigm for semantic segmentation is proposed.
Our insight is that appealing performance of semantic segmentation requires explicitly modeling the object body and edge, which correspond to the low- and high-frequency components of the image (illustrated after this entry).
We show that the proposed framework with various baselines or backbone networks leads to better object inner consistency and object boundaries.
arXiv Detail & Related papers (2020-07-20T12:11:22Z)
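To illustrate the frequency intuition above (a generic decomposition, not the paper's learned decoupling): a blurred image keeps the low-frequency object body, and the residual keeps the high-frequency edges.

```python
# Hypothetical illustration: split a float image into a low-frequency "body"
# (Gaussian blur) and a high-frequency "edge" residual.
import numpy as np
from scipy.ndimage import gaussian_filter

def body_edge_split(img: np.ndarray, sigma: float = 3.0):
    body = gaussian_filter(img, sigma=sigma)  # smooth interior: low frequency
    edge = img - body                         # boundary detail: high frequency
    return body, edge
```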
- Spatial Pyramid Based Graph Reasoning for Semantic Segmentation [67.47159595239798]
We apply graph convolution to the semantic segmentation task and propose an improved Laplacian.
The graph reasoning is directly performed in the original feature space organized as a spatial pyramid.
We achieve comparable performance with advantages in computational and memory overhead.
arXiv Detail & Related papers (2020-03-23T12:28:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.