Building Footprint Extraction with Graph Convolutional Network
- URL: http://arxiv.org/abs/2305.04499v1
- Date: Mon, 8 May 2023 06:50:05 GMT
- Title: Building Footprint Extraction with Graph Convolutional Network
- Authors: Yilei Shi, Qinyu Li, Xiaoxiang Zhu
- Abstract summary: Building footprint information is an essential ingredient for 3-D reconstruction of urban models.
Recent developments in deep convolutional neural networks (DCNNs) have enabled accurate pixel-level labeling tasks.
In this work, we propose an end-to-end framework that overcomes this issue by using a graph convolutional network (GCN) for the building footprint extraction task.
- Score: 20.335884170850193
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Building footprint information is an essential ingredient for 3-D
reconstruction of urban models. The automatic generation of building footprints
from satellite images presents a considerable challenge due to the complexity
of building shapes. Recent developments in deep convolutional neural networks
(DCNNs) have enabled accurate pixel-level labeling tasks. One central issue
remains, which is the precise delineation of boundaries. Deep architectures
generally fail to produce fine-grained segmentation with accurate boundaries
due to progressive downsampling. In this work, we propose an end-to-end
framework that overcomes this issue by using a graph convolutional network
(GCN) for the building footprint extraction task. Our proposed framework
outperforms state-of-the-art methods.
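The abstract does not spell out how the GCN is wired into the segmentation pipeline, so the following is only a minimal sketch, assuming that pixels along a predicted building boundary are treated as graph nodes carrying CNN features and are connected along the contour; the layer, shapes, and ring adjacency are illustrative assumptions, not the authors' implementation.
```python
# Hypothetical sketch: one graph-convolution layer applied to node features
# sampled along a predicted building boundary. Names and shapes are assumptions.
import torch
import torch.nn as nn


class SimpleGCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # feats: (N, in_dim) node features, adj: (N, N) binary adjacency
        a_hat = adj + torch.eye(adj.size(0), device=adj.device)  # add self-loops
        deg_inv_sqrt = a_hat.sum(dim=1).clamp(min=1).pow(-0.5)
        norm_adj = deg_inv_sqrt[:, None] * a_hat * deg_inv_sqrt[None, :]
        return torch.relu(self.linear(norm_adj @ feats))


if __name__ == "__main__":
    # Toy example: 6 boundary nodes connected in a ring, each carrying a
    # 32-dim feature vector taken from a CNN feature map (randomized here).
    num_nodes, feat_dim = 6, 32
    feats = torch.randn(num_nodes, feat_dim)
    adj = torch.zeros(num_nodes, num_nodes)
    for i in range(num_nodes):  # ring topology along the boundary
        adj[i, (i + 1) % num_nodes] = 1.0
        adj[(i + 1) % num_nodes, i] = 1.0

    gcn = SimpleGCNLayer(feat_dim, 16)
    refined = gcn(feats, adj)  # (6, 16) refined node features
    print(refined.shape)
```
The symmetric normalization is the standard GCN propagation rule; in a full pipeline the refined node features would be decoded back into boundary or polygon-vertex predictions.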
Related papers
- Enhancing Polygonal Building Segmentation via Oriented Corners [0.3749861135832072]
This paper introduces a novel deep convolutional neural network named OriCornerNet, which directly extracts delineated building polygons from input images.
Our approach involves a deep model that predicts building footprint masks, corners, and orientation vectors that indicate directions toward adjacent corners.
Performance evaluations conducted on SpaceNet Vegas and CrowdAI-small datasets demonstrate the competitive efficacy of our approach.
arXiv Detail & Related papers (2024-07-17T01:59:06Z)
- TFNet: Tuning Fork Network with Neighborhood Pixel Aggregation for Improved Building Footprint Extraction [11.845097068829551]
We propose a novel Tuning Fork Network (TFNet) design for deep semantic segmentation.
The TFNet design is coupled with a novel methodology of incorporating neighborhood information at the tile boundaries during the training process.
For performance comparisons, we utilize the SpaceNet2 and WHU datasets, as well as a dataset from an area in Lahore, Pakistan that captures closely connected buildings.
arXiv Detail & Related papers (2023-11-05T10:52:16Z)
- Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
arXiv Detail & Related papers (2023-10-11T12:46:11Z)
- Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud Analysis [66.49788145564004]
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our methods perform favorably against the current state-of-the-art competitors.
arXiv Detail & Related papers (2022-12-17T15:05:25Z)
- PointAttN: You Only Need Attention for Point Cloud Completion [89.88766317412052]
Point cloud completion refers to completing 3D shapes from partial 3D point clouds.
We propose a novel neural network that processes point clouds in a per-point manner, eliminating the need for kNNs.
The proposed framework, namely PointAttN, is simple, neat and effective, which can precisely capture the structural information of 3D shapes.
arXiv Detail & Related papers (2022-03-16T09:20:01Z)
- Hierarchical Graph Networks for 3D Human Pose Estimation [50.600944798627786]
Recent 2D-to-3D human pose estimation works tend to utilize the graph structure formed by the topology of the human skeleton.
We argue that this skeletal topology is too sparse to reflect the body structure and suffers from a serious 2D-to-3D ambiguity problem.
We propose a novel graph convolution network architecture, Hierarchical Graph Networks, to overcome these weaknesses.
arXiv Detail & Related papers (2021-11-23T15:09:03Z)
- Voxel-based Network for Shape Completion by Leveraging Edge Generation [76.23436070605348]
We develop a voxel-based network for point cloud completion by leveraging edge generation (VE-PCN).
We first embed point clouds into regular voxel grids, and then generate complete objects with the help of the hallucinated shape edges.
This decoupled architecture together with a multi-scale grid feature learning is able to generate more realistic on-surface details.
arXiv Detail & Related papers (2021-08-23T05:10:29Z)
- PC-RGNN: Point Cloud Completion and Graph Neural Network for 3D Object Detection [57.49788100647103]
LiDAR-based 3D object detection is an important task for autonomous driving.
Current approaches suffer from sparse and partial point clouds of distant and occluded objects.
In this paper, we propose a novel two-stage approach, namely PC-RGNN, dealing with such challenges by two specific solutions.
arXiv Detail & Related papers (2020-12-18T18:06:43Z)
- Building Footprint Generation by Integrating Convolution Neural Network with Feature Pairwise Conditional Random Field (FPCRF) [21.698236040666675]
Building footprint maps are vital to many remote sensing applications, such as 3D building modeling, urban planning, and disaster management.
In this work, an end-to-end building footprint generation approach that integrates a convolutional neural network (CNN) with a graph model is proposed.
arXiv Detail & Related papers (2020-02-11T18:51:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.