Boundary Regularized Building Footprint Extraction From Satellite Images
Using Deep Neural Network
- URL: http://arxiv.org/abs/2006.13176v1
- Date: Tue, 23 Jun 2020 17:24:09 GMT
- Title: Boundary Regularized Building Footprint Extraction From Satellite Images
Using Deep Neural Network
- Authors: Kang Zhao, Muhammad Kamran, Gunho Sohn
- Abstract summary: We propose a novel deep neural network that jointly detects building instances and regularizes noisy building boundary shapes from a single satellite image.
Our model accomplishes the multiple tasks of object localization, recognition, semantic labelling and geometric shape extraction simultaneously.
- Score: 6.371173732947292
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, an ever-increasing number of remote sensing satellites
orbiting the Earth stream vast amounts of visual data to support a wide range of
civil, public and military applications. One key product derived from satellite
imagery is the production and updating of spatial maps of the built environment,
owing to its wide coverage and high-resolution data. However, reconstructing
spatial maps from satellite imagery is not a trivial vision task, as it requires
representing a scene or object with high-level primitives. Over the last decade,
significant advances have been made in object detection and representation from
visual data, but primitive-based object representation remains a challenging
vision task. Consequently, high-quality spatial maps are still produced mainly
through complex, labour-intensive processes. In this paper, we propose a novel
deep neural network that jointly detects building instances and regularizes
noisy building boundary shapes from a single satellite image. The proposed deep
learning method consists of a two-stage object detection network that produces
region of interest (RoI) features and a building boundary extraction network
that uses graph models to learn the geometric information of the polygon shapes.
Extensive experiments show that our model can accomplish the multiple tasks of
object localization, recognition, semantic labelling and geometric shape
extraction simultaneously. In terms of building extraction accuracy,
computational efficiency and boundary regularization performance, our model
outperforms state-of-the-art baseline models.
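The paper's boundary regularization is performed by a learned graph network, which is not reproduced here. As a purely classical stand-in for the same geometric goal (turning a noisy, vertex-dense building outline into a simpler polygon), the Ramer-Douglas-Peucker simplification below illustrates the idea; the function name `rdp` and the tolerance `epsilon` are this sketch's own choices, not the paper's method.

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker polyline simplification.

    points:  list of (x, y) vertices along an open boundary segment
    epsilon: maximum allowed perpendicular deviation from the chord
    Returns a subset of the input vertices.
    """
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1.0  # guard against a degenerate chord
    # Find the interior vertex farthest from the chord endpoints->endpoints.
    best_i, best_d = 0, 0.0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        d = abs(dy * (px - x1) - dx * (py - y1)) / norm
        if d > best_d:
            best_i, best_d = i, d
    if best_d <= epsilon:
        # All interior vertices are within tolerance: keep only the endpoints.
        return [points[0], points[-1]]
    # Otherwise split at the farthest vertex and simplify both halves.
    left = rdp(points[:best_i + 1], epsilon)
    right = rdp(points[best_i:], epsilon)
    return left[:-1] + right
```

For example, a jittered L-shaped boundary `[(0, 0), (1, 0.05), (2, 0), (2, 1), (2, 2)]` simplifies at `epsilon=0.1` to the three corner vertices `[(0, 0), (2, 0), (2, 2)]`. Unlike the paper's learned model, this heuristic only drops vertices and cannot recover regular right-angled structure that the noise has destroyed, which is precisely the gap the graph-based network is designed to fill.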
Related papers
- Geospecific View Generation -- Geometry-Context Aware High-resolution Ground View Inference from Satellite Views [5.146618378243241]
We propose a novel pipeline to generate geospecific views that maximally respect the weak geometry and texture from multi-view satellite images.
Our method directly predicts ground-view images at geolocation by using a comprehensive set of information from the satellite image.
We demonstrate our pipeline is the first to generate close-to-real and geospecific ground views merely based on satellite images.
arXiv Detail & Related papers (2024-07-10T21:51:50Z)
- Sat2Scene: 3D Urban Scene Generation from Satellite Images with Diffusion [77.34078223594686]
We propose a novel architecture for direct 3D scene generation by introducing diffusion models into 3D sparse representations and combining them with neural rendering techniques.
Specifically, our approach generates texture colors at the point level for a given geometry using a 3D diffusion model first, which is then transformed into a scene representation in a feed-forward manner.
Experiments in two city-scale datasets show that our model demonstrates proficiency in generating photo-realistic street-view image sequences and cross-view urban scenes from satellite imagery.
arXiv Detail & Related papers (2024-01-19T16:15:37Z)
- DiffusionSat: A Generative Foundation Model for Satellite Imagery [63.2807119794691]
We present DiffusionSat, to date the largest generative foundation model trained on a collection of publicly available large, high-resolution remote sensing datasets.
Our method produces realistic samples and can be used to solve multiple generative tasks including temporal generation, superresolution given multi-spectral inputs and in-painting.
arXiv Detail & Related papers (2023-12-06T16:53:17Z)
- Feature Aggregation Network for Building Extraction from High-resolution Remote Sensing Images [1.7623838912231695]
High-resolution satellite remote sensing data acquisition has uncovered the potential for detailed extraction of surface architectural features.
Current methods focus exclusively on localized information of surface features.
We propose the Feature Aggregation Network (FANet), concentrating on extracting both global and local features.
arXiv Detail & Related papers (2023-09-12T07:31:51Z)
- Building Extraction from Remote Sensing Images via an Uncertainty-Aware Network [18.365220543556113]
Building extraction plays an essential role in many applications, such as city planning and urban dynamic monitoring.
We propose a novel and straightforward Uncertainty-Aware Network (UANet) to alleviate this problem.
Results demonstrate that the proposed UANet outperforms other state-of-the-art algorithms by a large margin.
arXiv Detail & Related papers (2023-07-23T12:42:15Z)
- Progressive Domain Adaptation with Contrastive Learning for Object Detection in the Satellite Imagery [0.0]
State-of-the-art object detection methods largely fail to identify small and dense objects.
We propose a small object detection pipeline that improves the feature extraction process.
We show we can alleviate the degradation of object identification in previously unseen datasets.
arXiv Detail & Related papers (2022-09-06T15:16:35Z)
- Accurate 3-DoF Camera Geo-Localization via Ground-to-Satellite Image Matching [102.39635336450262]
We address the problem of ground-to-satellite image geo-localization by matching a query image captured at the ground level against a large-scale database with geotagged satellite images.
Our new method is able to achieve the fine-grained location of a query image, up to pixel size precision of the satellite image.
arXiv Detail & Related papers (2022-03-26T20:10:38Z)
- Sci-Net: a Scale Invariant Model for Building Detection from Aerial Images [0.0]
We propose a Scale-invariant neural network (Sci-Net) that is able to segment buildings present in aerial images at different spatial resolutions.
Specifically, we modified the U-Net architecture and fused it with dense Atrous Spatial Pyramid Pooling (ASPP) to extract fine-grained multi-scale representations.
arXiv Detail & Related papers (2021-11-12T16:45:20Z)
- Counting from Sky: A Large-scale Dataset for Remote Sensing Object Counting and A Benchmark Method [52.182698295053264]
We are interested in counting dense objects from remote sensing images. Compared with object counting in a natural scene, this task is challenging in the following factors: large scale variation, complex cluttered background, and orientation arbitrariness.
To address these issues, we first construct a large-scale object counting dataset with remote sensing images, which contains four important geographic objects.
We then benchmark the dataset by designing a novel neural network that can generate a density map of an input image.
arXiv Detail & Related papers (2020-08-28T03:47:49Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
- Counting dense objects in remote sensing images [52.182698295053264]
Estimating number of interested objects from a given image is a challenging yet important task.
In this paper, we are interested in counting dense objects from remote sensing images.
To address these issues, we first construct a large-scale object counting dataset based on remote sensing images.
We then benchmark the dataset by designing a novel neural network that can generate a density map of an input image.
arXiv Detail & Related papers (2020-02-14T09:13:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.