OpenFACADES: An Open Framework for Architectural Caption and Attribute Data Enrichment via Street View Imagery
- URL: http://arxiv.org/abs/2504.02866v1
- Date: Tue, 01 Apr 2025 08:20:13 GMT
- Title: OpenFACADES: An Open Framework for Architectural Caption and Attribute Data Enrichment via Street View Imagery
- Authors: Xiucheng Liang, Jinheng Xie, Tianhong Zhao, Rudi Stouffs, Filip Biljecki
- Abstract summary: Building properties play a crucial role in spatial data infrastructures, supporting applications such as energy simulation, risk assessment, and environmental modeling. Recent advances have enabled the extraction and tagging of objective building attributes using remote sensing and street-level imagery. This study bridges these gaps by introducing OpenFACADES, an open framework that leverages crowdsourced data to enrich building profiles.
- Score: 4.33299613844962
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Building properties, such as height, usage, and material composition, play a crucial role in spatial data infrastructures, supporting applications such as energy simulation, risk assessment, and environmental modeling. Despite their importance, comprehensive and high-quality building attribute data remain scarce in many urban areas. Recent advances have enabled the extraction and tagging of objective building attributes using remote sensing and street-level imagery. However, establishing a method and pipeline that integrates diverse open datasets, acquires holistic building imagery at scale, and infers comprehensive building attributes remains a significant challenge. This study is among the first to bridge these gaps, introducing OpenFACADES, an open framework that leverages multimodal crowdsourced data to enrich building profiles with both objective attributes and semantic descriptors through multimodal large language models. Our methodology proceeds in three major steps. First, we integrate street-level image metadata from Mapillary with OpenStreetMap geometries via isovist analysis, effectively identifying images that provide suitable vantage points for observing target buildings. Second, we automate the detection of building facades in panoramic imagery and tailor a reprojection approach to convert objects into holistic perspective views that approximate real-world observation. Third, we introduce an innovative approach that harnesses and systematically investigates the capabilities of open-source large vision-language models (VLMs) for multi-attribute prediction and open-vocabulary captioning in building-level analytics, leveraging a globally sourced dataset of 30,180 labeled images from seven cities. Evaluation shows that fine-tuned VLMs excel at multi-attribute inference, outperforming single-attribute computer vision models and zero-shot ChatGPT-4o.
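The three steps above lend themselves to short illustrations. First, a minimal sketch of the isovist-style visibility test used to match Mapillary camera positions to OpenStreetMap footprints; the function and variable names are illustrative assumptions, and a full isovist analysis would sweep many rays rather than test a single sightline to the centroid:

```python
from shapely.geometry import LineString, Point, Polygon

def visible_cameras(camera_points, target, obstacles, max_dist=50.0):
    """Return camera positions with a clear sightline to the target footprint."""
    visible = []
    centroid = target.centroid
    for cam in camera_points:
        if cam.distance(target) > max_dist:
            continue  # too far away to resolve facade detail
        sightline = LineString([cam, centroid])
        # Discard viewpoints whose sightline is blocked by another footprint.
        if any(sightline.crosses(obs) for obs in obstacles):
            continue
        visible.append(cam)
    return visible

# Toy usage: one camera in open space, one hidden behind a blocking building.
target = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
blocker = Polygon([(12, 4), (16, 4), (16, 6), (12, 6)])
cams = [Point(5, 25), Point(20, 5)]
print(len(visible_cameras(cams, target, [blocker])))  # -> 1
```

Second, the reprojection step turns regions of an equirectangular panorama into pinhole-style perspective views. A common formulation (not necessarily the paper's exact tailoring) casts a ray per output pixel and samples the panorama by spherical angles:

```python
import numpy as np

def panorama_to_perspective(pano, yaw_deg, pitch_deg, fov_deg=90.0, out_size=512):
    """Sample a perspective view from an equirectangular panorama.

    pano: H x W x 3 image array. Sign conventions for yaw/pitch and the
    panorama's row/column mapping vary by provider; adjust as needed.
    """
    H, W = pano.shape[:2]
    f = 0.5 * out_size / np.tan(np.radians(fov_deg) / 2)  # focal length in pixels
    xs, ys = np.meshgrid(np.arange(out_size) - out_size / 2,
                         np.arange(out_size) - out_size / 2)
    dirs = np.stack([xs, ys, np.full_like(xs, f)], axis=-1)  # camera-frame rays
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    p, y = np.radians(pitch_deg), np.radians(yaw_deg)
    Rx = np.array([[1, 0, 0], [0, np.cos(p), -np.sin(p)], [0, np.sin(p), np.cos(p)]])
    Ry = np.array([[np.cos(y), 0, np.sin(y)], [0, 1, 0], [-np.sin(y), 0, np.cos(y)]])
    d = dirs @ (Ry @ Rx).T  # rotate rays into the panorama frame
    lon = np.arctan2(d[..., 0], d[..., 2])      # longitude in [-pi, pi]
    lat = np.arcsin(np.clip(d[..., 1], -1, 1))  # latitude in [-pi/2, pi/2]
    u = ((lon / np.pi + 1) / 2 * (W - 1)).astype(int)   # nearest-neighbour sampling
    v = ((lat / (np.pi / 2) + 1) / 2 * (H - 1)).astype(int)
    return pano[v, u]
```

Third, the attribute-inference step queries a vision-language model per facade image. The sketch below uses an off-the-shelf LLaVA-style checkpoint through Hugging Face transformers purely for illustration; the checkpoint id, prompt wording, and attribute schema are assumptions, whereas the paper fine-tunes its own open-source VLMs on the 30,180-image dataset:

```python
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # assumed public checkpoint, not the paper's model
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

facade_image = Image.open("facade.jpg")  # a perspective view from the previous step
prompt = (
    "USER: <image>\nDescribe this building, then report its estimated number "
    "of floors, primary facade material, and usage type as JSON. ASSISTANT:"
)
inputs = processor(images=facade_image, text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(processor.decode(output[0], skip_special_tokens=True))
```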
Related papers
- IAAO: Interactive Affordance Learning for Articulated Objects in 3D Environments [56.85804719947]
We present IAAO, a framework that builds an explicit 3D model for intelligent agents to gain understanding of articulated objects in their environment through interaction.
We first build hierarchical features and label fields for each object state using 3D Gaussian Splatting (3DGS) by distilling mask features and view-consistent labels from multi-view images.
We then perform object- and part-level queries on the 3D Gaussian primitives to identify static and articulated elements, estimating global transformations and local articulation parameters along with affordances.
arXiv Detail & Related papers (2025-04-09T12:36:48Z)
- Exploiting Semantic Scene Reconstruction for Estimating Building Envelope Characteristics [6.382787013075262]
We propose BuildNet3D, a novel framework to estimate geometric building characteristics from 2D image inputs.
Our framework is evaluated on a range of complex building structures, demonstrating high accuracy and generalizability in estimating window-to-wall ratio and building footprint.
arXiv Detail & Related papers (2024-10-29T13:29:01Z)
- ARMADA: Attribute-Based Multimodal Data Augmentation [93.05614922383822]
Attribute-based Multimodal Data Augmentation (ARMADA) is a novel multimodal data augmentation method via knowledge-guided manipulation of visual attributes.
It is a multimodal data generation framework that extracts knowledge-grounded attributes from symbolic KBs for semantically consistent yet distinctive image-text pair generation.
This also highlights the need to leverage external knowledge proxies for enhanced interpretability and real-world grounding.
arXiv Detail & Related papers (2024-08-19T15:27:25Z)
- Fine-Grained Building Function Recognition from Street-View Images via Geometry-Aware Semi-Supervised Learning [18.432786227782803]
We propose a geometry-aware semi-supervised framework for fine-grained building function recognition.
We use geometric relationships among multi-source data to enhance pseudo-label accuracy in semi-supervised learning.
Our proposed framework exhibits superior performance in fine-grained functional recognition of buildings.
arXiv Detail & Related papers (2024-08-18T12:48:48Z)
- CMAB: A First National-Scale Multi-Attribute Building Dataset in China Derived from Open Source Data and GeoAI [1.3586572110652484]
This paper presents the first national-scale Multi-Attribute Building dataset (CMAB) covering 3,667 spatial cities, 29 million buildings, and 21.3 billion square meters of rooftops.
Using billions of high-resolution Google Earth images and 60 million street view images (SVIs), we generated rooftop, height, function, age, and quality attributes for each building.
Our dataset and results support global SDG monitoring and urban planning.
arXiv Detail & Related papers (2024-08-12T02:09:25Z)
- MMScan: A Multi-Modal 3D Scene Dataset with Hierarchical Grounded Language Annotations [55.022519020409405]
This paper builds MMScan, the largest-ever multi-modal 3D scene dataset and benchmark with hierarchical grounded language annotations.
The resulting multi-modal 3D dataset encompasses 1.4M meta-annotated captions on 109k objects and 7.7k regions as well as over 3.04M diverse samples for 3D visual grounding and question-answering benchmarks.
arXiv Detail & Related papers (2024-06-13T17:59:30Z)
- Towards Unified 3D Object Detection via Algorithm and Data Unification [70.27631528933482]
We build the first unified multi-modal 3D object detection benchmark, MM-Omni3D, and extend the aforementioned monocular detector to its multi-modal version.
We name the designed monocular and multi-modal detectors as UniMODE and MM-UniMODE, respectively.
arXiv Detail & Related papers (2024-02-28T18:59:31Z)
- Raising the Bar of AI-generated Image Detection with CLIP [50.345365081177555]
The aim of this work is to explore the potential of pre-trained vision-language models (VLMs) for universal detection of AI-generated images.
We develop a lightweight detection strategy based on CLIP features and study its performance in a wide variety of challenging scenarios; a generic sketch of such a probe follows this entry.
arXiv Detail & Related papers (2023-11-30T21:11:20Z)
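A minimal sketch of the CLIP-feature-plus-lightweight-classifier recipe named above; the checkpoint, file lists, and choice of logistic-regression probe are assumptions rather than the paper's exact strategy:

```python
import numpy as np
import torch
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git
from PIL import Image
from sklearn.linear_model import LogisticRegression

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-L/14", device=device)

def embed(paths):
    """Encode images into CLIP's visual feature space."""
    with torch.no_grad():
        return np.stack([
            model.encode_image(
                preprocess(Image.open(p)).unsqueeze(0).to(device)
            ).cpu().numpy()[0]
            for p in paths
        ])

real_paths, fake_paths = ["real.jpg"], ["generated.jpg"]  # placeholder files
X = np.concatenate([embed(real_paths), embed(fake_paths)])
y = np.array([0] * len(real_paths) + [1] * len(fake_paths))
probe = LogisticRegression(max_iter=1000).fit(X, y)  # the "lightweight" detector
```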
- Building Extraction from Remote Sensing Images via an Uncertainty-Aware Network [18.365220543556113]
Building extraction plays an essential role in many applications, such as city planning and urban dynamic monitoring.
We propose a novel and straightforward Uncertainty-Aware Network (UANet) to address the uncertainty in building extraction results; a generic uncertainty-estimation sketch follows this entry.
Results demonstrate that the proposed UANet outperforms other state-of-the-art algorithms by a large margin.
arXiv Detail & Related papers (2023-07-23T12:42:15Z)
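UANet's internals are not spelled out in this summary, so the sketch below shows one generic alternative (Monte Carlo dropout, explicitly not UANet's mechanism) for attaching per-pixel uncertainty to a binary building-extraction network:

```python
import torch

def mc_dropout_uncertainty(model, image, n_samples=20):
    """Per-pixel mean building probability and predictive variance.

    model: any segmentation network containing dropout layers (assumed).
    image: 1 x C x H x W input tensor.
    """
    model.train()  # keep dropout layers active at inference time
    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(model(image)) for _ in range(n_samples)])
    return probs.mean(dim=0), probs.var(dim=0)  # high variance marks uncertain pixels
```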
- Semi-supervised Learning from Street-View Images and OpenStreetMap for Automatic Building Height Estimation [59.6553058160943]
We propose a semi-supervised learning (SSL) method for automatically estimating building height from Mapillary SVI and OpenStreetMap data.
The proposed method delivers a clear performance boost, estimating building heights with a Mean Absolute Error (MAE) of around 2.1 meters.
The preliminary result is promising and motivates our future work on scaling up the proposed method with low-cost VGI data; a generic pseudo-labelling sketch follows this entry.
arXiv Detail & Related papers (2023-07-05T18:16:30Z)
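The summary above reports the result but not the recipe, so this sketch shows one generic SSL pattern (pseudo-labelling with an agreement filter) for height regression; feature extraction from SVI and OpenStreetMap is abstracted into plain arrays, and every name here is illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def pseudo_label_heights(X_lab, y_lab, X_unlab, agree_m=1.0):
    """Grow the labelled set with confident predictions on unlabelled samples."""
    # Two differently seeded models; their agreement is a cheap confidence proxy.
    m1 = RandomForestRegressor(random_state=0).fit(X_lab, y_lab)
    m2 = RandomForestRegressor(random_state=1).fit(X_lab, y_lab)
    p1, p2 = m1.predict(X_unlab), m2.predict(X_unlab)
    keep = np.abs(p1 - p2) < agree_m  # keep heights the models agree on (metres)
    X_aug = np.vstack([X_lab, X_unlab[keep]])
    y_aug = np.concatenate([y_lab, (p1[keep] + p2[keep]) / 2])
    return RandomForestRegressor(random_state=0).fit(X_aug, y_aug)
```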
- Campus3D: A Photogrammetry Point Cloud Benchmark for Hierarchical Understanding of Outdoor Scene [76.4183572058063]
We present a richly-annotated 3D point cloud dataset for multiple outdoor scene understanding tasks.
The dataset has been point-wisely annotated with both hierarchical and instance-based labels.
We formulate a hierarchical learning problem for 3D point cloud segmentation and propose a measure for evaluating consistency across the hierarchies.
arXiv Detail & Related papers (2020-08-11T19:10:32Z)