ARCH2S: Dataset, Benchmark and Challenges for Learning Exterior Architectural Structures from Point Clouds
- URL: http://arxiv.org/abs/2406.01337v1
- Date: Mon, 3 Jun 2024 14:02:23 GMT
- Title: ARCH2S: Dataset, Benchmark and Challenges for Learning Exterior Architectural Structures from Point Clouds
- Authors: Ka Lung Cheung, Chi Chung Lee
- Abstract summary: This paper introduces a semantically enriched, photo-realistic dataset of 3D architectural models and a benchmark for semantic segmentation.
It features real-world buildings serving four different purposes, as well as an open architectural landscape, in Hong Kong.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Precise segmentation of architectural structures provides detailed information about the various building components, enhancing our understanding of and interaction with the built environment. Nevertheless, existing outdoor 3D point cloud datasets offer only limited detailed annotation of architectural exteriors, owing to privacy concerns and the high cost of data acquisition and annotation. To overcome this shortfall, this paper introduces a semantically enriched, photo-realistic dataset of 3D architectural models and a benchmark for semantic segmentation. It features real-world buildings serving four different purposes, as well as an open architectural landscape, in Hong Kong. Every point in each cloud is annotated with one of 14 semantic classes.
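As a rough illustration of what such a semantic-segmentation benchmark entails, the sketch below computes per-class IoU and mean IoU over per-point labels for a 14-class labelling. The metric choice, array layout, and random toy data are assumptions for illustration; the paper's exact evaluation protocol is not described in this summary.

```python
import numpy as np

NUM_CLASSES = 14  # ARCH2S annotates points with 14 semantic classes

def per_class_iou(pred, gt, num_classes=NUM_CLASSES):
    """IoU per class from a confusion matrix; pred/gt are per-point integer labels."""
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(conf, (gt, pred), 1)            # rows: ground truth, cols: prediction
    tp = np.diag(conf).astype(np.float64)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    denom = tp + fp + fn
    return np.where(denom > 0, tp / np.maximum(denom, 1), np.nan)

# Toy example with random labels standing in for a real point cloud.
gt = np.random.randint(0, NUM_CLASSES, size=100_000)
pred = np.random.randint(0, NUM_CLASSES, size=100_000)
print("mIoU:", np.nanmean(per_class_iou(pred, gt)))
```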
Related papers
- Space3D-Bench: Spatial 3D Question Answering Benchmark [49.259397521459114]
We present Space3D-Bench - a collection of 1000 general spatial questions and answers related to scenes of the Replica dataset.
We provide an assessment system that grades natural language responses based on predefined ground-truth answers.
Finally, we introduce a baseline called RAG3D-Chat integrating the world understanding of foundation models with rich context retrieval.
arXiv Detail & Related papers (2024-08-29T16:05:22Z) - MMScan: A Multi-Modal 3D Scene Dataset with Hierarchical Grounded Language Annotations [55.022519020409405]
This paper builds MMScan, the largest-ever multi-modal 3D scene dataset and benchmark with hierarchical grounded language annotations.
The resulting multi-modal 3D dataset encompasses 1.4M meta-annotated captions on 109k objects and 7.7k regions as well as over 3.04M diverse samples for 3D visual grounding and question-answering benchmarks.
arXiv Detail & Related papers (2024-06-13T17:59:30Z) - A Framework for Building Point Cloud Cleaning, Plane Detection and
Semantic Segmentation [0.5439020425818999]
In the cleaning stage, we focus on removing outliers from the acquired point cloud data.
Following the cleaning process, we perform plane detection using the robust RANSAC paradigm.
The resulting segments can generate accurate and detailed point clouds representing the building's architectural elements.
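The cleaning-then-plane-detection pipeline described above can be sketched with off-the-shelf tools; the snippet below uses Open3D's statistical outlier removal followed by RANSAC plane fitting. Open3D, the file name, and the parameter values are illustrative assumptions, not the framework's actual implementation.

```python
import open3d as o3d

# Illustrative sketch only: the paper's framework is not tied to Open3D.
pcd = o3d.io.read_point_cloud("building_scan.ply")  # hypothetical input file

# Cleaning stage: drop points far from their neighbours (statistical outliers).
pcd_clean, kept_idx = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Plane detection via RANSAC: fit the dominant plane and split inliers/outliers.
plane_model, inliers = pcd_clean.segment_plane(distance_threshold=0.02,
                                               ransac_n=3,
                                               num_iterations=1000)
plane_cloud = pcd_clean.select_by_index(inliers)              # e.g. a facade or floor
remaining = pcd_clean.select_by_index(inliers, invert=True)   # reused to find further planes
print("plane equation (a, b, c, d):", plane_model)
```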
arXiv Detail & Related papers (2024-02-01T15:50:40Z) - CityRefer: Geography-aware 3D Visual Grounding Dataset on City-scale
Point Cloud Data [15.526523262690965]
We introduce the CityRefer dataset for city-level visual grounding.
The dataset consists of 35k natural language descriptions of 3D objects appearing in SensatUrban city scenes and 5k landmark labels synchronized with OpenStreetMap.
arXiv Detail & Related papers (2023-10-28T18:05:32Z) - Unfinished Architectures: A Perspective from Artificial Intelligence [73.52315464582637]
The development of Artificial Intelligence (AI) opens new avenues for proposing ways to complete unfinished architectures.
Tools such as DALL-E, capable of completing images guided by a textual description, have recently appeared.
In this article we explore the use of these new AI tools for the completion of unfinished facades of historical temples and analyse the still-germinal state of the field of architectural graphic composition.
arXiv Detail & Related papers (2023-03-03T13:05:10Z) - BuildingNet: Learning to Label 3D Buildings [19.641000866952815]
BuildingNet provides: (a) large-scale 3D building models whose exteriors are consistently labeled, and (b) a neural network that labels buildings by analyzing the structural relations of their geometric primitives.
The dataset covers categories such as houses, churches, skyscrapers, town halls and castles.
arXiv Detail & Related papers (2021-10-11T01:45:26Z) - DLA-Net: Learning Dual Local Attention Features for Semantic
Segmentation of Large-Scale Building Facade Point Clouds [14.485540292321257]
We construct the first large-scale building facade point clouds benchmark dataset for semantic segmentation.
We propose a learnable attention module that learns Dual Local Attention (DLA) features.
arXiv Detail & Related papers (2021-06-01T10:39:11Z) - Synthetic 3D Data Generation Pipeline for Geometric Deep Learning in
Architecture [6.383666639192481]
We create a synthetic data generation pipeline that generates an arbitrary amount of 3D data along with the associated 2D and 3D annotations.
The variety of annotations and the flexibility to customize the generated building and dataset parameters make this framework suitable for multiple deep learning tasks.
All code and data are made publicly available.
arXiv Detail & Related papers (2021-04-26T13:32:03Z) - Semantic Segmentation on Swiss3DCities: A Benchmark Study on Aerial
Photogrammetric 3D Pointcloud Dataset [67.44497676652173]
We introduce a new outdoor urban 3D pointcloud dataset, covering a total area of 2.7 km², sampled from three Swiss cities.
The dataset is manually annotated for semantic segmentation with per-point labels, and is built using photogrammetry from images acquired by multirotors equipped with high-resolution cameras.
arXiv Detail & Related papers (2020-12-23T21:48:47Z) - Towards Semantic Segmentation of Urban-Scale 3D Point Clouds: A Dataset,
Benchmarks and Challenges [52.624157840253204]
We present an urban-scale photogrammetric point cloud dataset with nearly three billion richly annotated points.
Our dataset consists of large areas from three UK cities, covering about 7.6 km² of the city landscape.
We evaluate the performance of state-of-the-art algorithms on our dataset and provide a comprehensive analysis of the results.
arXiv Detail & Related papers (2020-09-07T14:47:07Z) - Campus3D: A Photogrammetry Point Cloud Benchmark for Hierarchical
Understanding of Outdoor Scene [76.4183572058063]
We present a richly-annotated 3D point cloud dataset for multiple outdoor scene understanding tasks.
The dataset has been point-wisely annotated with both hierarchical and instance-based labels.
We formulate a hierarchical learning problem for 3D point cloud segmentation and propose a measure that evaluates consistency across the various hierarchies.
arXiv Detail & Related papers (2020-08-11T19:10:32Z)
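To make "consistency across hierarchies" concrete, the sketch below scores the fraction of points whose fine-grained prediction maps to the same parent class as the coarse prediction. The two-level mapping and the scoring rule are illustrative assumptions, not Campus3D's actual label set or metric.

```python
import numpy as np

# Hypothetical two-level hierarchy: each fine class has exactly one coarse parent.
# (Illustrative mapping only; not the Campus3D label set.)
FINE_TO_COARSE = np.array([0, 0, 1, 1, 1, 2, 2])  # 7 fine classes -> 3 coarse classes

def hierarchical_consistency(coarse_pred, fine_pred, fine_to_coarse=FINE_TO_COARSE):
    """Fraction of points whose fine prediction's parent agrees with the coarse prediction."""
    return float(np.mean(fine_to_coarse[fine_pred] == coarse_pred))

# Toy per-point predictions standing in for two heads of a segmentation network.
fine_pred = np.random.randint(0, 7, size=10_000)
coarse_pred = np.random.randint(0, 3, size=10_000)
print("hierarchical consistency:", hierarchical_consistency(coarse_pred, fine_pred))
```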