AerialGo: Walking-through City View Generation from Aerial Perspectives
- URL: http://arxiv.org/abs/2412.00157v1
- Date: Fri, 29 Nov 2024 08:14:07 GMT
- Title: AerialGo: Walking-through City View Generation from Aerial Perspectives
- Authors: Fuqiang Zhao, Yijing Guo, Siyuan Yang, Xi Chen, Luo Wang, Lan Xu, Yingliang Zhang, Yujiao Shi, Jingyi Yu
- Abstract summary: AerialGo is a framework that generates realistic walking-through city views from aerial images.
By conditioning ground-view synthesis on accessible aerial data, AerialGo bypasses the privacy risks inherent in ground-level imagery.
Experiments show that AerialGo significantly enhances ground-level realism and structural coherence.
- Score: 48.53976414257845
- Abstract: High-quality 3D urban reconstruction is essential for applications in urban planning, navigation, and AR/VR. However, capturing detailed ground-level data across cities is both labor-intensive and raises significant privacy concerns related to sensitive information, such as vehicle plates, faces, and other personal identifiers. To address these challenges, we propose AerialGo, a novel framework that generates realistic walking-through city views from aerial images, leveraging multi-view diffusion models to achieve scalable, photorealistic urban reconstructions without direct ground-level data collection. By conditioning ground-view synthesis on accessible aerial data, AerialGo bypasses the privacy risks inherent in ground-level imagery. To support model training, we introduce the AerialGo dataset, a large-scale dataset containing diverse aerial and ground-view images paired with camera and depth information, designed to support generative urban reconstruction. Experiments show that AerialGo significantly enhances ground-level realism and structural coherence, providing a privacy-conscious, scalable solution for city-scale 3D modeling.
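To make the conditioning idea concrete, here is a minimal, hypothetical sketch (it is not AerialGo's multi-view diffusion architecture): a toy denoiser predicts the noise on a ground view while receiving features encoded from an aerial image, and a drastically simplified sampling loop turns pure noise into a ground view. All module names, shapes, and the update rule are illustrative assumptions.

```python
# Minimal sketch of aerial-conditioned ground-view synthesis (illustrative only;
# not AerialGo's actual multi-view diffusion architecture).
import torch
import torch.nn as nn

class AerialConditionedDenoiser(nn.Module):
    """Toy denoiser: predicts noise on a ground view given an aerial image."""
    def __init__(self, channels: int = 64):
        super().__init__()
        # Hypothetical encoder for the accessible aerial view.
        self.aerial_encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Denoiser consumes the noisy ground view concatenated with aerial features.
        self.denoiser = nn.Sequential(
            nn.Conv2d(3 + channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, noisy_ground: torch.Tensor, aerial: torch.Tensor) -> torch.Tensor:
        cond = self.aerial_encoder(aerial)          # aerial features as the condition
        return self.denoiser(torch.cat([noisy_ground, cond], dim=1))

model = AerialConditionedDenoiser()
aerial = torch.randn(1, 3, 64, 64)   # stand-in for an aerial image
x = torch.randn(1, 3, 64, 64)        # ground view initialized from pure noise
with torch.no_grad():
    for _ in range(4):               # drastically simplified reverse-diffusion loop
        x = x - 0.1 * model(x, aerial)
```

The point the sketch makes is structural: the only image entering the pipeline is the aerial one, which is why no ground-level capture, and none of its attendant privacy risk, is required.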
Related papers
- Horizon-GS: Unified 3D Gaussian Splatting for Large-Scale Aerial-to-Ground Scenes [55.15494682493422]
We introduce Horizon-GS, a novel approach built upon Gaussian Splatting techniques, to tackle unified reconstruction and rendering of aerial and street views.
Our method addresses the key challenges of combining these perspectives with a new training strategy, overcoming viewpoint discrepancies to generate high-fidelity scenes.
arXiv Detail & Related papers (2024-12-02T17:42:00Z)
- Drone-assisted Road Gaussian Splatting with Cross-view Uncertainty [10.37108303188536]
3D Gaussian Splatting (3D-GS) has made groundbreaking progress in neural rendering.
However, the fidelity of large-scale road scene renderings is often limited by the input imagery.
We introduce cross-view uncertainty into 3D-GS by matching car-view ensemble-based rendering uncertainty to aerial images.
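A minimal sketch of the ensemble-uncertainty ingredient follows, assuming (as is common for ensembles) that uncertainty is the per-pixel variance across renderings of the same view by independently trained models; how the paper then matches this map to aerial images is not reproduced here, and the down-weighting at the end is only one plausible use.

```python
# Hedged sketch: per-pixel uncertainty from an ensemble of renderings of one view.
import numpy as np

def ensemble_uncertainty(renders: np.ndarray) -> np.ndarray:
    """renders: (K, H, W, 3) images of the same view from K ensemble members."""
    return renders.var(axis=0).mean(axis=-1)    # (H, W) variance, averaged over RGB

renders = np.random.rand(5, 32, 32, 3)          # stand-ins for 5 ensemble renders
uncertainty = ensemble_uncertainty(renders)
weights = 1.0 / (1.0 + uncertainty)             # one plausible per-pixel down-weighting
```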
arXiv Detail & Related papers (2024-08-27T17:59:55Z)
- SkyDiffusion: Ground-to-Aerial Image Synthesis with Diffusion Models and BEV Paradigm [14.492759165786364]
Ground-to-aerial image synthesis focuses on generating realistic aerial images from corresponding ground street view images.
We introduce SkyDiffusion, a novel cross-view generation method for synthesizing aerial images from street view images.
We introduce a novel dataset, Ground2Aerial-3, designed for diverse ground-to-aerial image synthesis applications.
arXiv Detail & Related papers (2024-08-03T15:43:56Z)
- Urban Scene Diffusion through Semantic Occupancy Map [49.20779809250597]
UrbanDiffusion is a 3D diffusion model conditioned on a Bird's-Eye View (BEV) map.
Our model learns the data distribution of scene-level structures within a latent space.
After training on real-world driving datasets, our model can generate a wide range of diverse urban scenes.
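Under loose assumptions, a BEV-conditioned latent diffusion step could look like the sketch below: a denoiser operates on a scene latent while receiving an embedding of a BEV occupancy map. Every class, shape, and the simplistic reverse loop is hypothetical, not UrbanDiffusion's architecture.

```python
# Loose sketch of latent diffusion conditioned on a BEV map (assumptions throughout).
import torch
import torch.nn as nn

class BEVConditionedLatentDenoiser(nn.Module):
    def __init__(self, latent_dim: int = 128, bev_cells: int = 32 * 32):
        super().__init__()
        self.bev_embed = nn.Linear(bev_cells, latent_dim)   # flatten-and-embed the BEV map
        self.net = nn.Sequential(
            nn.Linear(2 * latent_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),                     # predicted noise in latent space
        )

    def forward(self, z_noisy: torch.Tensor, bev: torch.Tensor) -> torch.Tensor:
        c = self.bev_embed(bev.flatten(1))                  # BEV condition vector
        return self.net(torch.cat([z_noisy, c], dim=-1))

denoiser = BEVConditionedLatentDenoiser()
bev = (torch.rand(1, 32, 32) > 0.5).float()     # toy occupancy map
z = torch.randn(1, 128)                         # scene latent initialized from noise
with torch.no_grad():
    for _ in range(4):                          # toy reverse process
        z = z - 0.1 * denoiser(z, bev)
# a pretrained decoder (not shown) would map z to a 3D scene
```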
arXiv Detail & Related papers (2024-03-18T11:54:35Z)
- Sat2Scene: 3D Urban Scene Generation from Satellite Images with Diffusion [77.34078223594686]
We propose a novel architecture for direct 3D scene generation by introducing diffusion models into 3D sparse representations and combining them with neural rendering techniques.
Specifically, our approach first generates texture colors at the point level for a given geometry using a 3D diffusion model; the colored points are then transformed into a scene representation in a feed-forward manner.
Experiments on two city-scale datasets show that our model generates photo-realistic street-view image sequences and cross-view urban scenes from satellite imagery.
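A toy rendering of the two-stage pipeline just described (not Sat2Scene's model): stage one iteratively "denoises" a color for each point of a fixed geometry, and stage two splats the colored points to an image in a single feed-forward pass. The geometry, the denoising target, and the orthographic splat are all stand-ins.

```python
# Toy two-stage sketch: per-point color "diffusion", then feed-forward splatting.
import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform(0, 1, size=(500, 3))   # given geometry: 3D points in a unit cube

# Stage 1 stand-in: iterative denoising of per-point colors (the real model is learned).
colors = rng.normal(size=(500, 3))
for _ in range(10):
    colors = 0.9 * colors + 0.1 * points    # toy "denoiser" pulls colors toward a target
colors = np.clip(colors, 0.0, 1.0)

# Stage 2 stand-in: feed-forward splat onto a 64x64 image via orthographic projection.
image = np.zeros((64, 64, 3))
u = (points[:, 0] * 63).astype(int)         # x -> column
v = (points[:, 1] * 63).astype(int)         # y -> row
image[v, u] = colors                        # nearest-point splat (no z-buffering)
```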
arXiv Detail & Related papers (2024-01-19T16:15:37Z)
- UrbanBIS: a Large-scale Benchmark for Fine-grained Urban Building Instance Segmentation [50.52615875873055]
UrbanBIS comprises six real urban scenes with 2.5 billion points, covering 10.78 square kilometers.
UrbanBIS provides semantic-level annotations on a rich set of urban objects, including buildings, vehicles, vegetation, roads, and bridges.
UrbanBIS is the first 3D dataset that introduces fine-grained building sub-categories.
arXiv Detail & Related papers (2023-05-04T08:01:38Z)
- Urban Radiance Fields [77.43604458481637]
We perform 3D reconstruction and novel view synthesis from data captured by scanning platforms commonly deployed for world mapping in urban outdoor environments.
Our approach extends Neural Radiance Fields, which has been demonstrated to synthesize realistic novel images for small scenes in controlled settings.
Each of the paper's three extensions to NeRF provides significant performance improvements in experiments on Street View data.
arXiv Detail & Related papers (2021-11-29T15:58:16Z)
- Deep Learning Guided Building Reconstruction from Satellite Imagery-derived Point Clouds [39.36437891978871]
We present a reliable and effective approach for building model reconstruction from the point clouds generated from satellite images.
Specifically, a deep-learning approach is adopted to distinguish the shape of building roofs in complex and noisy scenes.
As the first effort to address the public need for large-scale city model generation, the system is released as open-source software.
arXiv Detail & Related papers (2020-05-19T05:38:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.