Seeing through Satellite Images at Street Views
- URL: http://arxiv.org/abs/2505.17001v1
- Date: Thu, 22 May 2025 17:57:32 GMT
- Title: Seeing through Satellite Images at Street Views
- Authors: Ming Qian, Bin Tan, Qiuyu Wang, Xianwei Zheng, Hanjiang Xiong, Gui-Song Xia, Yujun Shen, Nan Xue,
- Abstract summary: We formulate neural radiance field learning from paired images captured from satellite and street viewpoints. We tackle the challenges based on a task-specific observation that street-view specific elements, including the sky and illumination effects, are only visible in street-view panoramas. We present a novel approach, Sat2Density++, to accomplish photo-realistic street-view panorama rendering.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper studies the task of SatStreet-view synthesis, which aims to render photorealistic street-view panorama images and videos given any satellite image and specified camera positions or trajectories. We formulate the task as learning a neural radiance field from paired images captured from satellite and street viewpoints, which turns out to be a challenging learning problem due to the sparse-view nature of the data and the extremely large viewpoint changes between satellite and street-view images. We tackle these challenges based on a task-specific observation that street-view specific elements, including the sky and illumination effects, are only visible in street-view panoramas, and present a novel approach, Sat2Density++, that achieves photo-realistic street-view panorama rendering by modeling these street-view specific elements with neural networks. In the experiments, our method is evaluated on both urban and suburban scene datasets, demonstrating that Sat2Density++ is capable of rendering photorealistic street-view panoramas that are consistent across multiple views and faithful to the satellite image.
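The core rendering operation behind such a formulation is standard NeRF-style volume rendering along panorama rays. Below is a minimal, self-contained sketch of that operation, using a hypothetical analytic density-and-color field as a stand-in for the learned network; it illustrates the general technique, not the paper's implementation.

```python
import numpy as np

def field(xyz):
    """Hypothetical radiance field standing in for the learned network:
    returns per-point density and color."""
    sigma = np.exp(-np.linalg.norm(xyz, axis=-1))        # density decays with distance
    rgb = 0.5 + 0.5 * np.sin(xyz)                        # placeholder color
    return sigma, rgb

def render_panorama(height=64, width=128, n_samples=48, far=8.0):
    # One ray per equirectangular pixel: v is the polar angle, u the azimuth.
    v, u = np.meshgrid(np.linspace(0, np.pi, height),
                       np.linspace(-np.pi, np.pi, width), indexing="ij")
    dirs = np.stack([np.sin(v) * np.cos(u), np.sin(v) * np.sin(u), np.cos(v)], -1)

    t = np.linspace(1e-3, far, n_samples)                # sample depths along each ray
    pts = dirs[..., None, :] * t[:, None]                # (H, W, S, 3), camera at origin
    sigma, rgb = field(pts)

    # Standard NeRF alpha compositing along each ray.
    delta = far / n_samples
    alpha = 1.0 - np.exp(-sigma * delta)                 # per-sample opacity
    trans = np.cumprod(1.0 - alpha + 1e-10, axis=-1)
    trans = np.concatenate([np.ones_like(trans[..., :1]), trans[..., :-1]], -1)
    weights = alpha * trans                              # contribution of each sample
    return (weights[..., None] * rgb).sum(axis=-2)       # (H, W, 3) panorama

print(render_panorama().shape)  # (64, 128, 3)
```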
Related papers
- Satellite to GroundScape -- Large-scale Consistent Ground View Generation from Satellite Views
We propose a novel cross-view synthesis approach designed to ensure consistency across ground-view images generated from satellite views. Our method, based on a fixed latent diffusion model, introduces two conditioning modules: satellite-guided denoising and satellite-temporal denoising. We contribute a large-scale satellite-ground dataset containing over 100,000 perspective pairs to facilitate extensive ground scene or video generation.
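The abstract names the two conditioning modules but not their internals; the sketch below shows one generic way to condition a frozen ("fixed") denoising backbone on satellite features. All class and function names are hypothetical stand-ins, and the update rule is schematic rather than a real noise scheduler.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDenoiser(nn.Module):
    """Stand-in for a fixed (frozen) latent diffusion backbone."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(dim, dim, 3, padding=1), nn.SiLU(),
                                 nn.Conv2d(dim, dim, 3, padding=1))
    def forward(self, z, t):
        return self.net(z)          # timestep conditioning omitted for brevity

class SatelliteConditioner(nn.Module):
    """Hypothetical conditioning module: inject satellite features additively."""
    def __init__(self, dim=64):
        super().__init__()
        self.encode = nn.Conv2d(3, dim, 3, padding=1)
    def forward(self, z, sat):
        feat = F.interpolate(self.encode(sat), size=z.shape[-2:])
        return z + feat

@torch.no_grad()
def sample(backbone, cond, sat, steps=10, shape=(1, 64, 16, 32)):
    z = torch.randn(shape)
    for t in range(steps, 0, -1):
        noise = backbone(cond(z, sat), t)   # backbone stays frozen
        z = z - noise / steps               # schematic update, not a real scheduler
    return z

sat = torch.randn(1, 3, 64, 64)
print(sample(TinyDenoiser().eval(), SatelliteConditioner(), sat).shape)
```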
arXiv Detail & Related papers (2025-04-22T10:58:42Z)
- CrossViewDiff: A Cross-View Diffusion Model for Satellite-to-Street View Synthesis
CrossViewDiff is a cross-view diffusion model for satellite-to-street view synthesis.
To address the challenges posed by the large discrepancy across views, we design the satellite scene structure estimation and cross-view texture mapping modules.
To achieve a more comprehensive evaluation of the synthesis results, we additionally design a GPT-based scoring method.
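As one concrete illustration of what cross-view texture mapping can involve, the sketch below warps a top-down satellite image onto the ground-visible half of an equirectangular panorama under a flat-ground assumption. The camera height, ground sampling distance, and flat-ground model are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def satellite_to_panorama(sat, cam_height=2.0, gsd=0.1, pano_hw=(128, 256)):
    """Warp a top-down satellite image (H, W, 3) onto the ground-visible part
    of an equirectangular panorama, assuming flat ground under the camera."""
    H, W, _ = sat.shape
    ph, pw = pano_hw
    v, u = np.meshgrid(np.linspace(0, np.pi, ph),          # polar angle from zenith
                       np.linspace(-np.pi, np.pi, pw),     # azimuth
                       indexing="ij")
    below = np.cos(v) < -1e-6                              # rays that hit the ground
    t = np.where(below, -cam_height / np.minimum(np.cos(v), -1e-6), 0.0)
    x = t * np.sin(v) * np.sin(u)                          # east offset in meters
    y = t * np.sin(v) * np.cos(u)                          # north offset in meters
    col = np.clip(W / 2 + x / gsd, 0, W - 1).astype(int)   # clamp far points to border
    row = np.clip(H / 2 - y / gsd, 0, H - 1).astype(int)
    pano = np.zeros((ph, pw, 3), dtype=sat.dtype)
    pano[below] = sat[row[below], col[below]]              # texture-map ground pixels
    return pano

print(satellite_to_panorama(np.random.rand(256, 256, 3)).shape)  # (128, 256, 3)
```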
arXiv Detail & Related papers (2024-08-27T03:41:44Z)
- Leveraging BEV Paradigm for Ground-to-Aerial Image Synthesis
Ground-to-aerial image synthesis focuses on generating realistic aerial images from corresponding ground street view images. We introduce SkyDiffusion, a novel cross-view generation method for synthesizing aerial images from street view images. We also introduce a novel dataset, Ground2Aerial-3, designed for diverse ground-to-aerial image synthesis applications.
arXiv Detail & Related papers (2024-08-03T15:43:56Z)
- Sat2Scene: 3D Urban Scene Generation from Satellite Images with Diffusion
We propose a novel architecture for direct 3D scene generation by introducing diffusion models into 3D sparse representations and combining them with neural rendering techniques.
Specifically, our approach first generates texture colors at the point level for a given geometry using a 3D diffusion model; these are then transformed into a scene representation in a feed-forward manner.
Experiments on two city-scale datasets show that our model demonstrates proficiency in generating photo-realistic street-view image sequences and cross-view urban scenes from satellite imagery.
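A toy sketch of the per-point idea: a diffusion-style denoiser conditioned on fixed 3D point positions produces per-point colors. The architecture, step rule, and names below are illustrative assumptions, not Sat2Scene's model.

```python
import torch
import torch.nn as nn

class PointColorDenoiser(nn.Module):
    """Hypothetical denoiser: predicts noise on per-point colors,
    conditioned on point positions and the diffusion step."""
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3 + 3 + 1, hidden), nn.SiLU(),
                                 nn.Linear(hidden, 3))
    def forward(self, xyz, noisy_rgb, t):
        t_feat = torch.full_like(xyz[:, :1], float(t))   # scalar step as a feature
        return self.mlp(torch.cat([xyz, noisy_rgb, t_feat], dim=-1))

@torch.no_grad()
def sample_colors(model, xyz, steps=20):
    rgb = torch.randn(xyz.shape[0], 3)                   # start from noise
    for t in range(steps, 0, -1):
        rgb = rgb - model(xyz, rgb, t) / steps           # schematic reverse step
    return rgb.clamp(0, 1)

xyz = torch.rand(1024, 3)                                # fixed scene geometry (points)
print(sample_colors(PointColorDenoiser().eval(), xyz).shape)  # torch.Size([1024, 3])
```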
arXiv Detail & Related papers (2024-01-19T16:15:37Z)
- SUNDIAL: 3D Satellite Understanding through Direct, Ambient, and Complex Lighting Decomposition
SUNDIAL is a comprehensive approach to 3D reconstruction of satellite imagery using neural radiance fields.
We learn satellite scene geometry, illumination components, and sun direction in this single-model approach.
We evaluate the performance of SUNDIAL against existing NeRF-based techniques for satellite scene modeling.
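In the spirit of the decomposition named in the title, the sketch below splits per-pixel radiance into direct (sun-dependent, shadow-attenuated), ambient, and residual "complex" terms. This simplified shading model is an assumption for illustration, not SUNDIAL's exact formulation.

```python
import numpy as np

def shade(albedo, normal, sun_dir, shadow, ambient=0.2, complex_term=0.0):
    """albedo: (..., 3); normal, sun_dir: unit vectors; shadow in [0, 1]."""
    direct = np.clip((normal * sun_dir).sum(-1, keepdims=True), 0, None) * shadow
    return albedo * (direct + ambient + complex_term)    # per-pixel radiance

n = np.array([0.0, 0.0, 1.0])          # upward-facing surface
sun = np.array([0.0, 0.6, 0.8])        # unit sun direction
print(shade(np.array([0.7, 0.6, 0.5]), n, sun, shadow=0.5))
```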
arXiv Detail & Related papers (2023-12-24T02:46:44Z)
- Urban Radiance Fields
We perform 3D reconstruction and novel view synthesis from data captured by scanning platforms commonly deployed for world mapping in urban outdoor environments.
Our approach extends Neural Radiance Fields, which has been demonstrated to synthesize realistic novel images for small scenes in controlled settings, with methods for leveraging asynchronously captured lidar data, for handling exposure variation between captured images, and for using predicted image segmentations to supervise the density of rays pointing at the sky.
Each of these three extensions provides significant performance improvements in experiments on Street View data.
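As a concrete example of the lidar extension, one common form of depth supervision penalizes the difference between a ray's volume-rendered expected depth and the lidar depth. The sketch below shows that loss; variable names are illustrative, not the paper's code.

```python
import torch

def expected_depth(weights, t_vals):
    """weights: (R, S) compositing weights per ray; t_vals: (S,) sample depths."""
    return (weights * t_vals).sum(-1)

def lidar_depth_loss(weights, t_vals, lidar_depth):
    # Squared error between rendered expected depth and measured lidar depth.
    return ((expected_depth(weights, t_vals) - lidar_depth) ** 2).mean()

w = torch.softmax(torch.randn(4, 32), dim=-1)     # stand-in ray weights
t = torch.linspace(0.1, 10.0, 32)
print(lidar_depth_loss(w, t, torch.full((4,), 5.0)))
```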
arXiv Detail & Related papers (2021-11-29T15:58:16Z)
- Repopulating Street Scenes
We present a framework for automatically reconfiguring images of street scenes by populating, depopulating, or repopulating them with objects such as pedestrians or vehicles.
Applications of this method include anonymizing images to enhance privacy and generating data augmentations for perception tasks such as autonomous driving.
arXiv Detail & Related papers (2021-03-30T09:04:46Z)
- Coming Down to Earth: Satellite-to-Street View Synthesis for Geo-Localization
Cross-view image based geo-localization is notoriously challenging due to drastic viewpoint and appearance differences between the two domains.
We show that we can address this discrepancy explicitly by learning to synthesize realistic street views from satellite inputs.
We propose a novel multi-task architecture in which image synthesis and retrieval are considered jointly.
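A minimal sketch of the multi-task idea: a shared encoder feeds both a street-view decoder (synthesis) and a normalized embedding head (retrieval), trained with a combined reconstruction and embedding-similarity loss. The architecture, loss weighting, and the cosine retrieval term are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointModel(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, dim, 3, 2, 1), nn.ReLU(),
                                     nn.AdaptiveAvgPool2d(8))
        self.decoder = nn.Sequential(nn.Upsample(scale_factor=8),
                                     nn.Conv2d(dim, 3, 3, padding=1))
        self.embed = nn.Linear(dim * 8 * 8, 128)
    def forward(self, img):
        f = self.encoder(img)
        street = self.decoder(f)                          # synthesized street view
        emb = F.normalize(self.embed(f.flatten(1)), dim=-1)
        return street, emb

model = JointModel()
sat, street_gt = torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64)
street_pred, sat_emb = model(sat)
street_emb = model(street_gt)[1]                          # shared encoder embeds both views
loss = F.l1_loss(street_pred, street_gt) \
     + (1 - (sat_emb * street_emb).sum(-1)).mean()        # synthesis + retrieval terms
print(loss.item())
```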
arXiv Detail & Related papers (2021-03-11T17:40:59Z)
- Geometry-Guided Street-View Panorama Synthesis from Satellite Imagery
We present a new approach for synthesizing a novel street-view panorama given an overhead satellite image.
Our method generates a Google Street View-style omnidirectional panorama, as if it were captured from the same geographical location as the center of the satellite patch.
arXiv Detail & Related papers (2021-03-02T10:27:05Z)