LightCity: An Urban Dataset for Outdoor Inverse Rendering and Reconstruction under Multi-illumination Conditions
- URL: http://arxiv.org/abs/2602.01118v1
- Date: Sun, 01 Feb 2026 09:37:00 GMT
- Title: LightCity: An Urban Dataset for Outdoor Inverse Rendering and Reconstruction under Multi-illumination Conditions
- Authors: Jingjing Wang, Qirui Hu, Chong Bao, Yuke Zhu, Hujun Bao, Zhaopeng Cui, Guofeng Zhang
- Abstract summary: Inverse rendering in urban scenes is pivotal for applications like autonomous driving and digital twins. Yet, it faces significant challenges due to complex illumination conditions, including multi-illumination and indirect light and shadow effects. We present LightCity, a novel high-quality synthetic urban dataset featuring diverse illumination conditions with realistic indirect light and shadow effects.
- Score: 80.70675855203154
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inverse rendering in urban scenes is pivotal for applications like autonomous driving and digital twins. Yet, it faces significant challenges due to complex illumination conditions, including multi-illumination and indirect light and shadow effects. However, the impact of these challenges on intrinsic decomposition and 3D reconstruction has not been explored, owing to the lack of appropriate datasets. In this paper, we present LightCity, a novel high-quality synthetic urban dataset featuring diverse illumination conditions with realistic indirect light and shadow effects. LightCity encompasses over 300 sky maps with highly controllable illumination; more than 50K images spanning street-level and aerial perspectives at varying scales; and rich per-image properties such as depth, normals, material components, and direct and indirect light. Besides, we leverage LightCity to benchmark three fundamental tasks in urban environments and conduct a comprehensive analysis of these benchmarks, laying a robust foundation for advancing related research.
Related papers
- HeatMat: Simulation of City Material Impact on Urban Heat Island Effect [5.9791504486574425]
The Urban Heat Island (UHI) effect is a significant increase in temperature in urban environments compared to surrounding areas. Among the factors contributing to this effect are the properties of urban materials, which differ from those in rural areas. We propose HeatMat, an approach to analyze at high resolution the individual impact of urban materials on the UHI effect in a real city.
arXiv Detail & Related papers (2026-01-30T10:20:47Z) - Beyond a Single Light: A Large-Scale Aerial Dataset for Urban Scene Reconstruction Under Varying Illumination [27.470486341807316]
We introduce SkyLume, a dataset specifically designed for studying illumination-robust 3D reconstruction in urban scene modeling. We collect data from 10 urban regions, comprising more than 100K high-resolution UAV images. We provide per-scene LiDAR scans and accurate 3D ground truth for assessing depth, surface normals, and reconstruction quality under varying illumination.
arXiv Detail & Related papers (2025-12-16T08:47:56Z) - See through the Dark: Learning Illumination-affined Representations for Nighttime Occupancy Prediction [20.14637361013267]
LIAR is a novel framework that learns illumination-affined representations. Experiments on both real and synthetic datasets demonstrate the superior performance of LIAR under challenging nighttime scenarios.
arXiv Detail & Related papers (2025-05-27T02:40:49Z) - IDArb: Intrinsic Decomposition for Arbitrary Number of Input Views and Illuminations [64.07859467542664]
Capturing geometric and material information from images remains a fundamental challenge in computer vision and graphics. Traditional optimization-based methods often require hours of computational time to reconstruct geometry, material properties, and environmental lighting from dense multi-view inputs. We introduce IDArb, a diffusion-based model designed to perform intrinsic decomposition on an arbitrary number of images under varying illuminations.
arXiv Detail & Related papers (2024-12-16T18:52:56Z) - ReCap: Better Gaussian Relighting with Cross-Environment Captures [51.2614945509044]
We present ReCap, a multi-task system for accurate 3D object relighting in unseen environments. Specifically, ReCap jointly optimizes multiple lighting representations that share a common set of material attributes. This naturally harmonizes a coherent set of lighting representations around the mutual material attributes, exploiting commonalities and differences across varied object appearances. Together with a streamlined shading function and effective post-processing, ReCap outperforms all leading competitors on an expanded relighting benchmark.
arXiv Detail & Related papers (2024-12-10T14:15:32Z) - AerialGo: Walking-through City View Generation from Aerial Perspectives [48.53976414257845]
AerialGo is a framework that generates realistic walking-through city views from aerial images. By conditioning ground-view synthesis on accessible aerial data, AerialGo bypasses the privacy risks inherent in ground-level imagery. Experiments show that AerialGo significantly enhances ground-level realism and structural coherence.
arXiv Detail & Related papers (2024-11-29T08:14:07Z) - SUNDIAL: 3D Satellite Understanding through Direct, Ambient, and Complex
Lighting Decomposition [17.660328148833134]
SUNDIAL is a comprehensive approach to 3D reconstruction of satellite imagery using neural radiance fields.
We learn satellite scene geometry, illumination components, and sun direction in this single-model approach.
We evaluate the performance of SUNDIAL against existing NeRF-based techniques for satellite scene modeling.
arXiv Detail & Related papers (2023-12-24T02:46:44Z) - MatrixCity: A Large-scale City Dataset for City-scale Neural Rendering and Beyond [69.37319723095746]
We build a large-scale, comprehensive, and high-quality synthetic dataset for city-scale neural rendering research.
We develop a pipeline to easily collect aerial and street city views, accompanied by ground-truth camera poses and a range of additional data modalities.
The resulting pilot dataset, MatrixCity, contains 67K aerial images and 452K street images from two city maps with a total area of $28\,\mathrm{km}^2$.
arXiv Detail & Related papers (2023-09-28T16:06:02Z) - Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and an explicit mesh (reconstructed from the underlying neural field) to model secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z) - Neural Light Field Estimation for Street Scenes with Differentiable Virtual Object Insertion [129.52943959497665]
Existing works on outdoor lighting estimation typically simplify the scene lighting into an environment map.
We propose a neural approach that estimates the 5D HDR light field from a single image.
We show the benefits of our AR object insertion in an autonomous driving application.
arXiv Detail & Related papers (2022-08-19T17:59:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.