A Novel Intrinsic Image Decomposition Method to Recover Albedo for
Aerial Images in Photogrammetry Processing
- URL: http://arxiv.org/abs/2204.04142v1
- Date: Fri, 8 Apr 2022 15:50:52 GMT
- Title: A Novel Intrinsic Image Decomposition Method to Recover Albedo for
Aerial Images in Photogrammetry Processing
- Authors: Shuang Song and Rongjun Qin
- Abstract summary: Surface albedos recovered from photogrammetric images can facilitate downstream applications in VR/AR/MR and digital twins.
Standard photogrammetric pipelines are suboptimal for these applications because their textures are directly derived from images.
We propose an image formation model for outdoor aerial imagery under natural illumination conditions.
We then derive the inverse model to estimate the albedo, utilizing typical photogrammetric products as an initial approximation of the geometry.
- Score: 3.556015072520384
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recovering surface albedos from photogrammetric images for realistic
rendering and synthetic environments can greatly facilitate downstream
applications in VR/AR/MR and digital twins. The textured 3D models from
standard photogrammetric pipelines are suboptimal for these applications
because their textures are derived directly from images, which intrinsically
embed spatially and temporally varying environmental lighting information,
such as sun illumination and direction; this causes the same surface to look
different across images and makes such models less realistic when used in 3D
rendering under synthetic lighting. On the other hand, since albedo images are
less affected by environmental lighting, they can, in turn, benefit basic
photogrammetric processing. In this paper, we attack the problem of albedo
recovery for aerial images in the photogrammetric process and demonstrate the
benefit of albedo recovery for photogrammetry data processing through enhanced
feature matching and dense matching. To this end, we propose an image
formation model for outdoor aerial imagery under natural illumination
conditions; we then derive the inverse model to estimate the albedo, utilizing
typical photogrammetric products as an initial approximation of the geometry.
The estimated albedo images are evaluated on intrinsic image decomposition,
relighting, feature matching, and dense matching/point cloud generation. Both
synthetic and real-world experiments demonstrate that our method outperforms
existing methods and can enhance photogrammetric processing.
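The inversion described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a simple Lambertian formation model in which a pixel intensity is albedo times a shading term built from sun and sky irradiance, a surface normal from the approximate photogrammetric geometry, a shadow mask, and an ambient-occlusion factor. All function names, parameters, and constants below are illustrative.

```python
import numpy as np

# Hypothetical Lambertian formation model:
#   I = A * (E_sun * max(0, n . l) * V + E_sky * AO)
# where A is albedo, n the per-pixel normal (from photogrammetric
# geometry), l the sun direction, V a binary cast-shadow mask, and
# AO an ambient-occlusion term for diffuse sky light.

def shading(normals, sun_dir, shadow_mask, ambient_occ,
            e_sun=1.0, e_sky=0.2):
    """Per-pixel shading from approximate geometry and illumination."""
    ndotl = np.clip(normals @ sun_dir, 0.0, None)        # (H, W)
    return e_sun * ndotl * shadow_mask + e_sky * ambient_occ

def recover_albedo(image, normals, sun_dir, shadow_mask, ambient_occ,
                   eps=1e-3):
    """Invert the formation model, A = I / S, guarding against
    division by near-zero shading (e.g. fully shadowed pixels)."""
    s = shading(normals, sun_dir, shadow_mask, ambient_occ)
    return image / np.maximum(s, eps)[..., None]

# Toy example: a flat, fully lit, unoccluded patch with the sun at
# zenith. Shading is 1.0 * 1 * 1 + 0.2 * 1 = 1.2, so an observed
# intensity of 0.6 inverts to an albedo of 0.5.
h, w = 4, 4
normals = np.zeros((h, w, 3))
normals[..., 2] = 1.0                         # all normals point up
sun_dir = np.array([0.0, 0.0, 1.0])           # sun at zenith
image = np.full((h, w, 3), 0.6)               # observed RGB
albedo = recover_albedo(image, normals, sun_dir,
                        np.ones((h, w)), np.ones((h, w)))
```

In practice the paper estimates the sun direction and geometry from the photogrammetric products themselves rather than assuming them known, but the division-by-shading structure of the inverse model is the same.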
Related papers
- A General Albedo Recovery Approach for Aerial Photogrammetric Images through Inverse Rendering [7.874736360019618]
This paper presents a general image formation model for albedo recovery from typical aerial photogrammetric images under natural illuminations.
Our approach builds on the fact that both the sun illumination and scene geometry are estimable in aerial photogrammetry.
arXiv Detail & Related papers (2024-09-04T18:58:32Z) - Holistic Inverse Rendering of Complex Facade via Aerial 3D Scanning [38.72679977945778]
We use multi-view aerial images to reconstruct the geometry, lighting, and material of facades using neural signed distance fields (SDFs)
The experiment demonstrates the superior quality of our method on facade holistic inverse rendering, novel view synthesis, and scene editing compared to state-of-the-art baselines.
arXiv Detail & Related papers (2023-11-20T15:03:56Z) - Relightify: Relightable 3D Faces from a Single Image via Diffusion
Models [86.3927548091627]
We present the first approach to use diffusion models as a prior for highly accurate 3D facial BRDF reconstruction from a single image.
In contrast to existing methods, we directly acquire the observed texture from the input image, thus resulting in a more faithful and consistent estimation.
arXiv Detail & Related papers (2023-05-10T11:57:49Z) - TensoIR: Tensorial Inverse Rendering [51.57268311847087]
TensoIR is a novel inverse rendering approach based on tensor factorization and neural fields.
It builds on TensoRF, a state-of-the-art approach for radiance field modeling.
arXiv Detail & Related papers (2023-04-24T21:39:13Z) - Shape, Pose, and Appearance from a Single Image via Bootstrapped
Radiance Field Inversion [54.151979979158085]
We introduce a principled end-to-end reconstruction framework for natural images, where accurate ground-truth poses are not available.
We leverage an unconditional 3D-aware generator, to which we apply a hybrid inversion scheme where a model produces a first guess of the solution.
Our framework can de-render an image in as few as 10 steps, enabling its use in practical scenarios.
arXiv Detail & Related papers (2022-11-21T17:42:42Z) - DIB-R++: Learning to Predict Lighting and Material with a Hybrid
Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable renderers.
In this work, we propose DIB-R++, a hybrid differentiable renderer which supports these effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIB-R++ is highly performant due to its compact and expressive model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z) - Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image
Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z) - Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z) - Leveraging Photogrammetric Mesh Models for Aerial-Ground Feature Point
Matching Toward Integrated 3D Reconstruction [19.551088857830944]
Integration of aerial and ground images has proven to be an efficient approach to enhancing surface reconstruction in urban environments.
Previous studies based on geometry-aware image rectification have alleviated this problem.
We propose a novel approach: leveraging photogrammetric mesh models for aerial-ground image matching.
arXiv Detail & Related papers (2020-02-21T01:47:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.