ScatterNeRF: Seeing Through Fog with Physically-Based Inverse Neural
Rendering
- URL: http://arxiv.org/abs/2305.02103v1
- Date: Wed, 3 May 2023 13:24:06 GMT
- Title: ScatterNeRF: Seeing Through Fog with Physically-Based Inverse Neural
Rendering
- Authors: Andrea Ramazzina, Mario Bijelic, Stefanie Walz, Alessandro Sanvito,
Dominik Scheuble and Felix Heide
- Abstract summary: We introduce ScatterNeRF, a neural rendering method which renders foggy scenes and decomposes the fog-free background from the participating media.
We propose a disentangled representation for the scattering volume and the scene objects, and learn the scene reconstruction with physics-inspired losses.
We validate our method by capturing multi-view In-the-Wild data and controlled captures in a large-scale fog chamber.
- Score: 83.75284107397003
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vision in adverse weather conditions, whether it be snow, rain, or
fog, is challenging. In these scenarios, scattering and attenuation severely
degrade image quality. Handling such inclement weather conditions, however, is
essential to operate autonomous vehicles, drones and robotic applications where
human performance is impeded the most. A large body of work explores removing
weather-induced image degradations with dehazing methods. Most methods rely on
single images as input and struggle to generalize from synthetic
fully-supervised training approaches or to generate high fidelity results from
unpaired real-world datasets. With data as the bottleneck, and most of today's
training data captured in good weather with inclement weather as an outlier, we
rely on an inverse rendering approach to reconstruct the scene
content. We introduce ScatterNeRF, a neural rendering method which adequately
renders foggy scenes and decomposes the fog-free background from the
participating media, exploiting the multiple views from a short automotive
sequence without the need for a large training data corpus. Instead, the
rendering approach is optimized on the multi-view scene itself, which can be
typically captured by an autonomous vehicle, robot or drone during operation.
Specifically, we propose a disentangled representation for the scattering
volume and the scene objects, and learn the scene reconstruction with
physics-inspired losses. We validate our method by capturing multi-view
In-the-Wild data and controlled captures in a large-scale fog chamber.
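As a rough illustration of the disentangled representation described above (not the authors' implementation), the minimal sketch below composites a single camera ray from two hypothetical fields, one for the fog-free scene (sigma_obj, rgb_obj) and one for the scattering medium (sigma_fog, rgb_fog). The density-proportional split of each sample's color contribution and all names are assumptions of this sketch.

```python
import numpy as np

def composite_two_fields(sigma_obj, rgb_obj, sigma_fog, rgb_fog, deltas):
    """Composite one camera ray from scene samples and scattering-medium samples.

    sigma_obj, sigma_fog : (N,)   hypothetical per-sample densities of each field
    rgb_obj, rgb_fog     : (N, 3) hypothetical per-sample colors of each field
    deltas               : (N,)   distances between consecutive ray samples
    Returns the rendered foggy pixel and a fog-free (scene-only) pixel.
    """
    eps = 1e-10
    sigma_total = sigma_obj + sigma_fog
    alpha = 1.0 - np.exp(-sigma_total * deltas)                    # opacity per sample
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha)))[:-1]  # transmittance to sample
    weights = trans * alpha

    # Split each sample's contribution between scene and medium in proportion
    # to their densities (an assumed convention for this sketch).
    w_obj = weights * sigma_obj / (sigma_total + eps)
    w_fog = weights * sigma_fog / (sigma_total + eps)
    foggy_pixel = (w_obj[:, None] * rgb_obj + w_fog[:, None] * rgb_fog).sum(axis=0)

    # Fog-free view: re-composite the same samples using only the scene field.
    alpha_obj = 1.0 - np.exp(-sigma_obj * deltas)
    trans_obj = np.cumprod(np.concatenate(([1.0], 1.0 - alpha_obj)))[:-1]
    clear_pixel = ((trans_obj * alpha_obj)[:, None] * rgb_obj).sum(axis=0)
    return foggy_pixel, clear_pixel
```

In an actual optimization, both fields would be neural networks queried at the sample locations, the foggy rendering would be supervised against the captured multi-view images, and physics-inspired regularizers would constrain the scattering field; those losses are not sketched here.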
Related papers
- SGD: Street View Synthesis with Gaussian Splatting and Diffusion Prior [53.52396082006044]
Current methods struggle to maintain rendering quality at viewpoints that deviate significantly from the training viewpoints.
This issue stems from the sparse training views captured by a fixed camera on a moving vehicle.
We propose a novel approach that enhances the capacity of 3DGS by leveraging a prior from a Diffusion Model.
arXiv Detail & Related papers (2024-03-29T09:20:29Z)
- NiteDR: Nighttime Image De-Raining with Cross-View Sensor Cooperative Learning for Dynamic Driving Scenes [49.92839157944134]
In nighttime driving scenes, insufficient and uneven lighting shrouds the scenes in darkness, resulting in degraded image quality and visibility.
We develop an image de-raining framework tailored for rainy nighttime driving scenes.
It aims to remove rain artifacts, enrich scene representation, and restore useful information.
arXiv Detail & Related papers (2024-02-28T09:02:33Z)
- Learning Robust Multi-Scale Representation for Neural Radiance Fields from Unposed Images [65.41966114373373]
We present an improved solution to the neural image-based rendering problem in computer vision.
The proposed approach can synthesize a realistic image of the scene from a novel viewpoint at test time.
arXiv Detail & Related papers (2023-11-08T08:18:23Z)
- ViewNeRF: Unsupervised Viewpoint Estimation Using Category-Level Neural Radiance Fields [35.89557494372891]
We introduce ViewNeRF, a Neural Radiance Field-based viewpoint estimation method.
Our method uses an analysis-by-synthesis approach, combining a conditional NeRF with a viewpoint predictor and a scene encoder (a simplified sketch of this loop follows this entry).
Our model shows competitive results on synthetic and real datasets.
arXiv Detail & Related papers (2022-12-01T11:16:11Z)
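The ViewNeRF entry above describes an analysis-by-synthesis loop: a viewpoint predictor and a scene encoder are trained jointly with a conditional renderer by comparing the synthesized image against the input. The PyTorch sketch below is a hypothetical illustration of that loop only; the module internals, 6-DoF pose output, 64x64 image size, and MSE objective are placeholder assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class ViewpointPredictor(nn.Module):
    """Regresses a 6-DoF viewpoint from an input image (placeholder MLP)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256),
                                 nn.ReLU(), nn.Linear(256, 6))

    def forward(self, image):
        return self.net(image)

class SceneEncoder(nn.Module):
    """Maps an image to a per-scene latent code (placeholder MLP)."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256),
                                 nn.ReLU(), nn.Linear(256, latent_dim))

    def forward(self, image):
        return self.net(image)

class ConditionalRenderer(nn.Module):
    """Stand-in for a conditional NeRF: maps (pose, scene code) to an image."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(6 + latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 3 * 64 * 64))

    def forward(self, pose, code):
        return self.net(torch.cat([pose, code], dim=-1)).view(-1, 3, 64, 64)

def training_step(images, predictor, encoder, renderer, optimizer):
    pose = predictor(images)          # estimate the viewpoint of each image
    code = encoder(images)            # infer a latent scene code
    rendering = renderer(pose, code)  # synthesize from the predicted viewpoint
    loss = nn.functional.mse_loss(rendering, images)  # analysis by synthesis
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A training run would repeat training_step over image batches with one optimizer covering all three modules, e.g. torch.optim.Adam(list(predictor.parameters()) + list(encoder.parameters()) + list(renderer.parameters())).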
- DiffDreamer: Towards Consistent Unsupervised Single-view Scene Extrapolation with Conditional Diffusion Models [91.94566873400277]
DiffDreamer is an unsupervised framework capable of synthesizing novel views depicting a long camera trajectory.
We show that image-conditioned diffusion models can effectively perform long-range scene extrapolation while preserving consistency significantly better than prior GAN-based methods.
arXiv Detail & Related papers (2022-11-22T10:06:29Z)
- Neural Scene Representation for Locomotion on Structured Terrain [56.48607865960868]
We propose a learning-based method to reconstruct the local terrain for a mobile robot traversing urban environments.
Using a stream of depth measurements from the onboard cameras and the robot's trajectory, the method estimates the topography in the robot's vicinity.
We propose a 3D reconstruction model that faithfully reconstructs the scene, despite the noisy measurements and large amounts of missing data coming from the blind spots of the camera arrangement.
arXiv Detail & Related papers (2022-06-16T10:45:17Z)
- ZeroScatter: Domain Transfer for Long Distance Imaging and Vision through Scattering Media [26.401067775059154]
We present ZeroScatter, a domain transfer method for converting RGB-only captures taken in adverse weather into clear daytime scenes.
We assess the proposed method on real-world captures; it outperforms existing monocular de-scattering approaches by 2.8 dB PSNR on controlled fog chamber measurements (a PSNR computation sketch follows this entry).
arXiv Detail & Related papers (2021-02-11T04:41:17Z)
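The 2.8 dB figure above refers to peak signal-to-noise ratio. As a point of reference (not the paper's evaluation code), a standard PSNR computation between a de-scattered prediction and a clear reference, assuming float images scaled to [0, 1], looks like this:

```python
import numpy as np

def psnr(prediction, reference, peak=1.0):
    """Peak signal-to-noise ratio in dB between two images of the same shape."""
    mse = np.mean((prediction.astype(np.float64) - reference.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# A reported gain of 2.8 dB means psnr(method_a, clear) - psnr(method_b, clear) == 2.8
```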