Neural Camera Simulators
- URL: http://arxiv.org/abs/2104.05237v1
- Date: Mon, 12 Apr 2021 07:06:27 GMT
- Title: Neural Camera Simulators
- Authors: Hao Ouyang, Zifan Shi, Chenyang Lei, Ka Lung Law and Qifeng Chen
- Abstract summary: We present a controllable camera simulator based on deep neural networks to synthesize raw image data under different camera settings.
The proposed simulator includes an exposure module that utilizes the principle of modern lens designs for correcting the luminance level.
It also contains a noise module using the noise level function and an aperture module with adaptive attention to simulate the side effects on noise and defocus blur.
- Score: 39.4597887323609
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a controllable camera simulator based on deep neural networks to
synthesize raw image data under different camera settings, including exposure
time, ISO, and aperture. The proposed simulator includes an exposure module
that utilizes the principle of modern lens designs for correcting the luminance
level. It also contains a noise module using the noise level function and an
aperture module with adaptive attention to simulate the side effects on noise
and defocus blur. To facilitate the learning of the simulator model, we collect
a dataset of 10,000 raw images of 450 scenes with different exposure settings.
Quantitative experiments and qualitative comparisons show that our approach
outperforms relevant baselines in raw data synthesis on multiple
cameras. Furthermore, the camera simulator enables various applications,
including large-aperture enhancement, HDR, auto exposure, and data augmentation
for training local feature detectors. Our work represents the first attempt to
simulate a camera sensor's behavior leveraging both the advantage of
traditional raw sensor features and the power of data-driven deep learning.
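The exposure and noise modules described in the abstract can be illustrated with a minimal sketch. This is a hedged illustration, not the authors' actual network: it assumes a purely physical luminance model (exposure time and ISO scale linearly, aperture area scales with the inverse square of the f-number) and a simple heteroscedastic noise level function; the function name, parameter names, and NLF coefficients are all hypothetical.

```python
import numpy as np

def simulate_raw(raw, src, dst, nlf=(0.01, 1e-4), rng=None):
    """Resynthesize a normalized raw image under new camera settings.

    raw      : source raw image, values in [0, 1]
    src, dst : dicts with 'exposure' (seconds), 'iso', 'aperture' (f-number)
    nlf      : (shot, read) coefficients of a noise level function,
               variance(x) = shot * x + read  (hypothetical values)
    """
    rng = rng or np.random.default_rng(0)
    # Luminance scaling: exposure and ISO act linearly; aperture area
    # scales as 1/N^2, so halving the f-number quadruples the light.
    gain = (dst["exposure"] / src["exposure"]) \
         * (dst["iso"] / src["iso"]) \
         * (src["aperture"] / dst["aperture"]) ** 2
    clean = np.clip(raw * gain, 0.0, 1.0)
    # Heteroscedastic noise drawn from the noise level function.
    shot, read = nlf
    sigma = np.sqrt(shot * clean + read)
    noisy = clean + rng.normal(0.0, 1.0, clean.shape) * sigma
    return np.clip(noisy, 0.0, 1.0)

# Doubling the exposure time of a mid-gray patch roughly doubles its level.
img = np.full((4, 4), 0.25)
out = simulate_raw(img, {"exposure": 1/100, "iso": 100, "aperture": 4.0},
                        {"exposure": 1/50,  "iso": 100, "aperture": 4.0})
```

In the actual paper these relationships are learned from data rather than hard-coded, which lets the simulator absorb lens- and sensor-specific deviations from the idealized model above.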
Related papers
- Let There Be Light: Robust Lensless Imaging Under External Illumination With Deep Learning [7.368155086339779]
Lensless cameras relax the design constraints of traditional cameras by shifting image formation from analog optics to digital post-processing.
While new camera designs and applications can be enabled, lensless imaging is very sensitive to unwanted interference (other sources, noise, etc.)
arXiv Detail & Related papers (2024-09-25T09:24:53Z)
- Redundancy-Aware Camera Selection for Indoor Scene Neural Rendering [54.468355408388675]
We build a similarity matrix that incorporates both the spatial diversity of the cameras and the semantic variation of the images.
We apply a diversity-based sampling algorithm to optimize the camera selection.
We also develop a new dataset, IndoorTraj, which includes long and complex camera movements captured by humans in virtual indoor environments.
arXiv Detail & Related papers (2024-09-11T08:36:49Z)
- Make Explicit Calibration Implicit: Calibrate Denoiser Instead of the Noise Model [83.9497193551511]
We introduce Lighting Every Darkness (LED), which is effective regardless of the digital gain or the camera sensor.
LED eliminates the need for explicit noise model calibration, instead utilizing an implicit fine-tuning process that allows quick deployment and requires minimal data.
LED also allows researchers to focus more on deep learning advancements while still utilizing sensor engineering benefits.
arXiv Detail & Related papers (2023-08-07T10:09:11Z)
- Dynamic Depth-Supervised NeRF for Multi-View RGB-D Operating Room Images [1.6451639748812472]
We show that NeRF can be used to render synthetic views from arbitrary camera positions in the operating room.
We show that regularisation with depth supervision from RGB-D sensor data results in higher image quality.
Our results show the potential of a dynamic NeRF for view synthesis in the OR and stress the relevance of depth supervision in a clinical setting.
arXiv Detail & Related papers (2022-11-22T17:45:06Z)
- High Dynamic Range and Super-Resolution from Raw Image Bursts [52.341483902624006]
This paper introduces the first approach to reconstruct high-resolution, high-dynamic range color images from raw photographic bursts captured by a handheld camera with exposure bracketing.
The proposed algorithm is fast, with low memory requirements compared to state-of-the-art learning-based approaches to image restoration.
Experiments demonstrate its excellent performance with super-resolution factors of up to $\times 4$ on real photographs taken in the wild with hand-held cameras.
arXiv Detail & Related papers (2022-07-29T13:31:28Z)
- Learning Spatially Varying Pixel Exposures for Motion Deblurring [49.07867902677453]
We present a novel approach of leveraging spatially varying pixel exposures for motion deblurring.
Our work illustrates the promising role that focal-plane sensor-processors can play in the future of computational imaging.
arXiv Detail & Related papers (2022-04-14T23:41:49Z)
- Enhanced Frame and Event-Based Simulator and Event-Based Video Interpolation Network [1.4095425725284465]
We present a new, advanced event simulator that can produce realistic scenes recorded by a camera rig with an arbitrary number of sensors located at fixed offsets.
It includes a new frame-based image sensor model with realistic image quality reduction effects, and an extended DVS model with more accurate characteristics.
We show that data generated by our simulator can be used to train our new model, leading to reconstructed images on public datasets of equivalent or better quality than the state of the art.
arXiv Detail & Related papers (2021-12-17T08:27:13Z)
- DriveGAN: Towards a Controllable High-Quality Neural Simulation [147.6822288981004]
We introduce a novel high-quality neural simulator referred to as DriveGAN.
DriveGAN achieves controllability by disentangling different components without supervision.
We train DriveGAN on multiple datasets, including 160 hours of real-world driving data.
arXiv Detail & Related papers (2021-04-30T15:30:05Z)
- Noise-Aware Merging of High Dynamic Range Image Stacks without Camera Calibration [14.715418812634939]
We show that an unbiased estimation of comparable variance can be obtained with a simpler Poisson noise estimator.
We demonstrate this empirically for four different cameras, ranging from a smartphone camera to a full-frame mirrorless camera.
arXiv Detail & Related papers (2020-09-16T23:26:17Z)
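The Poisson noise estimator mentioned in the last entry lends itself to a short sketch. This is a hedged illustration of the general idea, not the paper's method: it assumes shot-noise-limited linear raw data in electron counts, so the variance of each measurement approximately equals its expected value, and inverse-variance weights for merging an exposure stack follow directly. The function name and variable names are hypothetical.

```python
import numpy as np

def merge_hdr_poisson(stack, exposures):
    """Noise-aware merge of linear raw frames with different exposure times.

    Assumes Poisson (shot-noise-limited) data in electron counts:
    Var(counts) ~= E[counts].  Each frame is scaled to a common radiance
    (counts / exposure) and combined with inverse-variance weights.
    """
    stack = np.asarray(stack, dtype=float)        # shape (n_frames, H, W)
    t = np.asarray(exposures, dtype=float)[:, None, None]
    radiance = stack / t                          # scale to common radiance
    # Var(counts) ~= counts  =>  Var(radiance) ~= counts / t^2
    var = np.maximum(stack, 1.0) / t**2           # floor avoids divide-by-zero
    w = 1.0 / var
    return (w * radiance).sum(axis=0) / w.sum(axis=0)

# Two frames of the same scene radiance at 10 ms and 40 ms exposures.
frames = [np.full((2, 2), 100.0), np.full((2, 2), 400.0)]
hdr = merge_hdr_poisson(frames, [0.01, 0.04])
```

The appeal of this estimator, as the entry notes, is that it needs no per-camera calibration: the merge weights fall out of the Poisson assumption alone, with longer exposures naturally dominating where both frames are unsaturated.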
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.