Zero-Reference Image Restoration for Under-Display Camera of UAV
- URL: http://arxiv.org/abs/2202.06283v1
- Date: Sun, 13 Feb 2022 11:12:00 GMT
- Title: Zero-Reference Image Restoration for Under-Display Camera of UAV
- Authors: Zhuoran Zheng, Xiuyi Jia and Yunliang Zhuang
- Abstract summary: We propose a new method that enhances the visual experience by restoring the texture and color of degraded images.
Our method trains a lightweight network to estimate a low-rank affine grid on the input image.
Our model can perform high-quality recovery of images of arbitrary resolution in real time.
- Score: 10.498049147922258
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The exposed cameras of a UAV can shake, shift, or even malfunction under the influence of harsh weather, while add-on devices (Dupont lines) are very vulnerable to damage.
We can place a low-cost T-OLED overlay around the camera to protect it, but this also introduces image degradation.
In particular, temperature variations in the atmosphere can create mist that adsorbs to the T-OLED, causing secondary disasters (i.e., more severe image degradation) during the UAV's filming process.
To solve the image degradation caused by overlaying T-OLEDs, in this paper we propose a new method that enhances the visual experience by improving the texture and color of images.
Specifically, our method trains a lightweight network to estimate a low-rank affine grid on the input image, and then uses the grid to enhance the input image at block granularity.
The advantages of our method are that no reference image is required and that the loss function is built from visual experience.
In addition, our model can perform high-quality recovery of images of arbitrary resolution in real time.
Finally, the limitations of our model and of the collected datasets (covering both daytime and nighttime scenes) are discussed.
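As a rough illustration of the abstract's block-granularity enhancement, the sketch below applies a coarse grid of per-block 3x4 affine color matrices to an image. The grid shape, the block partitioning, and the identity-grid check are illustrative assumptions rather than the paper's implementation; in the paper, a lightweight network predicts a low-rank grid and is trained with a zero-reference loss derived from visual experience.

```python
import numpy as np

def apply_affine_grid(img, grid):
    """Enhance an image block-by-block with per-block affine color maps.

    img:  (H, W, 3) float array in [0, 1].
    grid: (gh, gw, 3, 4) array; grid[i, j] maps homogeneous RGB
          [R, G, B, 1] to enhanced RGB for block (i, j).
    """
    h, w, _ = img.shape
    gh, gw = grid.shape[:2]
    bh, bw = -(-h // gh), -(-w // gw)  # ceil division for block sizes
    homo = np.concatenate([img, np.ones((h, w, 1), img.dtype)], axis=-1)
    out = np.empty_like(img)
    for i in range(gh):
        for j in range(gw):
            ys = slice(i * bh, (i + 1) * bh)
            xs = slice(j * bw, (j + 1) * bw)
            out[ys, xs] = homo[ys, xs] @ grid[i, j].T  # (bh, bw, 3)
    return np.clip(out, 0.0, 1.0)

# An identity grid leaves the image unchanged; a learned grid would
# brighten and recolor each block independently.
img = np.random.rand(64, 64, 3).astype(np.float32)
grid = np.tile(np.eye(3, 4, dtype=np.float32), (4, 4, 1, 1))
assert np.allclose(apply_affine_grid(img, grid), img, atol=1e-6)
```

Because each block needs only a dozen coefficients, the grid can be predicted at low resolution and then applied to an input of arbitrary resolution, which is consistent with the abstract's real-time claim.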
Related papers
- NiteDR: Nighttime Image De-Raining with Cross-View Sensor Cooperative Learning for Dynamic Driving Scenes [49.92839157944134]
In nighttime driving scenes, insufficient and uneven lighting shrouds the scenes in darkness, resulting in degraded image quality and visibility.
We develop an image de-raining framework tailored for rainy nighttime driving scenes.
It aims to remove rain artifacts, enrich scene representation, and restore useful information.
arXiv Detail & Related papers (2024-02-28T09:02:33Z)
- ScatterNeRF: Seeing Through Fog with Physically-Based Inverse Neural Rendering [83.75284107397003]
We introduce ScatterNeRF, a neural rendering method which renders scenes and decomposes the fog-free background.
We propose a disentangled representation for the scattering volume and the scene objects, and learn the scene reconstruction with physics-inspired losses.
We validate our method by capturing multi-view In-the-Wild data and controlled captures in a large-scale fog chamber.
arXiv Detail & Related papers (2023-05-03T13:24:06Z)
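For context on the physics behind ScatterNeRF's fog decomposition, the sketch below implements the classic single-scattering (Koschmieder) image formation model with a homogeneous medium. The constant scattering coefficient and airlight are simplifying assumptions; ScatterNeRF instead learns a spatially varying scattering volume with physics-inspired losses.

```python
import numpy as np

def koschmieder(clear, depth, beta=0.05, airlight=0.8):
    """Single-scattering fog model:
    observed = clear * t + airlight * (1 - t), t = exp(-beta * depth)."""
    t = np.exp(-beta * depth)[..., None]  # (H, W, 1) transmittance
    return clear * t + airlight * (1.0 - t)

clear = np.random.rand(32, 32, 3)
depth = np.full((32, 32), 20.0)  # 20 m to every surface point
foggy = koschmieder(clear, depth)
```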
- Diffusion in the Dark: A Diffusion Model for Low-Light Text Recognition [78.50328335703914]
Diffusion in the Dark (DiD) is a diffusion model for low-light image reconstruction for text recognition.
We demonstrate that DiD, without any task-specific optimization, can outperform SOTA low-light methods in low-light text recognition on real images.
arXiv Detail & Related papers (2023-03-07T23:52:51Z)
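DiD builds on denoising diffusion models; the sketch below shows only the generic DDPM forward-noising step with a toy noise schedule, not DiD's low-light conditioning or its text-recognition pipeline.

```python
import numpy as np

def diffuse(x0, t, alpha_bar):
    """DDPM forward process: x_t = sqrt(a_t) * x_0 + sqrt(1 - a_t) * eps,
    where a_t is the cumulative product of the noise schedule."""
    eps = np.random.randn(*x0.shape)
    a = alpha_bar[t]
    return np.sqrt(a) * x0 + np.sqrt(1.0 - a) * eps, eps

alpha_bar = np.cumprod(np.linspace(0.9999, 0.98, 1000))  # toy schedule
x0 = np.random.rand(16, 16, 3) * 2.0 - 1.0  # image scaled to [-1, 1]
xt, eps = diffuse(x0, t=500, alpha_bar=alpha_bar)
```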
- See Blue Sky: Deep Image Dehaze Using Paired and Unpaired Training Images [73.23687409870656]
We propose a cycle generative adversarial network to construct a novel end-to-end image dehaze model.
We adopt outdoor image datasets to train our model, including a real-world unpaired image dataset and a paired image dataset.
Based on the cycle structure, our model combines four kinds of loss functions to constrain the result: adversarial loss, cycle-consistency loss, photorealism loss, and paired L1 loss.
arXiv Detail & Related papers (2022-10-14T07:45:33Z)
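The four loss terms listed in the See Blue Sky summary are typically combined as a weighted sum. The composition below is an assumption: the weights are invented, the adversarial term uses a least-squares form, and total variation stands in for the photorealism term.

```python
import torch
import torch.nn.functional as F

def dehaze_loss(fake, real_pair, rec, inp, d_fake,
                w_adv=1.0, w_cyc=10.0, w_photo=0.5, w_l1=5.0):
    """Weighted sum of adversarial, cycle-consistency, photorealism
    (approximated by total variation), and paired L1 losses."""
    adv = F.mse_loss(d_fake, torch.ones_like(d_fake))  # LSGAN generator term
    cyc = F.l1_loss(rec, inp)                          # cycle consistency
    tv = (fake[:, :, 1:, :] - fake[:, :, :-1, :]).abs().mean() + \
         (fake[:, :, :, 1:] - fake[:, :, :, :-1]).abs().mean()
    l1 = F.l1_loss(fake, real_pair)                    # paired supervision
    return w_adv * adv + w_cyc * cyc + w_photo * tv + w_l1 * l1

fake = torch.rand(1, 3, 64, 64, requires_grad=True)
loss = dehaze_loss(fake, torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64),
                   torch.rand(1, 3, 64, 64), torch.rand(1, 1))
```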
- Invertible Network for Unpaired Low-light Image Enhancement [78.33382003460903]
We propose to leverage an invertible network to enhance a low-light image in the forward process and degrade the normal-light one inversely with unpaired learning.
In addition to the adversarial loss, we design various loss functions to ensure the stability of training and preserve more image details.
We present a progressive self-guided enhancement process for low-light images and achieve favorable performance against the SOTAs.
arXiv Detail & Related papers (2021-12-24T17:00:54Z)
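The invertible idea in the entry above, one set of weights that enhances in the forward pass and degrades in the inverse pass, can be illustrated with an additive coupling layer, a standard invertible building block; the paper's actual architecture and loss design are more elaborate.

```python
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    """Split channels in half; the second half is shifted by a function
    of the first, which makes the layer exactly invertible."""
    def __init__(self, ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch // 2, ch // 2, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch // 2, ch // 2, 3, padding=1))

    def forward(self, x):   # e.g. low-light -> normal-light
        a, b = x.chunk(2, dim=1)
        return torch.cat([a, b + self.net(a)], dim=1)

    def inverse(self, y):   # e.g. normal-light -> low-light
        a, b = y.chunk(2, dim=1)
        return torch.cat([a, b - self.net(a)], dim=1)

layer = AdditiveCoupling(8)
x = torch.randn(1, 8, 16, 16)
assert torch.allclose(layer.inverse(layer(x)), x, atol=1e-5)
```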
- ISP-Agnostic Image Reconstruction for Under-Display Cameras [30.49487402693437]
Under-display cameras have been proposed in recent years as a way to reduce the form factor of mobile devices while maximizing the screen area.
However, placing the camera behind the screen results in significant image distortions, including loss of contrast, blur, noise, color shift, scattering artifacts, and reduced light sensitivity.
We propose an image-restoration pipeline that is ISP-agnostic, i.e. it can be combined with any legacy ISP to produce a final image that matches the appearance of regular cameras using the same ISP.
arXiv Detail & Related papers (2021-11-02T11:30:13Z)
- TSN-CA: A Two-Stage Network with Channel Attention for Low-Light Image Enhancement [11.738203047278848]
We propose a Two-Stage Network with Channel Attention (denoted as TSN-CA) to enhance the brightness of the low-light image.
We conduct extensive experiments to demonstrate that our method achieves excellent results in brightness enhancement as well as denoising, detail preservation, and halo artifact elimination.
arXiv Detail & Related papers (2021-10-06T03:20:18Z)
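Channel attention, as named in TSN-CA, commonly follows the squeeze-and-excitation pattern sketched below; the reduction ratio and the placement within the two-stage network are generic assumptions, not TSN-CA's exact design.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze (global average pool) -> bottleneck MLP -> per-channel
    sigmoid gates that reweight the feature maps."""
    def __init__(self, ch, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(ch, ch // reduction), nn.ReLU(),
            nn.Linear(ch // reduction, ch), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))  # (N, C) channel weights
        return x * w[:, :, None, None]

feat = torch.randn(1, 16, 32, 32)
gated = ChannelAttention(16)(feat)
```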
- TöRF: Time-of-Flight Radiance Fields for Dynamic Scene View Synthesis [32.878225196378374]
We introduce a neural representation based on an image formation model for continuous-wave ToF cameras.
We show that this approach improves robustness of dynamic scene reconstruction to erroneous calibration and large motions.
arXiv Detail & Related papers (2021-09-30T17:12:59Z)
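TöRF's neural representation is grounded in continuous-wave ToF image formation, where depth follows from the phase shift of a modulated illumination signal. A minimal sketch of that relation, assuming a 30 MHz modulation frequency:

```python
import numpy as np

C = 2.998e8  # speed of light in m/s

def cw_tof_depth(phase, f_mod=30e6):
    """Depth from measured phase: d = c * phase / (4 * pi * f_mod).
    The estimate wraps (is ambiguous) beyond c / (2 * f_mod)."""
    return C * phase / (4.0 * np.pi * f_mod)

print(cw_tof_depth(np.pi / 2))  # a quarter-cycle shift -> ~1.25 m
```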
- Removing Diffraction Image Artifacts in Under-Display Camera via Dynamic Skip Connection Network [80.67717076541956]
Under-Display Camera (UDC) systems provide a true bezel-less and notch-free viewing experience on smartphones.
In a typical UDC system, the pixel array attenuates and diffracts the incident light on the camera, resulting in significant image quality degradation.
In this work, we aim to analyze and tackle the aforementioned degradation problems.
arXiv Detail & Related papers (2021-04-19T18:41:45Z)
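The attenuation-plus-diffraction degradation described in the entry above is commonly modeled as intensity scaling followed by convolution with the display's point spread function, plus sensor noise. The PSF and constants below are toy assumptions; the paper characterizes the real diffraction pattern of the pixel array.

```python
import numpy as np
from scipy.signal import fftconvolve

def udc_degrade(scene, psf, gamma=0.2, noise_sigma=0.01):
    """UDC forward model: attenuate by gamma, diffract by the PSF,
    then add Gaussian sensor noise."""
    blurred = fftconvolve(scene * gamma, psf[..., None], mode="same")
    return blurred + np.random.normal(0.0, noise_sigma, scene.shape)

psf = np.zeros((9, 9))
psf[4, ::2] = 0.05  # toy diffraction side lobes along one axis
psf[4, 4] = 0.6     # central peak
psf /= psf.sum()
degraded = udc_degrade(np.random.rand(64, 64, 3), psf)
```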
- Deep Atrous Guided Filter for Image Restoration in Under Display Cameras [18.6418313982586]
Under Display Cameras present a promising opportunity for phone manufacturers to achieve bezel-free displays by positioning the camera behind semi-transparent OLED screens.
Such imaging systems suffer from severe image degradation due to light attenuation and diffraction effects.
We present Deep Atrous Guided Filter (DAGF), a two-stage, end-to-end approach for image restoration in UDC systems.
arXiv Detail & Related papers (2020-08-14T07:54:52Z)
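DAGF extends the guided filter; for reference, the sketch below is the classic single-channel guided filter of He et al., which fits a local linear model src ≈ a * guide + b in each window. The radius and regularization are illustrative choices, and DAGF's learned atrous variant differs substantially.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-3):
    """Edge-preserving smoothing of src steered by guide."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    cov_gs = uniform_filter(guide * src, size) - mean_g * mean_s
    var_g = uniform_filter(guide * guide, size) - mean_g ** 2
    a = cov_gs / (var_g + eps)  # local slope
    b = mean_s - a * mean_g     # local intercept
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

guide = np.random.rand(64, 64)
smooth = guided_filter(guide, guide + 0.1 * np.random.randn(64, 64))
```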
- Low-light Image Restoration with Short- and Long-exposure Raw Pairs [14.643663950015334]
We propose a new low-light image restoration method by using the complementary information of short- and long-exposure images.
We first propose a novel data generation method to synthesize realistic short- and long-exposure raw images.
Then, we design a new long-short-exposure fusion network (LSFNet) to deal with the problems of low-light image fusion.
arXiv Detail & Related papers (2020-07-01T03:22:26Z)
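The complementarity of short- and long-exposure frames can be shown with a naive fusion rule: brightness-match the short frame and substitute it wherever the long frame clips. LSFNet learns this fusion end-to-end; the hard threshold and fixed exposure ratio below are assumptions.

```python
import numpy as np

def fuse_exposures(short_img, long_img, ratio, thresh=0.9):
    """Use the long exposure (clean shadows) except where it is
    saturated, where the brightness-matched short exposure fills in."""
    matched = np.clip(short_img * ratio, 0.0, 1.0)
    mask = (long_img >= thresh).astype(long_img.dtype)
    return mask * matched + (1.0 - mask) * long_img

short_img = np.random.rand(32, 32, 3) * 0.1  # dark but unclipped
long_img = np.clip(short_img * 10.0, 0.0, 1.0)
fused = fuse_exposures(short_img, long_img, ratio=10.0)
```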
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.