Decoupling Degradations with Recurrent Network for Video Restoration in
Under-Display Camera
- URL: http://arxiv.org/abs/2403.05660v1
- Date: Fri, 8 Mar 2024 20:21:45 GMT
- Title: Decoupling Degradations with Recurrent Network for Video Restoration in
Under-Display Camera
- Authors: Chengxu Liu, Xuan Wang, Yuanting Fan, Shuai Li and Xueming Qian
- Abstract summary: Under-display camera (UDC) systems are the foundation of full-screen display devices in which the lens is mounted under the display.
We introduce a novel video restoration network, called D$^2$RNet, specifically designed for UDC systems.
It employs a set of Decoupling Attention Modules (DAM) that effectively separate the various video degradation factors.
- Score: 24.330832680171
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Under-display camera (UDC) systems are the foundation of full-screen display
devices in which the lens is mounted under the display. The pixel array of
light-emitting diodes used for display diffracts and attenuates incident light,
causing various degradations as the light intensity changes. Unlike general
video restoration which recovers video by treating different degradation
factors equally, video restoration for UDC systems is more challenging in that it
requires removing diverse degradations over time while preserving temporal
consistency. In this paper, we introduce a novel video restoration network,
called D$^2$RNet, specifically designed for UDC systems. It employs a set of
Decoupling Attention Modules (DAM) that effectively separate the various video
degradation factors. More specifically, a soft mask generation function is
proposed to decompose each frame into flare and haze components based on the diffraction
arising from incident light of different intensities, followed by the proposed
flare and haze removal components that leverage long- and short-term feature
learning to handle the respective degradations. Such a design offers a
targeted and effective solution for eliminating the various types of degradation in
UDC systems. We further extend our design to multiple scales to handle the
scale changes in degradation that often occur in long videos. To
demonstrate the superiority of D$^2$RNet, we propose a large-scale UDC video
benchmark by gathering HDR videos and generating realistically degraded videos
using the point spread function measured by a commercial UDC system. Extensive
quantitative and qualitative evaluations demonstrate the superiority of
D$^2$RNet compared to other state-of-the-art video restoration and UDC image
restoration methods. Code is available at
https://github.com/ChengxuLiu/DDRNet.git
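To make the two steps described in the abstract more concrete, below is a minimal, hypothetical Python sketch: it simulates UDC degradation by convolving an HDR frame with a measured point spread function (PSF) and clipping to the displayable range, then splits the degraded frame into flare- and haze-dominant components with a luminance-based soft mask. The function names, the sigmoid mask, and the 0.8 luminance threshold are illustrative assumptions, not the D$^2$RNet implementation; the authors' code is in the repository linked above.

```python
# Illustrative sketch only: the exact soft mask generation function and PSF
# handling in D^2RNet are not specified in the abstract. The names and the
# sigmoid/threshold choices below are assumptions made for illustration.
import numpy as np
from scipy.signal import fftconvolve


def simulate_udc_degradation(hdr_frame: np.ndarray, psf: np.ndarray) -> np.ndarray:
    """Convolve an HDR frame with a measured PSF per channel and clip to [0, 1]."""
    degraded = np.stack(
        [fftconvolve(hdr_frame[..., c], psf, mode="same") for c in range(hdr_frame.shape[-1])],
        axis=-1,
    )
    return np.clip(degraded, 0.0, 1.0)


def soft_mask_split(frame: np.ndarray, threshold: float = 0.8, sharpness: float = 20.0):
    """Split a degraded frame into flare- and haze-dominant parts with a soft mask.

    Bright, near-saturated regions (strong incident light) are attributed to
    flare; the remainder is treated as haze. The sigmoid keeps the split soft.
    """
    luminance = frame.mean(axis=-1, keepdims=True)        # rough intensity proxy
    mask = 1.0 / (1.0 + np.exp(-sharpness * (luminance - threshold)))
    flare_part = mask * frame                             # would feed a flare-removal branch
    haze_part = (1.0 - mask) * frame                      # would feed a haze-removal branch
    return flare_part, haze_part, mask


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hdr = rng.uniform(0.0, 4.0, size=(128, 128, 3))       # toy HDR frame
    window = np.hanning(15)
    psf = np.outer(window, window)                        # toy stand-in for a measured PSF
    psf /= psf.sum()
    degraded = simulate_udc_degradation(hdr, psf)
    flare, haze, mask = soft_mask_split(degraded)
    print(degraded.shape, flare.shape, haze.shape, round(float(mask.mean()), 3))
```

In the paper's terminology, the two masked components would correspond to the inputs of the flare and haze removal components inside each Decoupling Attention Module; here they are simply returned as arrays.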
Related papers
- DiffIR2VR-Zero: Zero-Shot Video Restoration with Diffusion-based Image Restoration Models [9.145545884814327]
This paper introduces a method for zero-shot video restoration using pre-trained image restoration diffusion models.
We show that our method achieves top performance in zero-shot video restoration.
Our technique works with any 2D restoration diffusion model, offering a versatile and powerful tool for video enhancement tasks without extensive retraining.
arXiv Detail & Related papers (2024-07-01T17:59:12Z)
- Deep Video Restoration for Under-Display Camera [98.17505013737446]
We propose a GAN-based generation pipeline to simulate the realistic UDC degradation process.
We build the first large-scale UDC video restoration dataset called PexelsUDC.
We propose a novel transformer-based baseline method that adaptively enhances degraded videos.
arXiv Detail & Related papers (2023-09-09T10:48:06Z)
- Cross-Consistent Deep Unfolding Network for Adaptive All-In-One Video Restoration [78.14941737723501]
We propose a Cross-consistent Deep Unfolding Network (CDUN) for All-In-One VR.
By orchestrating two cascading procedures, CDUN achieves adaptive processing for diverse degradations.
In addition, we introduce a window-based inter-frame fusion strategy to utilize information from more adjacent frames.
arXiv Detail & Related papers (2023-09-04T14:18:00Z)
- Modular Degradation Simulation and Restoration for Under-Display Camera [21.048590332029995]
Under-display camera (UDC) provides an elegant solution for full-screen smartphones.
UDC captured images suffer from severe degradation since sensors lie under the display.
We propose a modular network dubbed MPGNet trained using the generative adversarial network (GAN) framework for simulating UDC imaging.
arXiv Detail & Related papers (2022-09-23T07:36:07Z)
- UDC-UNet: Under-Display Camera Image Restoration via U-Shape Dynamic Network [13.406025621307132]
Under-Display Camera (UDC) has been widely exploited to help smartphones realize full screen display.
As the screen could inevitably affect the light propagation process, the images captured by the UDC system usually contain flare, haze, blur, and noise.
In this paper, we propose a new deep model, namely UDC-UNet, to address the UDC image restoration problem with the known Point Spread Function (PSF) in HDR scenes.
arXiv Detail & Related papers (2022-09-05T07:41:44Z)
- Recurrent Video Restoration Transformer with Guided Deformable Attention [116.1684355529431]
We propose RVRT, which processes local neighboring frames in parallel within a globally recurrent framework.
RVRT achieves state-of-the-art performance on benchmark datasets with balanced model size, testing memory and runtime.
arXiv Detail & Related papers (2022-06-05T10:36:09Z)
- Learning Trajectory-Aware Transformer for Video Super-Resolution [50.49396123016185]
Video super-resolution aims to restore a sequence of high-resolution (HR) frames from their low-resolution (LR) counterparts.
Existing approaches usually align and aggregate video frames from limited adjacent frames.
We propose a novel Trajectory-aware Transformer for Video Super-Resolution (TTVSR).
arXiv Detail & Related papers (2022-04-08T03:37:39Z)
- VRT: A Video Restoration Transformer [126.79589717404863]
Video restoration (e.g., video super-resolution) aims to restore high-quality frames from low-quality frames.
We propose a Video Restoration Transformer (VRT) with parallel frame prediction and long-range temporal dependency modelling abilities.
arXiv Detail & Related papers (2022-01-28T17:54:43Z)
- Removing Diffraction Image Artifacts in Under-Display Camera via Dynamic Skip Connection Network [80.67717076541956]
Under-Display Camera (UDC) systems provide a true bezel-less and notch-free viewing experience on smartphones.
In a typical UDC system, the pixel array attenuates and diffracts the incident light on the camera, resulting in significant image quality degradation.
In this work, we aim to analyze and tackle the aforementioned degradation problems.
arXiv Detail & Related papers (2021-04-19T18:41:45Z)
- Image Restoration for Under-Display Camera [14.209602483950322]
The new trend of full-screen devices encourages us to position a camera behind a screen.
Removing the bezel and centralizing the camera under the screen brings larger display-to-body ratio and enhances eye contact in video chat, but also causes image degradation.
In this paper, we focus on a newly-defined Under-Display Camera (UDC), as a novel real-world single image restoration problem.
arXiv Detail & Related papers (2020-03-10T17:09:00Z)