Under-Display Camera Image Restoration with Scattering Effect
- URL: http://arxiv.org/abs/2308.04163v1
- Date: Tue, 8 Aug 2023 09:50:44 GMT
- Title: Under-Display Camera Image Restoration with Scattering Effect
- Authors: Binbin Song, Xiangyu Chen, Shuning Xu, and Jiantao Zhou
- Abstract summary: Under-display camera (UDC) provides consumers with a full-screen visual experience without any obstruction due to notches or punched holes.
In this work, we address the UDC image restoration problem with the specific consideration of the scattering effect caused by the display.
We explicitly model the scattering effect by treating the display as a piece of homogeneous scattering medium.
- Score: 17.55639152160472
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The under-display camera (UDC) provides consumers with a full-screen visual
experience without any obstruction due to notches or punched holes. However,
the semi-transparent nature of the display inevitably introduces severe
degradation into UDC images. In this work, we address the UDC image restoration
problem with the specific consideration of the scattering effect caused by the
display. We explicitly model the scattering effect by treating the display as a
piece of homogeneous scattering medium. With the physical model of the
scattering effect, we improve the image formation pipeline for the image
synthesis to construct a realistic UDC dataset with ground truths. To suppress
the scattering effect for the eventual UDC image recovery, a two-branch
restoration network is designed. More specifically, the scattering branch
leverages the global modeling capability of channel-wise self-attention to
estimate the parameters of the scattering effect from degraded images, while
the image branch exploits the local representation advantage of CNNs to recover
clear scenes, implicitly guided by the scattering branch. Extensive experiments
are conducted on both real-world and synthesized data, demonstrating the
superiority of the proposed method over the state-of-the-art UDC restoration
techniques. The source code and dataset are available at
https://github.com/NamecantbeNULL/SRUDC.
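The abstract's idea of treating the display as a homogeneous scattering medium can be illustrated with the standard scattering image formation model, I = J * t + A * (1 - t) with transmission t = exp(-beta * d). This is a minimal sketch under that standard model; the paper's exact formulation and parameterization may differ, and all names below are illustrative:

```python
import numpy as np

def scatter_degrade(clear, beta, depth, airlight):
    """Degrade a clear image with the standard homogeneous scattering model:
    I = J * t + A * (1 - t), where the transmission t = exp(-beta * depth).
    beta (scattering coefficient), depth, and airlight are illustrative
    parameters, not the paper's exact formulation."""
    t = np.exp(-beta * depth)          # fraction of light that passes unscattered
    return clear * t + airlight * (1.0 - t)

# toy example: a uniform gray scene behind a thin scattering layer
clear = np.full((4, 4, 3), 0.2)
degraded = scatter_degrade(clear, beta=1.0, depth=0.5, airlight=1.0)
```

With a bright airlight, the degraded image is pulled toward the airlight value, which matches the hazy, low-contrast look of UDC captures; a restoration network would invert this mapping by estimating the scattering parameters.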
Related papers
- MRIR: Integrating Multimodal Insights for Diffusion-based Realistic Image Restoration [17.47612023350466]
We propose MRIR, a diffusion-based restoration method with multimodal insights.
At the textual level, we harness a pre-trained multimodal large language model to infer meaningful semantic information from low-quality images.
At the visual level, we mainly focus on pixel-level control, utilizing a Pixel-level Processor and ControlNet to control spatial structures.
arXiv Detail & Related papers (2024-07-04T04:55:14Z) - Mitigating Data Consistency Induced Discrepancy in Cascaded Diffusion Models for Sparse-view CT Reconstruction [4.227116189483428]
This study introduces a novel Cascaded Diffusion with Discrepancy Mitigation framework.
It includes the low-quality image generation in latent space and the high-quality image generation in pixel space.
It minimizes computational costs by moving some inference steps from pixel space to latent space.
arXiv Detail & Related papers (2024-03-14T12:58:28Z) - Segmentation Guided Sparse Transformer for Under-Display Camera Image Restoration [91.65248635837145]
Under-Display Camera (UDC) is an emerging technology that achieves full-screen display via hiding the camera under the display panel.
In this paper, we observe that when using the Vision Transformer for UDC degraded image restoration, the global attention samples a large amount of redundant information and noise.
We propose a Segmentation Guided Sparse Transformer method (SGSFormer) for the task of restoring high-quality images from UDC degraded images.
arXiv Detail & Related papers (2024-03-09T13:11:59Z) - CNN Injected Transformer for Image Exposure Correction [20.282217209520006]
Previous exposure correction methods based on convolutions often produce exposure deviation in images.
We propose a CNN Injected Transformer (CIT) to harness the individual strengths of CNN and Transformer simultaneously.
In addition to the hybrid architecture design for exposure correction, we apply a set of carefully formulated loss functions to improve the spatial coherence and rectify potential color deviations.
arXiv Detail & Related papers (2023-09-08T14:53:00Z) - Improving Lens Flare Removal with General Purpose Pipeline and Multiple Light Sources Recovery [69.71080926778413]
Flare artifacts can degrade image visual quality and downstream computer vision tasks.
Current methods do not consider automatic exposure and tone mapping in the image signal processing (ISP) pipeline.
We propose a solution that improves lens flare removal by revisiting the ISP and designing a more reliable light sources recovery strategy.
arXiv Detail & Related papers (2023-08-31T04:58:17Z) - DiffBIR: Towards Blind Image Restoration with Generative Diffusion Prior [70.46245698746874]
We present DiffBIR, a general restoration pipeline that can handle different blind image restoration tasks.
DiffBIR decouples the blind image restoration problem into two stages: 1) degradation removal: removing image-independent content; 2) information regeneration: generating the lost image content.
In the first stage, we use restoration modules to remove degradations and obtain high-fidelity restored results.
For the second stage, we propose IRControlNet that leverages the generative ability of latent diffusion models to generate realistic details.
arXiv Detail & Related papers (2023-08-29T07:11:52Z) - Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation unfolding network (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
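The illumination-map-based enhancement idea summarized above can be sketched with a minimal Retinex-style compensation step. This is a generic single-step illustration, not DCUNet's learned unfolding iterations; the max-channel illumination estimate and all names below are illustrative assumptions:

```python
import numpy as np

def illumination_compensate(low_light, eps=1e-3):
    """Retinex-style compensation: estimate a per-pixel illumination map
    as the channel-wise maximum, then divide it out to brighten the scene.
    A generic sketch, not DCUNet's learned unfolding process."""
    illum = np.clip(low_light.max(axis=-1, keepdims=True), eps, 1.0)
    return np.clip(low_light / illum, 0.0, 1.0)

# toy example: a dim scene with a fixed color ratio per pixel
dark = np.tile(np.array([0.02, 0.04, 0.05]), (2, 2, 1))
bright = illumination_compensate(dark)
```

Dividing by the illumination map preserves per-pixel color ratios while lifting brightness; an unfolding network alternates such compensation steps with learned refinements of the illumination estimate.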
arXiv Detail & Related papers (2023-08-10T07:53:06Z) - DR2: Diffusion-based Robust Degradation Remover for Blind Face Restoration [66.01846902242355]
Blind face restoration usually synthesizes degraded low-quality data with a pre-defined degradation model for training.
It is expensive and infeasible to include every type of degradation to cover real-world cases in the training data.
We propose Robust Degradation Remover (DR2) to first transform the degraded image to a coarse but degradation-invariant prediction, then employ an enhancement module to restore the coarse prediction to a high-quality image.
arXiv Detail & Related papers (2023-03-13T06:05:18Z) - UDC-UNet: Under-Display Camera Image Restoration via U-Shape Dynamic Network [13.406025621307132]
Under-Display Camera (UDC) has been widely exploited to help smartphones realize full screen display.
As the screen could inevitably affect the light propagation process, the images captured by the UDC system usually contain flare, haze, blur, and noise.
In this paper, we propose a new deep model, namely UDC-UNet, to address the UDC image restoration problem with the known Point Spread Function (PSF) in HDR scenes.
arXiv Detail & Related papers (2022-09-05T07:41:44Z) - Removing Diffraction Image Artifacts in Under-Display Camera via Dynamic Skip Connection Network [80.67717076541956]
Under-Display Camera (UDC) systems provide a true bezel-less and notch-free viewing experience on smartphones.
In a typical UDC system, the pixel array attenuates and diffracts the incident light on the camera, resulting in significant image quality degradation.
In this work, we aim to analyze and tackle the aforementioned degradation problems.
arXiv Detail & Related papers (2021-04-19T18:41:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.