Real-Time Under-Display Cameras Image Restoration and HDR on Mobile
Devices
- URL: http://arxiv.org/abs/2211.14040v1
- Date: Fri, 25 Nov 2022 11:46:57 GMT
- Title: Real-Time Under-Display Cameras Image Restoration and HDR on Mobile
Devices
- Authors: Marcos V. Conde and Florin Vasluianu and Sabari Nathan and Radu
Timofte
- Abstract summary: The images captured by under-display cameras (UDCs) are degraded by the screen in front of them.
Deep learning methods for image restoration can significantly reduce the degradation of captured images.
We propose a lightweight model for blind UDC Image Restoration and HDR, and we also provide a benchmark comparing the performance and runtime of different methods on smartphones.
- Score: 81.61356052916855
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The new trend of full-screen devices implies positioning the camera behind
the screen to bring a larger display-to-body ratio, enhance eye contact, and
provide a notch-free viewing experience on smartphones, TVs, or tablets. On the
other hand, the images captured by under-display cameras (UDCs) are degraded by
the screen in front of them. Deep learning methods for image restoration can
significantly reduce the degradation of captured images, providing satisfying
results for the human eye. However, most proposed solutions are not reliable or
efficient enough to be used in real time on mobile devices.
In this paper, we aim to solve this image restoration problem using efficient
deep learning methods capable of processing FHD images in real-time on
commercial smartphones while providing high-quality results. We propose a
lightweight model for blind UDC Image Restoration and HDR, and we also provide
a benchmark comparing the performance and runtime of different methods on
smartphones. Our models are competitive on UDC benchmarks while using 4x fewer
operations than other methods. To the best of our knowledge, ours is the first work to
approach and analyze this real-world single image restoration problem from the
efficiency and production point of view.
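The efficiency claim above (roughly 4x fewer operations while remaining competitive) can be made concrete with a back-of-envelope MAC (multiply-accumulate) count. The sketch below is purely illustrative and is not the paper's actual architecture: it compares a standard 3x3 convolution against a depthwise-separable convolution, a common building block in lightweight restoration models, on an FHD-sized feature map. All function names and the channel/kernel choices are hypothetical.

```python
# Illustrative MAC comparison (not the paper's architecture):
# standard 3x3 conv vs. depthwise-separable conv on an FHD feature map.

def conv_macs(h, w, c_in, c_out, k=3):
    """MACs of a standard k x k convolution over an h x w feature map."""
    return h * w * c_in * c_out * k * k

def dw_separable_macs(h, w, c_in, c_out, k=3):
    """MACs of a depthwise (k x k per channel) conv followed by a
    pointwise (1 x 1) conv -- the depthwise-separable factorization."""
    return h * w * c_in * k * k + h * w * c_in * c_out

H, W = 1080, 1920  # FHD resolution
std = conv_macs(H, W, 32, 32)
sep = dw_separable_macs(H, W, 32, 32)
print(f"standard: {std / 1e9:.2f} GMACs, "
      f"separable: {sep / 1e9:.2f} GMACs, "
      f"reduction: {std / sep:.1f}x")
```

Even this single-layer toy shows how factorizing convolutions yields the multi-fold reduction in operations that makes real-time FHD inference on commercial smartphones plausible.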
Related papers
- MobileMEF: Fast and Efficient Method for Multi-Exposure Fusion [0.6261722394141346]
We propose a new method for multi-exposure fusion based on an encoder-decoder deep learning architecture.
Our model is capable of processing 4K resolution images in less than 2 seconds on mid-range smartphones.
arXiv Detail & Related papers (2024-08-15T05:03:14Z)
- Self-Supervised Learning for Real-World Super-Resolution from Dual and Multiple Zoomed Observations [61.448005005426666]
We consider two challenging issues in reference-based super-resolution (RefSR) for smartphones.
We propose a novel self-supervised learning approach for real-world RefSR from observations at dual and multiple camera zooms.
arXiv Detail & Related papers (2024-05-03T15:20:30Z)
- Empowering Visually Impaired Individuals: A Novel Use of Apple Live Photos and Android Motion Photos [3.66237529322911]
We advocate for the use of Apple Live Photos and Android Motion Photos technologies.
Our findings reveal that both Live Photos and Motion Photos outperform single-frame images in common visual assisting tasks.
arXiv Detail & Related papers (2023-09-14T20:46:35Z)
- Perceptual Image Enhancement for Smartphone Real-Time Applications [60.45737626529091]
We propose LPIENet, a lightweight network for perceptual image enhancement.
Our model can deal with noise artifacts, diffraction artifacts, blur, and HDR overexposure.
Our model can process 2K resolution images in under 1 second on mid-range commercial smartphones.
arXiv Detail & Related papers (2022-10-24T19:16:33Z)
- Towards Efficient and Scale-Robust Ultra-High-Definition Image Demoireing [71.62289021118983]
We present an efficient baseline model ESDNet for tackling 4K moire images, wherein we build a semantic-aligned scale-aware module to address the scale variation of moire patterns.
Our approach outperforms state-of-the-art methods by a large margin while being much more lightweight.
arXiv Detail & Related papers (2022-07-20T14:20:52Z)
- Dual-reference Training Data Acquisition and CNN Construction for Image Super-Resolution [33.388234549922025]
We propose a novel method to capture a large set of realistic LR$\sim$HR image pairs using real cameras.
Our innovation is to shoot images displayed on an ultra-high quality screen at different resolutions.
Experimental results show that training a super-resolution CNN on our LR$\sim$HR dataset yields superior restoration performance on real-world images compared to training it on existing datasets.
arXiv Detail & Related papers (2021-08-05T03:31:50Z)
- Removing Diffraction Image Artifacts in Under-Display Camera via Dynamic Skip Connection Network [80.67717076541956]
Under-Display Camera (UDC) systems provide a true bezel-less and notch-free viewing experience on smartphones.
In a typical UDC system, the pixel array attenuates and diffracts the incident light on the camera, resulting in significant image quality degradation.
In this work, we aim to analyze and tackle the aforementioned degradation problems.
arXiv Detail & Related papers (2021-04-19T18:41:45Z)
- Exploiting Raw Images for Real-Scene Super-Resolution [105.18021110372133]
We study the problem of real-scene single image super-resolution to bridge the gap between synthetic data and real captured images.
We propose a method to generate more realistic training data by mimicking the imaging process of digital cameras.
We also develop a two-branch convolutional neural network to exploit the radiance information originally recorded in raw images.
arXiv Detail & Related papers (2021-02-02T16:10:15Z)
- Image Restoration for Under-Display Camera [14.209602483950322]
The new trend of full-screen devices encourages us to position a camera behind a screen.
Removing the bezel and centralizing the camera under the screen brings larger display-to-body ratio and enhances eye contact in video chat, but also causes image degradation.
In this paper, we focus on a newly-defined Under-Display Camera (UDC), as a novel real-world single image restoration problem.
arXiv Detail & Related papers (2020-03-10T17:09:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.