Polarized Color Image Denoising using Pocoformer
- URL: http://arxiv.org/abs/2207.00215v1
- Date: Fri, 1 Jul 2022 05:52:14 GMT
- Title: Polarized Color Image Denoising using Pocoformer
- Authors: Zhuoxiao Li, Haiyang Jiang, Yinqiang Zheng
- Abstract summary: Polarized color photography provides both visual textures and object surface information in one snapshot.
The use of the directional polarizing filter array results in a much lower photon count and SNR than conventional color imaging.
We propose a learning-based approach to simultaneously restore clean signals and precise polarization information.
- Score: 42.171036556122644
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Polarized color photography provides both visual textures and object
surface information in a single snapshot. However, the use of the directional
polarizing filter array results in a much lower photon count and SNR than
conventional color imaging. This characteristic inherently leads to unpleasantly
noisy images and destroys polarization analysis performance. It is a challenge for
traditional image processing pipelines because the physical constraints exerted
implicitly across the channels are excessively complicated. To address this issue,
we propose a learning-based approach that simultaneously restores clean signals
and precise polarization information. A real-world polarized color image dataset
of paired raw short-exposure noisy and long-exposure reference images is captured
to support the learning-based pipeline. Moreover, we embrace the development of
vision Transformers and propose a hybrid Transformer model for polarized color
image denoising, namely PoCoformer, for better restoration performance. Extensive
experiments demonstrate the effectiveness of the proposed method, and key factors
that affect the results are analyzed.
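For context on the polarization analysis the abstract refers to, the sketch below shows how the Stokes parameters, degree of linear polarization (DoLP), and angle of linear polarization (AoLP) are conventionally computed from the four directional channels (0°, 45°, 90°, 135°) of a polarizing filter array. The 2x2 mosaic layout and all function names are illustrative assumptions, not details from the paper; the point is that S1 and S2 are differences of already photon-starved measurements, so sensor noise propagates directly into the polarization estimates, which motivates joint denoising and polarization restoration.

```python
import numpy as np

def split_pfa_channels(raw):
    """Split a raw polarizing-filter-array mosaic into four angle channels.

    Assumes an illustrative 2x2 super-pixel layout (0°/45° on the top row,
    135°/90° on the bottom row); actual sensor layouts may differ.
    """
    i000 = raw[0::2, 0::2]
    i045 = raw[0::2, 1::2]
    i135 = raw[1::2, 0::2]
    i090 = raw[1::2, 1::2]
    return i000, i045, i090, i135

def polarization_params(i000, i045, i090, i135, eps=1e-8):
    """Per-pixel Stokes parameters, DoLP, and AoLP from the four channels."""
    s0 = 0.5 * (i000 + i045 + i090 + i135)        # total intensity
    s1 = i000 - i090                              # linear polarization, 0° vs 90°
    s2 = i045 - i135                              # linear polarization, 45° vs 135°
    dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)    # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)               # angle of linear polarization (rad)
    return s0, s1, s2, dolp, aolp

# Toy noisy mosaic: low photon counts make the differences s1 and s2, and hence
# DoLP/AoLP, very noisy -- the degradation the paper aims to undo.
raw = np.random.poisson(20.0, size=(8, 8)).astype(np.float32)
s0, s1, s2, dolp, aolp = polarization_params(*split_pfa_channels(raw))
print(dolp.mean(), aolp.mean())
```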
Related papers
- A Nerf-Based Color Consistency Method for Remote Sensing Images [0.5735035463793009]
We propose a NeRF-based method of color consistency for multi-view images, which weaves image features together using implicit expressions, and then re-illuminates feature space to generate a fusion image with a new perspective.
Experimental results show that the synthesized image generated by our method has an excellent visual effect and smooth color transitions at the edges.
arXiv Detail & Related papers (2024-11-08T13:26:07Z) - Retinex-RAWMamba: Bridging Demosaicing and Denoising for Low-Light RAW Image Enhancement [71.13353154514418]
Low-light image enhancement, particularly in cross-domain tasks such as mapping from the raw domain to the sRGB domain, remains a significant challenge.
We present a novel Mamba scanning mechanism, called RAWMamba, to effectively handle raw images with different CFAs.
We also present a Retinex Decomposition Module (RDM) grounded in Retinex prior, which decouples illumination from reflectance to facilitate more effective denoising and automatic non-linear exposure correction.
arXiv Detail & Related papers (2024-09-11T06:12:03Z) - Video Frame Interpolation for Polarization via Swin-Transformer [9.10220649654041]
Video Frame Interpolation (VFI) has been extensively explored and demonstrated, yet its application to polarization remains largely unexplored.
This study proposes a multi-stage and multi-scale network called Swin-VFI based on the Swin-Transformer.
Experimental results demonstrate our approach's superior reconstruction accuracy across all tasks.
arXiv Detail & Related papers (2024-06-17T09:48:54Z) - NeISF: Neural Incident Stokes Field for Geometry and Material Estimation [50.588983686271284]
Multi-view inverse rendering is the problem of estimating the scene parameters such as shapes, materials, or illuminations from a sequence of images captured under different viewpoints.
We propose Neural Incident Stokes Fields (NeISF), a multi-view inverse framework that reduces ambiguities using polarization cues.
arXiv Detail & Related papers (2023-11-22T06:28:30Z) - ITRE: Low-light Image Enhancement Based on Illumination Transmission Ratio Estimation [10.26197196078661]
Noise, artifacts, and over-exposure are significant challenges in the field of low-light image enhancement.
We propose a novel Retinex-based method, called ITRE, which suppresses noise and artifacts at their origin within the model.
Extensive experiments demonstrate the effectiveness of our approach in suppressing noise, preventing artifacts, and controlling over-exposure level simultaneously.
arXiv Detail & Related papers (2023-10-08T13:22:20Z) - Enhancing Low-Light Images Using Infrared-Encoded Images [81.8710581927427]
Previous works mainly focus on low-light images captured in the visible spectrum using pixel-wise loss.
We propose a novel approach to increase the visibility of images captured under low-light environments by removing the in-camera infrared (IR) cut-off filter.
arXiv Detail & Related papers (2023-07-09T08:29:19Z) - Deep Demosaicing for Polarimetric Filter Array Cameras [7.39819574829298]
We propose a novel CNN-based model which directly demosaics the raw camera image to a per-pixel Stokes vector.
We introduce a new method, employing a consumer LCD screen, to effectively acquire real-world data for training.
arXiv Detail & Related papers (2022-11-24T17:41:50Z) - Two-Step Color-Polarization Demosaicking Network [14.5106375775521]
TCPDNet is a two-step color-polarization demosaicking network.
TCPDNet outperforms existing methods in terms of the image quality of polarization images and the accuracy of Stokes parameters.
arXiv Detail & Related papers (2022-09-13T14:28:18Z) - Low-Light Image Enhancement with Normalizing Flow [92.52290821418778]
In this paper, we investigate modeling this one-to-many relationship via a proposed normalizing flow model: an invertible network that takes the low-light images/features as the condition and learns to map the distribution of normally exposed images into a Gaussian distribution.
The experimental results on the existing benchmark datasets show our method achieves better quantitative and qualitative results, obtaining better-exposed illumination, less noise and artifact, and richer colors.
arXiv Detail & Related papers (2021-09-13T12:45:08Z) - Learning Spatial and Spatio-Temporal Pixel Aggregations for Image and Video Denoising [104.59305271099967]
We present a pixel aggregation network and learn the pixel sampling and averaging strategies for image denoising.
We develop a pixel aggregation network for video denoising to sample pixels across the spatial-temporal space.
Our method is able to solve the misalignment issues caused by large motion in dynamic scenes.
arXiv Detail & Related papers (2021-01-26T13:00:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.