Learning to See Through Dazzle
- URL: http://arxiv.org/abs/2402.15919v2
- Date: Mon, 4 Mar 2024 22:42:28 GMT
- Title: Learning to See Through Dazzle
- Authors: Xiaopeng Peng, Erin F. Fleet, Abbie T. Watnik, Grover A. Swartzlander
- Abstract summary: Machine vision is susceptible to laser dazzle, where intense laser light can blind and distort its perception of the environment through oversaturation or permanent damage to sensor pixels.
Here we employ a wavefront-coded phase mask to diffuse the energy of laser light and introduce a sandwich generative adversarial network (SGAN) to restore images from complex image degradations.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine vision is susceptible to laser dazzle, where intense laser light can
blind and distort its perception of the environment through oversaturation or
permanent damage to sensor pixels. Here we employ a wavefront-coded phase mask
to diffuse the energy of laser light and introduce a sandwich generative
adversarial network (SGAN) to restore images from complex image degradations,
such as varying laser-induced image saturation, mask-induced image blurring,
unknown lighting conditions, and various noise corruptions. The SGAN
architecture combines discriminative and generative methods by wrapping two
GANs around a learnable image deconvolution module. In addition, we make use of
Fourier feature representations to reduce the spectral bias of neural networks
and improve their learning of high-frequency image details. End-to-end training
includes the realistic physics-based synthesis of a large set of training data
from publicly available images. We trained the SGAN to suppress peak laser
irradiance as high as $10^6$ times the sensor saturation threshold, the point
at which camera sensors may experience damage without the mask. The trained
model was evaluated on both a synthetic data set and data collected from the
laboratory. The proposed image restoration model quantitatively and
qualitatively outperforms state-of-the-art methods for a wide range of scene
contents, laser powers, incident laser angles, ambient illumination strengths,
and noise characteristics.
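The summary above names the two key network ideas (a "sandwich" of two generative stages around a learnable deconvolution module, plus Fourier feature inputs to reduce spectral bias) without giving layer details. The sketch below is a minimal PyTorch-style illustration of that data flow, not the authors' implementation: the module names (FourierFeatures, LearnableWienerDeconv, SandwichGenerator), the Wiener-style deconvolution, and the layer sizes are all assumptions.
```python
# Hedged sketch: stage-1 generator -> learnable deconvolution -> stage-2 generator,
# with random Fourier features of pixel coordinates concatenated to the input.
# All names and sizes are illustrative assumptions.
import math
import torch
import torch.nn as nn


class FourierFeatures(nn.Module):
    """Random Fourier features of pixel coordinates (to counter spectral bias)."""

    def __init__(self, num_features=32, scale=10.0):
        super().__init__()
        # Fixed random projection of (x, y) coordinates; not trained.
        self.register_buffer("B", torch.randn(2, num_features) * scale)

    def forward(self, h, w, device):
        ys, xs = torch.meshgrid(
            torch.linspace(0, 1, h, device=device),
            torch.linspace(0, 1, w, device=device),
            indexing="ij",
        )
        coords = torch.stack([xs, ys], dim=-1)               # (H, W, 2)
        proj = 2 * math.pi * coords @ self.B                 # (H, W, F)
        feats = torch.cat([proj.sin(), proj.cos()], dim=-1)  # (H, W, 2F)
        return feats.permute(2, 0, 1).unsqueeze(0)           # (1, 2F, H, W)


class LearnableWienerDeconv(nn.Module):
    """Frequency-domain Wiener-style deconvolution with a learnable regularizer."""

    def __init__(self):
        super().__init__()
        self.log_nsr = nn.Parameter(torch.tensor(-4.0))  # learnable noise-to-signal ratio

    def forward(self, image, psf):
        # image: (B, C, H, W); psf: (1, 1, H, W), assumed centered at the origin.
        H = torch.fft.rfft2(psf, s=image.shape[-2:])
        Y = torch.fft.rfft2(image)
        X = Y * torch.conj(H) / (H.abs() ** 2 + torch.exp(self.log_nsr))
        return torch.fft.irfft2(X, s=image.shape[-2:])


def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(cout, cout, 3, padding=1), nn.LeakyReLU(0.2),
    )


class SandwichGenerator(nn.Module):
    """Two generator stages wrapped around a learnable deconvolution step."""

    def __init__(self, fourier_dim=32):
        super().__init__()
        self.fourier = FourierFeatures(fourier_dim)
        in_ch = 3 + 2 * fourier_dim
        self.stage1 = nn.Sequential(conv_block(in_ch, 64), nn.Conv2d(64, 3, 3, padding=1))
        self.deconv = LearnableWienerDeconv()
        self.stage2 = nn.Sequential(conv_block(in_ch, 64), nn.Conv2d(64, 3, 3, padding=1))

    def forward(self, dazzled, psf):
        b, _, h, w = dazzled.shape
        ff = self.fourier(h, w, dazzled.device).expand(b, -1, -1, -1)
        x = self.stage1(torch.cat([dazzled, ff], dim=1))  # coarse pre-restoration
        x = self.deconv(x, psf)                            # undo the mask-induced blur
        return self.stage2(torch.cat([x, ff], dim=1))      # refine high-frequency detail
```
In the paper the two stages are trained adversarially (two GANs); the discriminators are omitted here, and the stages are plain convolutional stubs meant only to show the sandwich data flow.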
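The abstract also mentions physics-based synthesis of dazzled training images. The paper's model covers varying laser powers, incident angles, illumination levels, and noise; the following is only a rough sketch under stated assumptions (a Gaussian laser spot, a simplified photon budget of 1000 photons at saturation, and a generic PSF), not the paper's exact forward model.
```python
# Hedged sketch of physics-based synthesis of a dazzled training image:
# clean image + simulated laser spot -> blur by the phase-mask PSF ->
# sensor noise and hard saturation clipping.
import numpy as np
from scipy.signal import fftconvolve


def synthesize_dazzled(clean, psf, peak_ratio=1e6, sat_level=1.0,
                       spot_xy=(0.5, 0.5), spot_sigma=0.02, read_noise=0.01,
                       rng=None):
    """clean: float RGB image in [0, 1]; psf: normalized phase-mask PSF."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = clean.shape[:2]

    # Laser modeled as a Gaussian irradiance spot whose peak is peak_ratio
    # times the sensor saturation level (an assumption for illustration).
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = spot_xy[1] * h, spot_xy[0] * w
    r2 = ((ys - cy) ** 2 + (xs - cx) ** 2) / (spot_sigma * min(h, w)) ** 2
    laser = peak_ratio * sat_level * np.exp(-0.5 * r2)

    scene = clean + laser[..., None]  # add the laser irradiance to every channel

    # The wavefront-coded mask spreads the laser energy: convolve with its PSF.
    blurred = np.stack(
        [fftconvolve(scene[..., c], psf, mode="same") for c in range(scene.shape[-1])],
        axis=-1,
    )

    # Simple sensor model: shot noise (Gaussian approximation, ~1000 photons at
    # saturation), read noise, then hard clipping at the saturation level.
    shot = rng.normal(0.0, np.sqrt(np.clip(blurred, 0, None) / 1000.0))
    read = rng.normal(0.0, read_noise, blurred.shape)
    return np.clip(blurred + shot + read, 0.0, sat_level)
```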
Related papers
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z) - Improving Lens Flare Removal with General Purpose Pipeline and Multiple
Light Sources Recovery [69.71080926778413]
Flare artifacts can affect image visual quality and downstream computer vision tasks.
Current methods do not consider automatic exposure and tone mapping in the image signal processing (ISP) pipeline.
We propose a solution that improves lens flare removal by revisiting the ISP and designing a more reliable light source recovery strategy.
arXiv Detail & Related papers (2023-08-31T04:58:17Z) - Classification robustness to common optical aberrations [64.08840063305313]
This paper proposes OpticsBench, a benchmark for investigating robustness to realistic, practically relevant optical blur effects.
Experiments on ImageNet show that, for a variety of pre-trained DNNs, performance varies strongly compared to disk-shaped kernels.
We show on ImageNet-100 with OpticsAugment that robustness can be increased by using optical kernels as data augmentation.
arXiv Detail & Related papers (2023-08-29T08:36:00Z) - Low-Light Image Enhancement with Illumination-Aware Gamma Correction and
Complete Image Modelling Network [69.96295927854042]
Low-light environments usually lead to less informative large-scale dark areas.
We propose to integrate the effectiveness of gamma correction with the strong modelling capacities of deep networks.
Because the exponential operation introduces high computational complexity, we propose to use a Taylor series to approximate gamma correction (an illustrative sketch of this approximation appears after this list).
arXiv Detail & Related papers (2023-08-16T08:46:51Z) - Lightweight HDR Camera ISP for Robust Perception in Dynamic Illumination
Conditions via Fourier Adversarial Networks [35.532434169432776]
We propose a lightweight two-stage image enhancement algorithm sequentially balancing illumination and noise removal.
We also propose a Fourier spectrum-based adversarial framework (AFNet) for consistent image enhancement under varying illumination conditions.
Based on quantitative and qualitative evaluations, we also examine the practicality and effects of image enhancement techniques on the performance of common perception tasks.
arXiv Detail & Related papers (2022-04-04T18:48:51Z) - DA-DRN: Degradation-Aware Deep Retinex Network for Low-Light Image
Enhancement [14.75902042351609]
We propose a Degradation-Aware Deep Retinex Network (denoted as DA-DRN) for low-light image enhancement and tackle the above degradation.
Based on Retinex Theory, the decomposition net in our model can decompose low-light images into reflectance and illumination maps.
We conduct extensive experiments to demonstrate that our approach achieves promising results with good robustness and generalization.
arXiv Detail & Related papers (2021-10-05T03:53:52Z) - Thermal Image Processing via Physics-Inspired Deep Networks [21.094006629684376]
DeepIR combines physically accurate sensor modeling with deep network-based image representation.
DeepIR requires neither training data nor periodic ground-truth calibration with a known black body target.
Simulated and real data experiments demonstrate that DeepIR can perform high-quality non-uniformity correction with as few as three images.
arXiv Detail & Related papers (2021-08-18T04:57:48Z) - Light Lies: Optical Adversarial Attack [24.831391763610046]
This paper introduces an optical adversarial attack, which physically alters the light field information arriving at the image sensor so that the classification model yields misclassification.
We present experiments based on both simulation and a real hardware optical system, from which the feasibility of the proposed optical attack is demonstrated.
arXiv Detail & Related papers (2021-06-18T04:20:49Z) - Learning optical flow from still images [53.295332513139925]
We introduce a framework to generate accurate ground-truth optical flow annotations quickly and in large amounts from any readily available single real picture.
We virtually move the camera in the reconstructed environment with known motion vectors and rotation angles.
When trained with our data, state-of-the-art optical flow networks achieve superior generalization to unseen real data.
arXiv Detail & Related papers (2021-04-08T17:59:58Z) - FD-GAN: Generative Adversarial Networks with Fusion-discriminator for
Single Image Dehazing [48.65974971543703]
We propose a fully end-to-end Generative Adversarial Networks with Fusion-discriminator (FD-GAN) for image dehazing.
Our model can generate more natural and realistic dehazed images with less color distortion and fewer artifacts.
Experiments have shown that our method reaches state-of-the-art performance on both public synthetic datasets and real-world images.
arXiv Detail & Related papers (2020-01-20T04:36:11Z)
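As referenced in the gamma-correction entry above, the idea of replacing per-pixel exponentiation with a truncated Taylor series can be illustrated as follows. This is a hedged sketch of the general technique, not that paper's exact formulation: the helper gamma_taylor, the clipping bound, and the expansion order are assumptions.
```python
# Illustrative approximation: x**g = exp(g * ln x), with exp() replaced by a
# truncated Taylor series, trading exactness for cheaper per-pixel arithmetic.
import numpy as np


def gamma_taylor(x, gamma, order=4):
    """Approximate x**gamma for x in (0, 1] via a truncated Taylor series of exp."""
    t = gamma * np.log(np.clip(x, 1e-6, 1.0))
    out = np.ones_like(t)
    term = np.ones_like(t)
    for k in range(1, order + 1):
        term = term * t / k   # accumulates t**k / k!
        out = out + term
    return out


x = np.linspace(0.05, 1.0, 5)
print(np.abs(gamma_taylor(x, 0.45) - x ** 0.45).max())  # approximation error of the sketch
```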
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.