Neural Image Re-Exposure
- URL: http://arxiv.org/abs/2305.13593v1
- Date: Tue, 23 May 2023 01:55:37 GMT
- Title: Neural Image Re-Exposure
- Authors: Xinyu Zhang, Hefei Huang, Xu Jia, Dong Wang, Huchuan Lu
- Abstract summary: An improper shutter may lead to a blurry image, video discontinuity, or rolling shutter artifact.
We propose a neural network-based image re-exposure framework.
It consists of an encoder for visual latent space construction, a re-exposure module for aggregating information to neural film with a desired shutter strategy, and a decoder for 'developing' neural film into a desired image.
- Score: 86.42475408644822
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The shutter strategy applied to the photo-shooting process has a significant
influence on the quality of the captured photograph. An improper shutter may
lead to a blurry image, video discontinuity, or rolling shutter artifact.
Existing works try to provide an independent solution for each issue. In this
work, we aim to re-expose the captured photo in post-processing to provide a
more flexible way of addressing those issues within a unified framework.
Specifically, we propose a neural network-based image re-exposure framework. It
consists of an encoder for visual latent space construction, a re-exposure
module for aggregating information to neural film with a desired shutter
strategy, and a decoder for 'developing' neural film into a desired image. To
compensate for information confusion and missing frames, event streams, which
can capture almost continuous brightness changes, are leveraged in computing
visual latent content. Both self-attention layers and cross-attention layers
are employed in the re-exposure module to promote interaction between neural
film and visual latent content and information aggregation to neural film. The
proposed unified image re-exposure framework is evaluated on several
shutter-related image recovery tasks and performs favorably against independent
state-of-the-art methods.
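The re-exposure module described above aggregates visual latent content onto "neural film" tokens via attention. A minimal sketch of that aggregation step is below, using plain scaled dot-product cross-attention with neural-film tokens as queries and latent tokens as keys/values; all names, shapes, and the omission of learned projections and self-attention layers are simplifying assumptions, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(film, latent):
    """One cross-attention step: 'neural film' tokens (queries) aggregate
    information from visual latent content (keys/values).
    film:   (n_film, d) array of neural-film tokens
    latent: (n_latent, d) array of visual latent tokens
    Learned Q/K/V projections are omitted for brevity (assumption)."""
    d = film.shape[-1]
    scores = film @ latent.T / np.sqrt(d)      # (n_film, n_latent)
    weights = softmax(scores, axis=-1)         # each film token's mixture
    return weights @ latent                    # updated film tokens, (n_film, d)

rng = np.random.default_rng(0)
film = rng.standard_normal((4, 16))     # hypothetical neural film: 4 tokens, dim 16
latent = rng.standard_normal((10, 16))  # hypothetical latent content: 10 tokens
out = cross_attend(film, latent)
print(out.shape)  # (4, 16)
```

In the full framework this step would be interleaved with self-attention over the film tokens and conditioned on the desired shutter strategy before a decoder "develops" the film into an image.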
Related papers
- Lightweight High-Speed Photography Built on Coded Exposure and Implicit Neural Representation of Videos [34.152901518593396]
The demand for compact cameras capable of recording high-speed scenes with high resolution is steadily increasing.
However, achieving such capabilities often entails high bandwidth requirements, resulting in bulky, heavy systems unsuitable for low-capacity platforms.
We propose a novel approach to address these challenges by combining the classical coded exposure imaging technique with the emerging implicit neural representation for videos.
arXiv Detail & Related papers (2023-11-22T03:41:13Z) - Learning Robust Multi-Scale Representation for Neural Radiance Fields from Unposed Images [65.41966114373373]
We present an improved solution to the neural image-based rendering problem in computer vision.
The proposed approach could synthesize a realistic image of the scene from a novel viewpoint at test time.
arXiv Detail & Related papers (2023-11-08T08:18:23Z) - Inverting the Imaging Process by Learning an Implicit Camera Model [73.81635386829846]
This paper proposes a novel implicit camera model which represents the physical imaging process of a camera as a deep neural network.
We demonstrate the power of this new implicit camera model on two inverse imaging tasks.
arXiv Detail & Related papers (2023-04-25T11:55:03Z) - Spatio-Temporal Outdoor Lighting Aggregation on Image Sequences using Transformer Networks [23.6427456783115]
In this work, we focus on outdoor lighting estimation by aggregating individual noisy estimates from images.
Recent work based on deep neural networks has shown promising results for single-image lighting estimation but suffers from robustness issues.
We tackle this problem by combining lighting estimates from several image views sampled in the angular and temporal domain of an image sequence.
arXiv Detail & Related papers (2022-02-18T14:11:16Z) - Restoration of Video Frames from a Single Blurred Image with Motion Understanding [69.90724075337194]
We propose a novel framework to generate clean video frames from a single motion-blurred image.
We formulate video restoration from a single blurred image as an inverse problem by setting clean image sequence and their respective motion as latent factors.
Our framework is based on an encoder-decoder structure with spatial transformer network modules.
arXiv Detail & Related papers (2021-04-19T08:32:57Z) - A New Dimension in Testimony: Relighting Video with Reflectance Field Exemplars [1.069384486725302]
We present a learning-based method for estimating 4D reflectance field of a person given video footage illuminated under a flat-lit environment of the same subject.
We estimate the lighting environment of the input video footage and use the subject's reflectance field to create synthetic images of the subject illuminated by the input lighting environment.
We evaluate our method on video footage of real Holocaust survivors and show that it outperforms state-of-the-art methods in both realism and speed.
arXiv Detail & Related papers (2021-04-06T20:29:06Z) - Motion-blurred Video Interpolation and Extrapolation [72.3254384191509]
We present a novel framework for deblurring, interpolating and extrapolating sharp frames from a motion-blurred video in an end-to-end manner.
To ensure temporal coherence across predicted frames and address potential temporal ambiguity, we propose a simple, yet effective flow-based rule.
arXiv Detail & Related papers (2021-03-04T12:18:25Z) - Back to Event Basics: Self-Supervised Learning of Image Reconstruction for Event Cameras via Photometric Constancy [0.0]
Event cameras are novel vision sensors that sample, in an asynchronous fashion, brightness increments with low latency and high temporal resolution.
We propose a novel, lightweight neural network for optical flow estimation that achieves high speed inference with only a minor drop in performance.
Results across multiple datasets show that the performance of the proposed self-supervised approach is in line with the state-of-the-art.
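Event cameras, as described above, asynchronously report per-pixel brightness increments rather than full frames. A minimal sketch of that sampling model, assuming the standard contrast-threshold formulation on log brightness (the threshold value, the one-event-per-pixel-per-frame simplification, and all names are illustrative assumptions):

```python
import numpy as np

def simulate_events(log_frames, threshold=0.5):
    """Emit (frame_index, y, x, polarity) events whenever a pixel's log
    brightness drifts past +/- threshold from its last reference level.
    Simplification: at most one event per pixel per frame; a real event
    camera emits events asynchronously and may fire several per interval."""
    ref = log_frames[0].copy()          # per-pixel reference level
    events = []
    for t, frame in enumerate(log_frames[1:], start=1):
        diff = frame - ref
        ys, xs = np.where(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            pol = 1 if diff[y, x] > 0 else -1
            events.append((t, int(y), int(x), pol))
            ref[y, x] += pol * threshold  # advance reference by one step
    return events

# A single pixel brightening steadily: each frame crosses the threshold,
# so we expect one positive event per frame transition.
frames = [np.full((1, 1), v) for v in (0.0, 0.6, 1.2, 1.8, 2.4, 3.0)]
evs = simulate_events(frames)
print(len(evs))  # 5
```

Reconstruction methods like the one summarized here invert this process, recovering absolute brightness from the stream of signed increments.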
arXiv Detail & Related papers (2020-09-17T13:30:05Z) - Self-Supervised Linear Motion Deblurring [112.75317069916579]
Deep convolutional neural networks are state-of-the-art for image deblurring.
We present a differentiable reblur model for self-supervised motion deblurring.
Our experiments demonstrate that self-supervised single-image deblurring is feasible.
arXiv Detail & Related papers (2020-02-10T20:15:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.