Sign-Coded Exposure Sensing for Noise-Robust High-Speed Imaging
- URL: http://arxiv.org/abs/2305.03226v1
- Date: Fri, 5 May 2023 01:03:37 GMT
- Title: Sign-Coded Exposure Sensing for Noise-Robust High-Speed Imaging
- Authors: R. Wes Baldwin, Vijayan Asari, Keigo Hirakawa
- Abstract summary: We present a novel optical compression of high-speed frames employing pixel-level sign-coded exposure.
Walsh functions ensure that the noise is not amplified during high-speed frame reconstruction.
Our hardware prototype demonstrated the reconstruction of 4kHz frames of a moving scene lit by ambient light only.
- Score: 16.58669052286989
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a novel Fourier camera, an in-hardware optical compression of
high-speed frames employing pixel-level sign-coded exposure, where pixel
intensities, temporally modulated as positive and negative exposures, are combined
to yield Hadamard coefficients. The orthogonality of Walsh functions ensures
that the noise is not amplified during high-speed frame reconstruction, making
it a much more attractive option for coded exposure systems aimed at very high
frame rate operation. Frame reconstruction is carried out by a single-pass
demosaicking of the spatially multiplexed Walsh functions in a lattice
arrangement, significantly reducing the computational complexity. The
simulation prototype confirms the improved robustness to noise compared to the
binary-coded exposure patterns, such as one-hot encoding and pseudo-random
encoding. Our hardware prototype demonstrated the reconstruction of 4kHz frames
of a moving scene lit by ambient light only.
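The abstract describes per-pixel temporal modulation by Walsh functions, so each coded measurement is a Hadamard coefficient of the sub-frame intensities, and the orthogonality of the code keeps reconstruction from amplifying sensor noise. Below is a minimal numerical sketch of that argument, assuming a toy frame count, noise level, and pseudo-random comparison code (none of which come from the paper):

```python
# Minimal sketch (not the authors' implementation): compare reconstruction noise
# of a sign-coded (Walsh-Hadamard) exposure against a pseudo-random binary code.
import numpy as np

def sylvester_hadamard(n):
    """Build an n x n +1/-1 Hadamard matrix for n a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

T = 8                                                # high-speed sub-frames per exposure
rng = np.random.default_rng(0)

H = sylvester_hadamard(T)                            # rows are Walsh functions (+1/-1)
B = rng.integers(0, 2, size=(T, T)).astype(float)    # pseudo-random 0/1 binary code

x = rng.random(T)                                    # true per-pixel sub-frame intensities
noise = 0.01 * rng.standard_normal(T)                # read noise on the coded measurements

# Coded measurements followed by least-squares reconstruction
x_hat_hadamard = np.linalg.pinv(H) @ (H @ x + noise)
x_hat_binary = np.linalg.pinv(B) @ (B @ x + noise)

print("Hadamard reconstruction error:", np.linalg.norm(x_hat_hadamard - x))
print("Binary   reconstruction error:", np.linalg.norm(x_hat_binary - x))

# Because H @ H.T == T * I, pinv(H) == H.T / T and the worst-case noise gain is
# 1/sqrt(T) < 1; a generic binary code typically has a noise gain well above 1.
print("Hadamard noise gain:", np.linalg.norm(np.linalg.pinv(H), 2))
print("Binary   noise gain:", np.linalg.norm(np.linalg.pinv(B), 2))
```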
Related papers
- Neural-Network-Enhanced Metalens Camera for High-Definition, Dynamic Imaging in the Long-Wave Infrared Spectrum [14.057686919233646]
We develop a lightweight and cost-effective solution for long-wave infrared imaging using a singlet.
We integrate a High-Frequency-Enhancing Cycle-GAN neural network into a metalens imaging system.
Our camera is capable of achieving dynamic imaging at 125 frames per second with an End Point Error value of 12.58.
arXiv Detail & Related papers (2024-11-26T06:09:45Z)
- Event-Enhanced Snapshot Compressive Videography at 10K FPS [33.20071708537498]
Video snapshot compressive imaging (SCI) encodes the target dynamic scene compactly into a snapshot and reconstructs its high-speed frame sequence afterward.
We propose a novel hybrid "intensity+event" imaging scheme by incorporating an event camera into a video SCI setup.
We achieve high-quality videography at 0.1ms time intervals with a low-cost CMOS image sensor working at 24 FPS.
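For context, the video-SCI measurement model referenced above collapses several mask-modulated sub-frames into a single snapshot. A minimal sketch of that forward model follows (sizes and masks are illustrative assumptions; the event-camera branch is omitted):

```python
# Toy video-SCI forward model: T sub-frames, each modulated by a binary mask,
# are summed into one compressed snapshot on the sensor. All sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 8, 64, 64                       # sub-frames per snapshot, spatial size
frames = rng.random((T, H, W))            # unknown high-speed scene
masks = rng.integers(0, 2, (T, H, W))     # per-sub-frame binary coding masks

snapshot = (masks * frames).sum(axis=0)   # single (H, W) measurement
# Reconstruction inverts this many-to-one mapping with optimization or a learned
# prior; the paper above additionally uses event-camera data to guide that step.
print(snapshot.shape)
```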
arXiv Detail & Related papers (2024-04-11T08:34:10Z)
- Dynamic Frame Interpolation in Wavelet Domain [57.25341639095404]
Video frame interpolation is an important low-level computer vision task that can increase the frame rate for a more fluent visual experience.
Existing methods have achieved great success by employing advanced motion models and synthesis networks.
WaveletVFI can reduce computation by up to 40% while maintaining similar accuracy, making it more efficient than other state-of-the-art methods.
arXiv Detail & Related papers (2023-09-07T06:41:15Z)
- Joint Video Multi-Frame Interpolation and Deblurring under Unknown Exposure Time [101.91824315554682]
In this work, we aim ambitiously for a more realistic and challenging task - joint video multi-frame interpolation and deblurring under unknown exposure time.
We first adopt a variant of supervised contrastive learning to construct an exposure-aware representation from input blurred frames.
We then build our video reconstruction network upon the exposure and motion representation by progressive exposure-adaptive convolution and motion refinement.
arXiv Detail & Related papers (2023-03-27T09:43:42Z)
- Wavelet-Based Network For High Dynamic Range Imaging [64.66969585951207]
Existing methods, such as optical-flow-based and end-to-end deep-learning-based solutions, are error-prone in either detail restoration or ghosting-artifact removal.
In this work, we propose a novel frequency-guided end-to-end deep neural network (FNet) to conduct HDR fusion in the frequency domain, where the discrete wavelet transform (DWT) is used to decompose inputs into different frequency bands.
The low-frequency signals are used to avoid specific ghosting artifacts, while the high-frequency signals are used for preserving details.
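As a quick illustration of that frequency split (a generic DWT example, not the paper's FNet), a one-level 2-D wavelet transform separates an input into a low-frequency approximation band and high-frequency detail bands:

```python
# Generic 2-D DWT decomposition with PyWavelets; the image and wavelet choice
# are assumptions, not taken from the paper.
import numpy as np
import pywt

img = np.random.rand(128, 128).astype(np.float32)    # stand-in for one exposure
cA, (cH, cV, cD) = pywt.dwt2(img, "haar")             # one-level 2-D DWT

# cA: low-frequency band (used above to suppress ghosting artifacts)
# cH, cV, cD: horizontal/vertical/diagonal detail bands (used to preserve texture)
recon = pywt.idwt2((cA, (cH, cV, cD)), "haar")
print(np.allclose(recon, img, atol=1e-5))             # the DWT is perfectly invertible
```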
arXiv Detail & Related papers (2021-08-03T12:26:33Z)
- 10-mega pixel snapshot compressive imaging with a hybrid coded aperture [48.95666098332693]
High-resolution images are widely used in our daily life, whereas high-speed video capture is challenging due to the low frame rate of cameras working in high-resolution mode.
Snapshot compressive imaging (SCI) was proposed as a solution to the low throughput of existing imaging systems.
arXiv Detail & Related papers (2021-06-30T01:09:24Z)
- TimeLens: Event-based Video Frame Interpolation [54.28139783383213]
We introduce Time Lens, a novel method that leverages the advantages of both synthesis-based and flow-based approaches.
We show an up to 5.21 dB improvement in terms of PSNR over state-of-the-art frame-based and event-based methods.
arXiv Detail & Related papers (2021-06-14T10:33:47Z)
- Time-Multiplexed Coded Aperture Imaging: Learned Coded Aperture and Pixel Exposures for Compressive Imaging Systems [56.154190098338965]
We show that our proposed time multiplexed coded aperture (TMCA) can be optimized end-to-end.
TMCA induces better coded snapshots enabling superior reconstructions in two different applications: compressive light field imaging and hyperspectral imaging.
This codification outperforms the state-of-the-art compressive imaging systems by more than 4dB in those applications.
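"Optimized end-to-end" here means the exposure code itself receives gradients from the reconstruction loss. A generic sketch of that pattern (not the authors' TMCA code; shapes, the sigmoid relaxation, and the toy decoder are assumptions):

```python
# Generic end-to-end coded-exposure learning pattern: the code is a learnable
# tensor relaxed through a sigmoid so gradients from the reconstruction loss
# update it jointly with the decoder.
import torch
import torch.nn as nn

class LearnedCodedSnapshot(nn.Module):
    def __init__(self, t=8, h=32, w=32):
        super().__init__()
        self.code_logits = nn.Parameter(torch.randn(t, h, w))   # learnable exposure code
        self.decoder = nn.Sequential(nn.Flatten(), nn.Linear(h * w, t * h * w))

    def forward(self, frames):                  # frames: (batch, t, h, w)
        code = torch.sigmoid(self.code_logits)  # soft (0, 1) relaxation of the code
        snapshot = (code * frames).sum(dim=1)   # coded measurement, (batch, h, w)
        return self.decoder(snapshot).view_as(frames)

frames = torch.rand(2, 8, 32, 32)
model = LearnedCodedSnapshot()
loss = nn.functional.mse_loss(model(frames), frames)
loss.backward()                                 # gradients also flow into code_logits
print(model.code_logits.grad is not None)       # True: the code is optimized end-to-end
```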
arXiv Detail & Related papers (2021-04-06T22:42:34Z)
- Small-brain neural networks rapidly solve inverse problems with vortex Fourier encoders [0.0]
We introduce a vortex phase transform with a lenslet array to accompany shallow, dense, "small-brain" neural networks for high-speed and low-light imaging.
With vortex spatial encoding, a small brain is trained to deconvolve images at rates 5-20 times faster than those achieved with random encoding schemes.
We reconstruct MNIST Fashion objects illuminated with low-light flux at a rate of several thousand frames per second on a 15 W central processing unit.
arXiv Detail & Related papers (2020-05-15T17:53:32Z)
- Generalized Octave Convolutions for Learned Multi-Frequency Image Compression [20.504561050200365]
We propose the first learned multi-frequency image compression and entropy coding approach.
It is based on the recently developed octave convolutions to factorize the latents into high and low frequency (resolution) components.
We show that the proposed generalized octave convolution can improve the performance of other auto-encoder-based computer vision tasks.
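To make the high/low-frequency factorization concrete, here is a minimal standard octave convolution in PyTorch (the paper's generalized variant differs; the channel counts and alpha split below are assumptions):

```python
# Standard octave convolution: features are split into a full-resolution
# high-frequency branch and a half-resolution low-frequency branch, with
# cross-branch paths exchanging information at each layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OctConv(nn.Module):
    def __init__(self, in_ch, out_ch, alpha=0.5, kernel_size=3, padding=1):
        super().__init__()
        in_lo, out_lo = int(alpha * in_ch), int(alpha * out_ch)
        in_hi, out_hi = in_ch - in_lo, out_ch - out_lo
        self.hh = nn.Conv2d(in_hi, out_hi, kernel_size, padding=padding)  # high -> high
        self.hl = nn.Conv2d(in_hi, out_lo, kernel_size, padding=padding)  # high -> low
        self.lh = nn.Conv2d(in_lo, out_hi, kernel_size, padding=padding)  # low  -> high
        self.ll = nn.Conv2d(in_lo, out_lo, kernel_size, padding=padding)  # low  -> low

    def forward(self, x_hi, x_lo):
        # high-frequency output: high->high plus upsampled low->high
        y_hi = self.hh(x_hi) + F.interpolate(self.lh(x_lo), scale_factor=2, mode="nearest")
        # low-frequency output: low->low plus downsampled high->low
        y_lo = self.ll(x_lo) + self.hl(F.avg_pool2d(x_hi, 2))
        return y_hi, y_lo

x_hi = torch.randn(1, 32, 64, 64)   # full-resolution (high-frequency) features
x_lo = torch.randn(1, 32, 32, 32)   # half-resolution (low-frequency) features
y_hi, y_lo = OctConv(64, 64)(x_hi, x_lo)
print(y_hi.shape, y_lo.shape)       # (1, 32, 64, 64) and (1, 32, 32, 32)
```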
arXiv Detail & Related papers (2020-02-24T01:35:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.