A Unified Framework for Compressive Video Recovery from Coded Exposure
Techniques
- URL: http://arxiv.org/abs/2011.05532v1
- Date: Wed, 11 Nov 2020 03:45:31 GMT
- Title: A Unified Framework for Compressive Video Recovery from Coded Exposure
Techniques
- Authors: Prasan Shedligeri, Anupama S, Kaushik Mitra
- Abstract summary: A Coded-2-Bucket camera has been proposed that can acquire two compressed measurements in a single exposure.
Our learning-based framework consists of a shift-variant convolutional layer followed by a fully convolutional deep neural network.
When most scene points are static, the C2B sensor has a significant advantage over acquiring a single pixel-wise coded measurement.
- Score: 18.31448635476334
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Several coded exposure techniques have been proposed for acquiring high frame
rate videos at low bandwidth. Most recently, a Coded-2-Bucket camera has been
proposed that can acquire two compressed measurements in a single exposure,
unlike previously proposed coded exposure techniques, which can acquire only a
single measurement. Although two measurements should be better than one for
effective video recovery, the clear advantage of two measurements has not yet
been established, either quantitatively or qualitatively. Here, we propose a
unified learning-based framework to make such a qualitative and quantitative
comparison between techniques that capture only a single coded image (Flutter
Shutter, pixel-wise coded exposure) and those that capture two measurements per
exposure (C2B). Our learning-based framework consists of a shift-variant
convolutional layer followed by a fully convolutional deep neural network. Our
proposed unified framework achieves state-of-the-art reconstructions for all
three sensing techniques. Further analysis shows that when most scene points
are static, the C2B sensor has a significant advantage over acquiring a single
pixel-wise coded measurement. However, when most scene points undergo motion,
the C2B sensor has only a marginal benefit over the single pixel-wise coded
exposure measurement.
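The three sensing techniques compared above differ only in how the exposure code is shared across pixels and buckets. A minimal NumPy sketch of the forward measurement models, with illustrative shapes and random (not optimized) binary codes as assumptions, is:

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 8, 4, 4                # sub-frames per exposure, spatial size
video = rng.random((T, H, W))    # toy high-frame-rate scene

# Flutter Shutter: one global binary code shared by every pixel
c_global = rng.integers(0, 2, size=T)
y_fs = np.tensordot(c_global, video, axes=1)   # single (H, W) measurement

# Pixel-wise coded exposure: an independent binary code per pixel
C = rng.integers(0, 2, size=(T, H, W))
y_pw = (C * video).sum(axis=0)                 # single (H, W) measurement

# C2B: each sub-frame's light is routed to bucket 0 or bucket 1,
# so the two measurements use complementary codes
y_b0 = (C * video).sum(axis=0)
y_b1 = ((1 - C) * video).sum(axis=0)

# Together the two buckets account for the full exposure
assert np.allclose(y_b0 + y_b1, video.sum(axis=0))
```

Note that the C2B sensor returns two coded measurements per exposure where the other two techniques return one, which is the asymmetry the unified framework is designed to evaluate.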
Related papers
- bit2bit: 1-bit quanta video reconstruction via self-supervised photon prediction [57.199618102578576]
We propose bit2bit, a new method for reconstructing high-quality image stacks at the original resolution from sparse binary quanta image data.
Inspired by recent work on Poisson denoising, we developed an algorithm that creates a dense image sequence from sparse binary photon data.
We present a novel dataset containing a wide range of real SPAD high-speed videos under various challenging imaging conditions.
arXiv Detail & Related papers (2024-10-30T17:30:35Z)
- Passive Snapshot Coded Aperture Dual-Pixel RGB-D Imaging [25.851398356458425]
Single-shot 3D sensing is useful in many application areas such as microscopy, medical imaging, surgical navigation, and autonomous driving.
We propose CADS (Coded Aperture Dual-Pixel Sensing), in which we use a coded aperture in the imaging lens along with a DP sensor.
Our resulting CADS imaging system demonstrates improvement of >1.5dB PSNR in all-in-focus (AIF) estimates and 5-6% in depth estimation quality over naive DP sensing.
arXiv Detail & Related papers (2024-02-28T06:45:47Z)
- Neuromorphic Synergy for Video Binarization [54.195375576583864]
Bimodal objects serve as a visual form to embed information that can be easily recognized by vision systems.
Neuromorphic cameras offer new capabilities for alleviating motion blur, but it is non-trivial to first de-blur and then binarize the images in real time.
We propose an event-based binary reconstruction method that leverages the prior knowledge of the bimodal target's properties to perform inference independently in both event space and image space.
We also develop an efficient integration method to propagate this binary image to high frame rate binary video.
arXiv Detail & Related papers (2024-02-20T01:43:51Z)
- Context-Aware Video Reconstruction for Rolling Shutter Cameras [52.28710992548282]
In this paper, we propose a context-aware GS video reconstruction architecture.
We first estimate the bilateral motion field so that the pixels of the two RS frames are warped to a common GS frame.
Then, a refinement scheme is proposed to guide the GS frame synthesis along with bilateral occlusion masks to produce high-fidelity GS video frames.
arXiv Detail & Related papers (2022-05-25T17:05:47Z)
- Image-free single-pixel segmentation [3.3808025405314086]
In this letter, we report an image-free single-pixel segmentation technique.
The technique combines structured illumination and single-pixel detection to efficiently sample and multiplex the scene's segmentation information.
We envision that this image-free segmentation technique can be widely applied on various resource-limited platforms such as UAVs and unmanned vehicles.
arXiv Detail & Related papers (2021-08-24T10:06:53Z)
- Time-Multiplexed Coded Aperture Imaging: Learned Coded Aperture and Pixel Exposures for Compressive Imaging Systems [56.154190098338965]
We show that our proposed time multiplexed coded aperture (TMCA) can be optimized end-to-end.
TMCA induces better coded snapshots enabling superior reconstructions in two different applications: compressive light field imaging and hyperspectral imaging.
This codification outperforms the state-of-the-art compressive imaging systems by more than 4dB in those applications.
arXiv Detail & Related papers (2021-04-06T22:42:34Z)
- Motion-blurred Video Interpolation and Extrapolation [72.3254384191509]
We present a novel framework for deblurring, interpolating and extrapolating sharp frames from a motion-blurred video in an end-to-end manner.
To ensure temporal coherence across predicted frames and address potential temporal ambiguity, we propose a simple, yet effective flow-based rule.
arXiv Detail & Related papers (2021-03-04T12:18:25Z)
- Video Reconstruction by Spatio-Temporal Fusion of Blurred-Coded Image Pair [16.295479896947853]
Recovering video from a single motion-blurred image is a very ill-posed problem.
The traditional coded exposure framework is better-posed, but it only samples a fraction of the space-time volume.
We propose to use the complementary information present in the fully-exposed image along with the coded exposure image to recover a high fidelity video.
arXiv Detail & Related papers (2020-10-20T06:08:42Z)
- Single Image Brightening via Multi-Scale Exposure Fusion with Hybrid Learning [48.890709236564945]
A small ISO and a short exposure time are usually used to capture an image in backlit or low-light conditions.
In this paper, a single image brightening algorithm is introduced to brighten such an image.
The proposed algorithm includes a unique hybrid learning framework to generate two virtual images with large exposure times.
arXiv Detail & Related papers (2020-07-04T08:23:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.