Seeing Behind Dynamic Occlusions with Event Cameras
- URL: http://arxiv.org/abs/2307.15829v2
- Date: Tue, 1 Aug 2023 16:18:59 GMT
- Title: Seeing Behind Dynamic Occlusions with Event Cameras
- Authors: Rong Zou, Manasi Muglikar, Nico Messikommer, Davide Scaramuzza
- Abstract summary: We propose a novel approach to reconstruct the background from a single viewpoint.
Our solution relies for the first time on the combination of a traditional camera with an event camera.
We show that our method outperforms image inpainting methods by 3dB in terms of PSNR on our dataset.
- Score: 44.63007080623054
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unwanted camera occlusions, such as debris, dust, rain-drops, and snow, can
severely degrade the performance of computer-vision systems. Dynamic occlusions
are particularly challenging because of the continuously changing pattern.
Existing occlusion-removal methods currently use synthetic aperture imaging or
image inpainting. However, they face issues with dynamic occlusions as these
require multiple viewpoints or user-generated masks to hallucinate the
background intensity. We propose a novel approach to reconstruct the background
from a single viewpoint in the presence of dynamic occlusions. Our solution
relies for the first time on the combination of a traditional camera with an
event camera. When an occlusion moves across a background image, it causes
intensity changes that trigger events. These events provide additional
information on the relative intensity changes between foreground and background
at a high temporal resolution, enabling a truer reconstruction of the
background content. We present the first large-scale dataset consisting of
synchronized images and event sequences to evaluate our approach. We show that
our method outperforms image inpainting methods by 3dB in terms of PSNR on our
dataset.
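As a rough illustration of the principle described in the abstract (not the authors' implementation), the sketch below uses the standard event-generation model: an event of polarity ±1 fires whenever a pixel's log intensity changes by a contrast threshold C. Integrating the events fired while a moving occluder uncovers a pixel recovers the relative intensity change between foreground and background, from which the hidden background intensity can be estimated. All names and numeric values here are hypothetical.

```python
# Minimal sketch, assuming the standard event-generation model: an event of
# polarity +/-1 fires each time a pixel's log intensity changes by the
# contrast threshold C. Not the authors' code; values are made up.
import numpy as np

C = 0.1  # contrast threshold (hypothetical)

def events_for_intensity_step(i_old, i_new, c=C):
    """Polarities of the events fired when a pixel jumps from i_old to i_new."""
    delta = np.log(i_new) - np.log(i_old)
    n = int(abs(delta) // c)          # number of threshold crossings
    return [np.sign(delta)] * n

# A pixel currently shows the moving occluder (foreground). When the occluder
# passes, the background becomes visible and a burst of events is fired.
i_fg, i_bg = 30.0, 120.0              # i_bg is the intensity we want to recover
burst = events_for_intensity_step(i_fg, i_bg)

# Integrating event polarities gives the relative log-intensity change, so the
# occluded background can be estimated from the frame that shows the occluder.
i_bg_est = i_fg * np.exp(C * sum(burst))
print(f"estimated background intensity: {i_bg_est:.1f}")  # ~110, quantized by C
```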
Related papers
- Temporal-Mapping Photography for Event Cameras [5.344756442054121]
Event cameras, or Dynamic Vision Sensors (DVS), capture brightness changes as a continuous stream of "events".
Converting sparse events to dense intensity frames faithfully has long been an ill-posed problem.
In this paper, for the first time, we realize events to dense intensity image conversion using a stationary event camera in static scenes.
arXiv Detail & Related papers (2024-03-11T05:29:46Z)
- Exposure Bracketing is All You Need for Unifying Image Restoration and Enhancement Tasks [50.822601495422916]
We propose to utilize exposure bracketing photography to unify image restoration and enhancement tasks.
Due to the difficulty in collecting real-world pairs, we suggest a solution that first pre-trains the model with synthetic paired data.
In particular, a temporally modulated recurrent network (TMRNet) and self-supervised adaptation method are proposed.
arXiv Detail & Related papers (2024-01-01T14:14:35Z)
- Event-based Continuous Color Video Decompression from Single Frames [38.59798259847563]
We present ContinuityCam, a novel approach to generate a continuous video from a single static RGB image, using an event camera.
Our approach combines continuous long-range motion modeling with a feature-plane-based neural integration model, enabling frame prediction at arbitrary times within the events.
arXiv Detail & Related papers (2023-11-30T18:59:23Z)
- Shakes on a Plane: Unsupervised Depth Estimation from Unstabilized Photography [54.36608424943729]
We show that in a "long-burst" (forty-two 12-megapixel RAW frames captured in a two-second sequence) there is enough parallax information from natural hand tremor alone to recover high-quality scene depth.
We devise a test-time optimization approach that fits a neural RGB-D representation to long-burst data and simultaneously estimates scene depth and camera motion.
arXiv Detail & Related papers (2022-12-22T18:54:34Z)
- Event-based Image Deblurring with Dynamic Motion Awareness [10.81953574179206]
We introduce the first dataset containing pairs of real RGB blur images and related events during the exposure time.
Our results show better robustness overall when using events, with improvements in PSNR by up to 1.57 dB on synthetic data and 1.08 dB on real event data.
arXiv Detail & Related papers (2022-08-24T09:39:55Z)
- Learning Dynamic View Synthesis With Few RGBD Cameras [60.36357774688289]
We propose to utilize RGBD cameras to synthesize free-viewpoint videos of dynamic indoor scenes.
We generate point clouds from RGBD frames and then render them into free-viewpoint videos via a neural feature.
We introduce a simple Regional Depth-Inpainting module that adaptively inpaints missing depth values to render complete novel views.
arXiv Detail & Related papers (2022-04-22T03:17:35Z)
- Image Reconstruction from Events. Why learn it? [11.773972029187433]
We show how tackling the joint problem of motion estimation leads us to model event-based image reconstruction as a linear inverse problem.
We show that classical and learning-based image priors can be used to solve the problem and remove artifacts from the reconstructed images (a minimal sketch of this linear-inverse-problem formulation appears after this list).
arXiv Detail & Related papers (2021-12-12T14:01:09Z)
- Event-based Motion Segmentation with Spatio-Temporal Graph Cuts [51.17064599766138]
We have developed a method to identify independently moving objects acquired with an event-based camera.
The method performs on par or better than the state of the art without having to predetermine the number of expected moving objects.
arXiv Detail & Related papers (2020-12-16T04:06:02Z)
- Learning Monocular Dense Depth from Events [53.078665310545745]
Event cameras output brightness changes in the form of a stream of asynchronous events instead of intensity frames.
Recent learning-based approaches have been applied to event-based data, such as monocular depth prediction.
We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods.
arXiv Detail & Related papers (2020-10-16T12:36:23Z)
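The entry above on "Image Reconstruction from Events. Why learn it?" casts event-based reconstruction as a linear inverse problem solved with an image prior. The toy sketch below (not taken from that paper; the operator, prior, and values are invented for illustration) shows the general pattern on a 1-D signal: recover x from noisy measurements b = A x by minimizing ||A x - b||^2 + lam ||D x||^2 with a finite-difference smoothness prior, solved in closed form.

```python
# Toy sketch of a linear inverse problem with a classical smoothness prior.
# Hypothetical operator and values; not code from the cited paper.
import numpy as np

rng = np.random.default_rng(0)
n = 64
x_true = np.cumsum(rng.standard_normal(n))       # smooth-ish ground-truth signal
A = rng.standard_normal((48, n))                 # under-determined measurement operator
b = A @ x_true + 0.05 * rng.standard_normal(48)  # noisy measurements

D = np.eye(n) - np.eye(n, k=1)                   # first-difference operator (prior)
lam = 1.0                                        # prior weight

# Minimize ||A x - b||^2 + lam * ||D x||^2  =>  (A^T A + lam D^T D) x = A^T b
x_hat = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ b)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```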
This list is automatically generated from the titles and abstracts of the papers on this site.