Learning to See Through with Events
- URL: http://arxiv.org/abs/2212.02219v1
- Date: Mon, 5 Dec 2022 12:51:22 GMT
- Title: Learning to See Through with Events
- Authors: Lei Yu, Xiang Zhang, Wei Liao, Wen Yang, Gui-Song Xia
- Abstract summary: This paper presents an Event-based SAI (E-SAI) method by relying on asynchronous events with extremely low latency and high dynamic range.
The collected events are first refocused by a Refocus-Net module to align in-focus events while scattering out off-focus ones.
A hybrid network composed of spiking neural networks (SNNs) and convolutional neural networks (CNNs) is proposed to encode the spatio-temporal information from the refocused events and reconstruct a visual image of the occluded scenes.
- Score: 37.19232535463858
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although synthetic aperture imaging (SAI) can achieve the seeing-through
effect by blurring out off-focus foreground occlusions while recovering
in-focus occluded scenes from multi-view images, its performance is often
deteriorated by dense occlusions and extreme lighting conditions. To address
the problem, this paper presents an Event-based SAI (E-SAI) method by relying
on the asynchronous events with extremely low latency and high dynamic range
acquired by an event camera. Specifically, the collected events are first
refocused by a Refocus-Net module to align in-focus events while scattering out
off-focus ones. Following that, a hybrid network composed of spiking neural
networks (SNNs) and convolutional neural networks (CNNs) is proposed to encode
the spatio-temporal information from the refocused events and reconstruct a
visual image of the occluded targets. Extensive experiments demonstrate that
our proposed E-SAI method can achieve remarkable performance in dealing with
very dense occlusions and extreme lighting conditions and produce high-quality
images from pure events. Codes and datasets are available at
https://dvs-whu.cn/projects/esai/.
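
To make the refocusing step concrete, below is a minimal NumPy sketch of the classical geometric model behind it, assuming a camera translating at constant speed along the x-axis and a known focal depth. The function name and parameters are illustrative; the paper's Refocus-Net module learns this alignment rather than applying a fixed formula.

```python
import numpy as np

def refocus_events(events, speed, depth, focal_px, t_ref):
    """Warp events captured along a linear camera trajectory to a common
    reference time. Events emitted by the focal plane at `depth` pile up
    at consistent pixels (in focus), while occluder events, whose true
    depth differs, are scattered (out of focus).

    events:   (N, 4) float array of (x, y, t, polarity)
    speed:    camera translation speed along the x-axis (m/s)
    depth:    assumed depth of the occluded target plane (m)
    focal_px: focal length in pixels
    t_ref:    reference timestamp the events are warped to (s)
    """
    x, y, t, p = events.T
    shift = focal_px * speed * (t - t_ref) / depth  # induced disparity
    return np.stack([x - shift, y, t, p], axis=1)
```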
Related papers
- An Event-Oriented Diffusion-Refinement Method for Sparse Events Completion [36.64856578682197]
Event cameras or dynamic vision sensors (DVS) record asynchronous responses to brightness changes instead of conventional intensity frames.
We propose an inventive event sequence completion approach conforming to the unique characteristics of event data in both the processing stage and the output form.
Specifically, we treat event streams as 3D event clouds in the temporal domain (sketched below), develop a diffusion-based generative model to generate dense clouds in a coarse-to-fine manner, and recover exact timestamps to preserve the temporal resolution of the raw data.
arXiv Detail & Related papers (2024-01-06T08:09:54Z)
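
As a rough sketch of the representation step only (the diffusion model itself is omitted, and the function names and the [0, 1] normalization are assumptions), an event stream can be turned into a 3D cloud whose temporal axis is invertible, so completed points can be mapped back to exact timestamps:

```python
import numpy as np

def events_to_cloud(events, t0, t1, height, width):
    """Treat an event stream as a 3D point cloud in (x, y, t) space,
    with each axis normalized to [0, 1]; polarities ride along."""
    x, y, t, p = events.T
    cloud = np.stack([x / (width - 1),
                      y / (height - 1),
                      (t - t0) / (t1 - t0)], axis=1)
    return cloud, p

def recover_timestamps(cloud, t0, t1):
    """Invert the temporal normalization so completed points regain
    exact timestamps, preserving the raw temporal resolution."""
    return cloud[:, 2] * (t1 - t0) + t0
```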
- Implicit Event-RGBD Neural SLAM [54.74363487009845]
Implicit neural SLAM has achieved remarkable progress recently.
Existing methods face significant challenges in non-ideal scenarios such as motion blur and lighting variation.
We propose EN-SLAM, the first event-RGBD implicit neural SLAM framework.
arXiv Detail & Related papers (2023-11-18T08:48:58Z)
- MEFNet: Multi-scale Event Fusion Network for Motion Deblurring [62.60878284671317]
Traditional frame-based cameras inevitably suffer from motion blur due to long exposure times.
As a kind of bio-inspired camera, the event camera records intensity changes asynchronously with high temporal resolution (an idealized model of this sensing process is sketched below).
In this paper, we rethink the event-based image deblurring problem and unfold it into an end-to-end two-stage image restoration network.
arXiv Detail & Related papers (2021-11-30T23:18:35Z)
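
The asynchronous sensing model these event-camera papers share can be made concrete with a tiny simulator. The sketch below is an idealized model, not any specific sensor: real cameras add noise, refractory periods, and per-pixel threshold variation, and the 0.2 contrast threshold is an arbitrary assumption.

```python
import numpy as np

def simulate_events(log_frames, timestamps, threshold=0.2):
    """Idealized event generation: a pixel fires an event whenever its
    log intensity drifts past `threshold` relative to its reference
    level. Emits at most one event per pixel per frame for simplicity
    (a large change would really trigger several).

    log_frames: (T, H, W) stack of log-intensity images
    timestamps: (T,) frame times
    Returns a list of (x, y, t, polarity) tuples.
    """
    events = []
    ref = log_frames[0].copy()                  # per-pixel reference level
    for frame, t in zip(log_frames[1:], timestamps[1:]):
        diff = frame - ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for x0, y0 in zip(xs, ys):
            pol = 1 if diff[y0, x0] > 0 else -1
            events.append((x0, y0, t, pol))
            ref[y0, x0] += pol * threshold      # move the reference level
    return events
```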
- Event-based Synthetic Aperture Imaging with a Hybrid Network [30.178111153441666]
We propose a novel SAI system based on an event camera, which produces asynchronous events with extremely low latency and high dynamic range.
To reconstruct the occluded targets, we propose a hybrid encoder-decoder network composed of spiking neural networks (SNNs) and convolutional neural networks (CNNs); a minimal sketch of such a hybrid follows below.
arXiv Detail & Related papers (2021-03-03T12:56:55Z)
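
Below is a minimal sketch of what such a hybrid encoder-decoder might look like. All layer sizes, the leaky integrate-and-fire (LIF) dynamics, and the final time-averaging are illustrative choices, not the paper's architecture, and a trainable version would also need surrogate gradients for the non-differentiable spike.

```python
import torch
import torch.nn as nn

class LIFLayer(nn.Module):
    """Leaky integrate-and-fire layer: a decaying membrane potential
    accumulates convolved input over time bins and emits a binary spike
    when it crosses a threshold (hard reset afterwards)."""
    def __init__(self, channels, decay=0.8, v_th=1.0):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.decay, self.v_th = decay, v_th

    def forward(self, x_seq):                   # x_seq: (T, B, C, H, W)
        v = torch.zeros_like(x_seq[0])
        spikes = []
        for x_t in x_seq:                       # step through time bins
            v = self.decay * v + self.conv(x_t)
            s = (v >= self.v_th).float()        # fire
            v = v * (1.0 - s)                   # reset fired neurons
            spikes.append(s)
        return torch.stack(spikes)              # keep temporal structure

class HybridSNNCNN(nn.Module):
    """SNN encoder over binned (refocused) events, CNN decoder mapping
    the accumulated spike features to an intensity image."""
    def __init__(self, in_ch=2, feat=16):
        super().__init__()
        self.embed = nn.Conv2d(in_ch, feat, 3, padding=1)
        self.snn = LIFLayer(feat)
        self.decoder = nn.Sequential(
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, ev_seq):                  # (T, B, 2, H, W) event bins
        x_seq = torch.stack([self.embed(e) for e in ev_seq])
        spikes = self.snn(x_seq)
        return self.decoder(spikes.mean(0))     # collapse time, decode
```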
- Learning Monocular Dense Depth from Events [53.078665310545745]
Event cameras output brightness changes in the form of a stream of asynchronous events instead of intensity frames.
Recent learning-based approaches have been applied to event-based data for tasks such as monocular depth prediction.
We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods (the recurrence is sketched below).
arXiv Detail & Related papers (2020-10-16T12:36:23Z)
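
To show the recurrence only (the actual network is a much larger recurrent encoder-decoder; every size here is an illustrative assumption), a convolutional GRU carrying a spatial hidden state across consecutive event windows looks roughly like this:

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """Convolutional GRU cell: the hidden state lets the depth estimate
    integrate evidence over successive event windows instead of seeing
    each window in isolation, as a feed-forward network would."""
    def __init__(self, in_ch, hid_ch):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, 3, padding=1)
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, 3, padding=1)

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], 1))).chunk(2, 1)
        h_new = torch.tanh(self.cand(torch.cat([x, r * h], 1)))
        return (1 - z) * h + z * h_new

class RecurrentDepthNet(nn.Module):
    def __init__(self, bins=5, hid=32):
        super().__init__()
        self.hid = hid
        self.cell = ConvGRUCell(bins, hid)
        self.head = nn.Conv2d(hid, 1, 3, padding=1)  # per-pixel log depth

    def forward(self, voxel_seq):   # list of (B, bins, H, W) event windows
        b, _, height, width = voxel_seq[0].shape
        h = voxel_seq[0].new_zeros(b, self.hid, height, width)
        depths = []
        for voxels in voxel_seq:    # recurrent pass over windows
            h = self.cell(voxels, h)
            depths.append(self.head(h))
        return depths
```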
- Back to Event Basics: Self-Supervised Learning of Image Reconstruction for Event Cameras via Photometric Constancy [0.0]
Event cameras are novel vision sensors that sample, in an asynchronous fashion, brightness increments with low latency and high temporal resolution.
We propose a novel, lightweight neural network for optical flow estimation that achieves high speed inference with only a minor drop in performance.
Results across multiple datasets show that the performance of the proposed self-supervised approach is in line with the state-of-the-art (the core photometric-constancy constraint is sketched below).
arXiv Detail & Related papers (2020-09-17T13:30:05Z)
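
The core self-supervision signal can be written as a one-line loss: under the event generation model, the log-intensity difference of two reconstructed frames should match the contrast threshold times the polarity sum of the events between them. This is a sketch of that constraint only, with the threshold c as an assumed constant; the paper's full formulation differs in its details.

```python
import torch

def photometric_constancy_loss(img_prev, img_next, event_increment, c=0.2):
    """Self-supervised constraint: no ground-truth frames are needed,
    only the events themselves.

    img_prev, img_next: (B, 1, H, W) reconstructed intensities in (0, 1]
    event_increment:    (B, 1, H, W) signed per-pixel polarity sums
    c:                  assumed contrast threshold of the sensor
    """
    eps = 1e-6
    predicted = torch.log(img_next + eps) - torch.log(img_prev + eps)
    return torch.mean((predicted - c * event_increment) ** 2)
```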
- Event Enhanced High-Quality Image Recovery [34.46486617222021]
We propose an explainable network, the event-enhanced sparse learning network (eSL-Net), to recover high-quality images from event cameras (the sparse-coding iteration it builds on is sketched below).
After training on a synthetic dataset, the proposed eSL-Net improves on the state-of-the-art by 7-12 dB.
arXiv Detail & Related papers (2020-07-16T13:51:15Z)
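
For context on the sparse-learning view, below is the textbook ISTA update for the lasso problem that networks in this family typically unroll into layers, with the dictionary and step size replaced by learned weights; this is the generic algorithm, not the paper's event-coupled variant.

```python
import torch

def ista(A, y, lam=0.1, steps=50):
    """ISTA for min_x 0.5 * ||A x - y||^2 + lam * ||x||_1.

    A: (m, n) dictionary, y: (m,) measurement vector.
    """
    L = torch.linalg.matrix_norm(A, ord=2) ** 2   # Lipschitz constant of A^T A
    x = torch.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - y)                  # gradient of the data term
        z = x - grad / L
        x = torch.sign(z) * torch.clamp(z.abs() - lam / L, min=0.0)  # soft-threshold
    return x
```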
- EventSR: From Asynchronous Events to Image Reconstruction, Restoration, and Super-Resolution via End-to-End Adversarial Learning [75.17497166510083]
Event cameras sense intensity changes and have many advantages over conventional cameras.
Some methods have been proposed to reconstruct intensity images from event streams.
The outputs are still low-resolution (LR), noisy, and unrealistic.
We propose EventSR, a novel end-to-end pipeline that reconstructs LR images from event streams, enhances their quality, and upsamples the enhanced images.
arXiv Detail & Related papers (2020-03-17T10:58:10Z)
- Spike-FlowNet: Event-based Optical Flow Estimation with Energy-Efficient Hybrid Neural Networks [40.44712305614071]
We present Spike-FlowNet, a deep hybrid neural network architecture integrating SNNs and ANNs for efficiently estimating optical flow from sparse event camera outputs.
The network is trained end-to-end with self-supervised learning on the Multi-Vehicle Stereo Event Camera (MVSEC) dataset.
arXiv Detail & Related papers (2020-03-14T20:37:21Z)
- Rapid Whole Slide Imaging via Learning-based Two-shot Virtual Autofocusing [57.90239401665367]
Whole slide imaging (WSI) is an emerging technology for digital pathology.
We propose the concept of virtual autofocusing, which does not rely on mechanical adjustment to conduct refocusing.
arXiv Detail & Related papers (2020-03-14T13:40:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.