Back to Event Basics: Self-Supervised Learning of Image Reconstruction for Event Cameras via Photometric Constancy
- URL: http://arxiv.org/abs/2009.08283v2
- Date: Mon, 12 Apr 2021 15:19:15 GMT
- Title: Back to Event Basics: Self-Supervised Learning of Image Reconstruction for Event Cameras via Photometric Constancy
- Authors: F. Paredes-Vallés, G. C. H. E. de Croon
- Abstract summary: Event cameras are novel vision sensors that sample, in an asynchronous fashion, brightness increments with low latency and high temporal resolution.
We propose a novel, lightweight neural network for optical flow estimation that achieves high-speed inference with only a minor drop in performance.
Results across multiple datasets show that the performance of the proposed self-supervised approach is in line with the state-of-the-art.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Event cameras are novel vision sensors that sample, in an asynchronous
fashion, brightness increments with low latency and high temporal resolution.
The resulting streams of events are of high value by themselves, especially
for high-speed motion estimation. However, a growing body of work has also focused
on the reconstruction of intensity frames from the events, as this allows
bridging the gap with the existing literature on appearance- and frame-based
computer vision. Recent work has mostly approached this problem using neural
networks trained with synthetic, ground-truth data. In this work we approach,
for the first time, the intensity reconstruction problem from a self-supervised
learning perspective. Our method, which leverages the knowledge of the inner
workings of event cameras, combines estimated optical flow and the event-based
photometric constancy to train neural networks without the need for any
ground-truth or synthetic data. Results across multiple datasets show that the
performance of the proposed self-supervised approach is in line with the
state-of-the-art. Additionally, we propose a novel, lightweight neural network
for optical flow estimation that achieves high-speed inference with only a
minor drop in performance.
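To make the training signal concrete: under the linearized event generation model, the brightness increment accumulated at a pixel over a short window is approximated by the negative dot product of the spatial log-intensity gradient and the optical flow. Below is a minimal PyTorch sketch of how such a photometric-constancy loss could be wired up; the function name, tensor layout, accumulation window `dt`, and L1 penalty are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def photometric_constancy_loss(brightness_inc, flow, log_image, dt=1.0):
    # Hypothetical sketch (names, layout, and loss choice are assumptions).
    #
    # Event model: a pixel fires an event when its log intensity changes by
    # the contrast threshold C, so the per-pixel sum of event polarities
    # times C approximates the brightness increment Delta L over the window.
    #
    # Linearized photometric constancy:
    #   Delta L(x) ~= -(grad L(x) . u(x)) * dt
    # i.e. measured brightness increments are explained by image gradients
    # moving with the optical flow u.
    #
    # brightness_inc: (B, 1, H, W) accumulated event polarities * C
    # flow:           (B, 2, H, W) estimated optical flow (u_x, u_y)
    # log_image:      (B, 1, H, W) reconstructed log-intensity image

    # Central-difference spatial gradients of the reconstructed log image.
    kx = torch.tensor([[[[-0.5, 0.0, 0.5]]]],
                      dtype=log_image.dtype, device=log_image.device)
    ky = kx.transpose(2, 3)
    grad_x = F.conv2d(log_image, kx, padding=(0, 1))
    grad_y = F.conv2d(log_image, ky, padding=(1, 0))

    # Brightness increment predicted by the constancy assumption.
    pred_inc = -(grad_x * flow[:, 0:1] + grad_y * flow[:, 1:2]) * dt

    # Penalize disagreement with the increments measured by the camera;
    # gradients flow into both the reconstruction and the flow estimate.
    return F.l1_loss(pred_inc, brightness_inc)
```

The sketch only illustrates the constancy term that couples the reconstructed image to the estimated flow; the paper's full pipeline (event representation, network architectures, any regularization) is not reproduced here.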
Related papers
- A Novel Spike Transformer Network for Depth Estimation from Event Cameras via Cross-modality Knowledge Distillation [3.355813093377501]
Event cameras operate differently from traditional digital cameras, continuously capturing data and generating binary spikes that encode time, location, and light intensity.
This necessitates the development of innovative, spike-aware algorithms tailored for event cameras.
We propose a purely spike-driven spike transformer network for depth estimation from spiking camera data.
arXiv Detail & Related papers (2024-04-26T11:32:53Z) - Learning Robust Multi-Scale Representation for Neural Radiance Fields
from Unposed Images [65.41966114373373]
We present an improved solution to the neural image-based rendering problem in computer vision.
The proposed approach can synthesize a realistic image of the scene from a novel viewpoint at test time.
arXiv Detail & Related papers (2023-11-08T08:18:23Z) - Neuromorphic Optical Flow and Real-time Implementation with Event
Cameras [47.11134388304464]
We build on the latest developments in event-based vision and spiking neural networks.
We propose a new network architecture that improves the state-of-the-art self-supervised optical flow accuracy.
We demonstrate high-speed optical flow prediction with almost two orders of magnitude lower complexity.
arXiv Detail & Related papers (2023-04-14T14:03:35Z) - Taming Contrast Maximization for Learning Sequential, Low-latency,
Event-based Optical Flow [18.335337530059867]
Event cameras have gained significant traction since they open up new avenues for low-latency and low-power solutions to complex computer vision problems.
To unlock these solutions, it is necessary to develop algorithms that can leverage the unique nature of event data.
In this work, we propose a novel self-supervised learning pipeline for the estimation of event-based optical flow.
arXiv Detail & Related papers (2023-03-09T12:37:33Z) - Physics to the Rescue: Deep Non-line-of-sight Reconstruction for
- Physics to the Rescue: Deep Non-line-of-sight Reconstruction for High-speed Imaging [13.271762773872476]
We present a novel deep model that incorporates the complementary physics priors of wave propagation and volume rendering into a neural network for high-quality and robust NLOS reconstruction.
Our method outperforms prior physics- and learning-based approaches on both synthetic and real measurements.
arXiv Detail & Related papers (2022-05-03T02:47:02Z) - Spatio-Temporal Recurrent Networks for Event-Based Optical Flow
Estimation [47.984368369734995]
We introduce a novel recurrent encoding-decoding neural network architecture for event-based optical flow estimation.
The network is end-to-end trained with self-supervised learning on the Multi-Vehicle Stereo Event Camera dataset.
We show that it outperforms existing state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2021-09-10T13:37:37Z) - Optical Flow Estimation from a Single Motion-blurred Image [66.2061278123057]
Motion blur in an image can be of practical interest for fundamental computer vision problems.
We propose a novel framework to estimate optical flow from a single motion-blurred image in an end-to-end manner.
arXiv Detail & Related papers (2021-03-04T12:45:18Z) - Learning Monocular Dense Depth from Events [53.078665310545745]
Event cameras report brightness changes in the form of a stream of asynchronous events instead of intensity frames.
Learning-based approaches have recently been applied to event-based data for tasks such as monocular depth prediction.
We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods.
arXiv Detail & Related papers (2020-10-16T12:36:23Z) - Event-based Asynchronous Sparse Convolutional Networks [54.094244806123235]
Event cameras are bio-inspired sensors that respond to per-pixel brightness changes in the form of asynchronous and sparse "events".
We present a general framework for converting models trained on synchronous image-like event representations into asynchronous models with identical output.
We show both theoretically and experimentally that this drastically reduces the computational complexity and latency of high-capacity, synchronous neural networks.
arXiv Detail & Related papers (2020-03-20T08:39:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.