Event-based Background-Oriented Schlieren
- URL: http://arxiv.org/abs/2311.00434v1
- Date: Wed, 1 Nov 2023 10:57:20 GMT
- Title: Event-based Background-Oriented Schlieren
- Authors: Shintaro Shiba, Friedhelm Hamann, Yoshimitsu Aoki, Guillermo Gallego
- Abstract summary: Schlieren imaging is an optical technique to observe the flow of transparent media, such as air or water, without any particle seeding. Conventional frame-based techniques require cameras with high spatial and temporal resolution, which imposes limitations of bright illumination and expensive computation. Event cameras offer potential advantages (high dynamic range, high temporal resolution, and data efficiency) to overcome these limitations thanks to their bio-inspired sensing principle.
This paper presents a novel technique for perceiving air convection using events and frames, providing the first theoretical analysis that connects event data and schlieren.
- Score: 18.2247510082534
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Schlieren imaging is an optical technique to observe the flow of transparent
media, such as air or water, without any particle seeding. However,
conventional frame-based techniques require cameras with both high spatial and
temporal resolution, which imposes limitations of bright illumination and
expensive computation. Event cameras offer potential advantages (high dynamic
range, high temporal resolution, and data efficiency) to overcome such
limitations thanks to their bio-inspired sensing principle. This paper presents
a novel technique for perceiving air convection using events and frames,
providing the first theoretical analysis that connects event data and
schlieren. We formulate the problem as a variational optimization problem that
combines the linearized event generation model with a physically-motivated
parameterization estimating the temporal derivative of the air density.
Experiments with accurately aligned frame and event-camera data reveal that
the proposed method enables event cameras to obtain results on par with
existing frame-based optical flow techniques. Moreover, the proposed method
works under dark conditions where frame-based schlieren fails, and also
enables slow-motion analysis by leveraging the event camera's advantages. Our
work pioneers and opens a new stack of event-camera applications, as we
publish the source code as well as the first schlieren dataset with
high-quality frame and event data.
https://github.com/tub-rip/event_based_bos
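For intuition, the sketch below illustrates the kind of variational formulation the abstract describes: signed events are accumulated into a brightness-increment image dL, the linearized event generation model dL ≈ -∇L · u relates that increment to an apparent shift u of the background pattern (which, in background-oriented schlieren, encodes the refractive-index and hence air-density gradients along each ray), and u is recovered by minimizing a regularized least-squares objective. All function names, the accumulation scheme, and the plain gradient-descent solver are illustrative assumptions rather than the authors' implementation; their physically-motivated parameterization of the temporal air-density derivative is in the linked repository.

```python
import numpy as np

def accumulate_events(events, shape, C=0.2):
    """Sum signed polarities per pixel over a short time window.
    With contrast threshold C, this approximates the brightness
    increment dL(x) between the window's endpoints. (Hypothetical helper.)"""
    dL = np.zeros(shape)
    for x, y, t, p in events:              # p in {-1, +1}
        dL[y, x] += p * C
    return dL

def laplacian(f):
    """Discrete Laplacian with wrap-around borders, to keep the sketch short."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0)
            + np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)

def fit_background_shift(dL, L_ref, lam=0.1, iters=500, step=0.05):
    """Recover the apparent background shift u by gradient descent on
        E(u) = sum_x (dL + grad(L_ref) . u)^2 + lam * |grad u|^2,
    i.e. a regularized least-squares fit of the linearized event
    generation model  dL ~= -grad(L_ref) . u."""
    gy, gx = np.gradient(L_ref)            # spatial gradients of the background
    u = np.zeros((2,) + dL.shape)          # per-pixel shift (u_x, u_y)
    for _ in range(iters):
        r = dL + gx * u[0] + gy * u[1]     # residual of the data term
        u[0] -= step * (r * gx - lam * laplacian(u[0]))
        u[1] -= step * (r * gy - lam * laplacian(u[1]))
    return u
```

A call like fit_background_shift(accumulate_events(events, L_ref.shape), L_ref) would then yield one dense shift field per accumulation window; the paper goes further and estimates the temporal derivative of the air density itself.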
Related papers
- EventAid: Benchmarking Event-aided Image/Video Enhancement Algorithms with Real-captured Hybrid Dataset [55.12137324648253]
Event cameras are emerging imaging technology that offers advantages over conventional frame-based imaging sensors in dynamic range and sensing speed.
This paper focuses on five event-aided image and video enhancement tasks.
arXiv Detail & Related papers (2023-12-13T15:42:04Z)
- Event-based Continuous Color Video Decompression from Single Frames [38.59798259847563]
We present ContinuityCam, a novel approach to generate a continuous video from a single static RGB image, using an event camera.
Our approach combines continuous long-range motion modeling with a feature-plane-based neural integration model, enabling frame prediction at arbitrary times within the events.
arXiv Detail & Related papers (2023-11-30T18:59:23Z)
- Robust e-NeRF: NeRF from Sparse & Noisy Events under Non-Uniform Motion [67.15935067326662]
Event cameras offer low power, low latency, high temporal resolution and high dynamic range.
NeRF is seen as the leading candidate for efficient and effective scene representation.
We propose Robust e-NeRF, a novel method to directly and robustly reconstruct NeRFs from moving event cameras.
arXiv Detail & Related papers (2023-09-15T17:52:08Z)
- Deformable Neural Radiance Fields using RGB and Event Cameras [65.40527279809474]
We develop a novel method to model the deformable neural radiance fields using RGB and event cameras.
The proposed method uses the asynchronous stream of events and sparse RGB frames.
Experiments conducted on both realistically rendered graphics and real-world datasets demonstrate a significant benefit of the proposed method.
arXiv Detail & Related papers (2023-09-15T14:19:36Z)
- Revisiting Event-based Video Frame Interpolation [49.27404719898305]
Dynamic vision sensors, or event cameras, provide rich complementary information for video frame interpolation.
Estimating optical flow from events is arguably more difficult than from RGB information.
We propose a divide-and-conquer strategy in which event-based intermediate frame synthesis happens incrementally in multiple simplified stages.
arXiv Detail & Related papers (2023-07-24T06:51:07Z)
- Fusing Frame and Event Vision for High-speed Optical Flow for Edge Application [2.048335092363435]
Event cameras provide continuous asynchronous event streams overcoming the frame-rate limitation.
We fuse the complementary accuracy and speed advantages of the frame and event-based pipelines to provide high-speed optical flow.
arXiv Detail & Related papers (2022-07-21T19:15:05Z)
- Globally-Optimal Event Camera Motion Estimation [30.79931004393174]
Event cameras are bio-inspired sensors that perform well in HDR conditions and have high temporal resolution.
Event cameras measure asynchronous pixel-level changes and return them in a highly discretised format (see the sketch below).
arXiv Detail & Related papers (2022-03-08T08:24:22Z)
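As background for the entries above (and the "highly discretised format" just mentioned): an event camera outputs a stream of tuples (x, y, t, p), where an event fires at pixel (x, y) and timestamp t whenever the log intensity there changes by a contrast threshold C, with polarity p = ±1. A minimal sketch of that triggering rule, with all names hypothetical:

```python
import numpy as np

def trigger_events(log_I, ref, t, C=0.2):
    """Emit events at pixels whose log intensity has drifted by at least
    the contrast threshold C from the per-pixel reference level 'ref',
    which mimics the sensor's asynchronous per-pixel memory."""
    events = []
    diff = log_I - ref
    ys, xs = np.nonzero(np.abs(diff) >= C)
    for y, x in zip(ys, xs):
        p = 1 if diff[y, x] > 0 else -1
        events.append((x, y, t, p))        # one event: pixel, time, polarity
        ref[y, x] += p * C                 # reset the reference at this pixel
    return events
```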
- TimeLens: Event-based Video Frame Interpolation [54.28139783383213]
We introduce Time Lens, a novel method that leverages the advantages of both synthesis-based and flow-based approaches.
We show an up to 5.21 dB improvement in terms of PSNR over state-of-the-art frame-based and event-based methods.
arXiv Detail & Related papers (2021-06-14T10:33:47Z)
- Learning Monocular Dense Depth from Events [53.078665310545745]
Event cameras report brightness changes in the form of a stream of asynchronous events instead of intensity frames.
Recent learning-based approaches have been applied to event data for tasks such as monocular depth prediction.
We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods.
arXiv Detail & Related papers (2020-10-16T12:36:23Z)
- Back to Event Basics: Self-Supervised Learning of Image Reconstruction for Event Cameras via Photometric Constancy [0.0]
Event cameras are novel vision sensors that sample, in an asynchronous fashion, brightness increments with low latency and high temporal resolution.
We propose a novel, lightweight neural network for optical flow estimation that achieves high-speed inference with only a minor drop in performance.
Results across multiple datasets show that the performance of the proposed self-supervised approach is in line with the state-of-the-art.
arXiv Detail & Related papers (2020-09-17T13:30:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.