Frequency Cam: Imaging Periodic Signals in Real-Time
- URL: http://arxiv.org/abs/2211.00198v1
- Date: Tue, 1 Nov 2022 00:08:35 GMT
- Title: Frequency Cam: Imaging Periodic Signals in Real-Time
- Authors: Bernd Pfrommer
- Abstract summary: We present an efficient and fully asynchronous event camera algorithm for detecting the fundamental frequency at which image pixels flicker.
We discuss the important design parameters for full-sensor frequency imaging.
We present Frequency Cam, an open-source implementation as a ROS node that can run on a single core of a laptop CPU at more than 50 million events per second.
- Score: 1.774900701865444
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Due to their high temporal resolution and large dynamic range, event cameras
are uniquely suited for the analysis of time-periodic signals in an image. In
this work we present an efficient and fully asynchronous event camera algorithm
for detecting the fundamental frequency at which image pixels flicker. The
algorithm employs a second-order digital infinite impulse response (IIR) filter
to perform an approximate per-pixel brightness reconstruction and is more
robust to high-frequency noise than the baseline method we compare to. We
further demonstrate that using the falling edge of the signal leads to more
accurate period estimates than the rising edge, and that for certain signals
interpolating the zero-level crossings can further increase accuracy. Our
experiments find that the outstanding capabilities of the camera in detecting
frequencies up to 64 kHz for a single pixel do not carry over to full-sensor
imaging, as readout bandwidth limitations become a serious obstacle. This
suggests that a hardware implementation closer to the sensor will allow for
greatly improved frequency imaging. We discuss the important design parameters
for full-sensor frequency imaging and present Frequency Cam, an open-source
implementation as a ROS node that can run on a single core of a laptop CPU at
more than 50 million events per second. It produces results that are
qualitatively very similar to those obtained from the closed source vibration
analysis module in Prophesee's Metavision Toolkit. The code for Frequency Cam
and a demonstration video can be found at
https://github.com/berndpfrommer/frequency_cam
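The per-pixel algorithm described in the abstract can be sketched as follows. This is a minimal single-pixel illustration under stated assumptions, not the released implementation: a running-sum brightness estimate and a first-order baseline tracker stand in for the paper's second-order IIR reconstruction, and all parameter names are illustrative.

```python
def flicker_frequency(events, baseline_alpha=0.01):
    """Estimate the fundamental flicker frequency of a single pixel from
    its event stream.

    Sketch in the spirit of the paper's approach (approximate brightness
    reconstruction, falling-edge detection, interpolated zero-level
    crossings); the filter form here is a simplifying assumption.

    events: iterable of (timestamp_seconds, polarity), polarity in {+1, -1}.
    Returns the estimated frequency in Hz, or None if no full period was seen.
    """
    x = 0.0          # crude brightness: running sum of event polarities
    baseline = 0.0   # slowly tracked DC level, used as the zero reference
    prev_s = prev_t = None
    last_cross = None
    periods = []
    for t, pol in events:
        x += pol
        baseline += baseline_alpha * (x - baseline)
        s = x - baseline                  # zero-centered signal
        if prev_s is not None and prev_s > 0.0 >= s:
            # falling edge through the zero level; linearly interpolate
            # the exact crossing time between the two events
            t_cross = prev_t + (t - prev_t) * prev_s / (prev_s - s)
            if last_cross is not None:
                periods.append(t_cross - last_cross)
            last_cross = t_cross
        prev_s, prev_t = s, t
    return len(periods) / sum(periods) if periods else None
```

Timing the falling edge and interpolating the crossing reflects the paper's finding that both choices improve period accuracy over using the rising edge alone.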
Related papers
- bit2bit: 1-bit quanta video reconstruction via self-supervised photon prediction [57.199618102578576]
We propose bit2bit, a new method for reconstructing high-quality image stacks at the original resolution from sparse binary quanta image data.
Inspired by recent work on Poisson denoising, we developed an algorithm that creates a dense image sequence from sparse binary photon data.
We present a novel dataset containing a wide range of real SPAD high-speed videos under various challenging imaging conditions.
arXiv Detail & Related papers (2024-10-30T17:30:35Z)
- Frequency-Aware Deepfake Detection: Improving Generalizability through Frequency Space Learning [81.98675881423131]
This research addresses the challenge of developing a universal deepfake detector that can effectively identify unseen deepfake images.
Existing frequency-based paradigms have relied on frequency-level artifacts introduced during the up-sampling in GAN pipelines to detect forgeries.
We introduce a novel frequency-aware approach called FreqNet, centered around frequency domain learning, specifically designed to enhance the generalizability of deepfake detectors.
arXiv Detail & Related papers (2024-03-12T01:28:00Z)
- Faster Region-Based CNN Spectrum Sensing and Signal Identification in Cluttered RF Environments [0.7734726150561088]
We optimize a faster region-based convolutional neural network (FRCNN) for 1-dimensional (1D) signal processing and electromagnetic spectrum sensing.
Results show that our method has better localization performance, and is faster than the 2D equivalent.
arXiv Detail & Related papers (2023-02-20T09:35:13Z)
- Spatial-Temporal Frequency Forgery Clue for Video Forgery Detection in VIS and NIR Scenario [87.72258480670627]
Existing face forgery detection methods based on frequency domain find that the GAN forged images have obvious grid-like visual artifacts in the frequency spectrum compared to the real images.
This paper proposes a Cosine Transform-based Forgery Clue Augmentation Network (FCAN-DCT) to achieve a more comprehensive spatial-temporal feature representation.
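The cosine transform at the heart of such frequency-domain forgery clues can be illustrated with the plain 1-D DCT-II definition. This is a generic sketch only; FCAN-DCT itself operates on 2-D spatial-temporal blocks, and production code would use an optimized FFT-based routine.

```python
import math

def dct2_1d(x):
    """Unnormalized 1-D DCT-II, straight from the definition (no FFT).

    X[k] = sum_i x[i] * cos(pi * (i + 0.5) * k / n)

    A constant signal concentrates all energy in X[0]; periodic
    artifacts show up as energy in the higher coefficients.
    """
    n = len(x)
    return [sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n)
                for i in range(n))
            for k in range(n)]
```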
arXiv Detail & Related papers (2022-07-05T09:27:53Z)
- Toward Efficient Hyperspectral Image Processing inside Camera Pixels [1.6449390849183356]
Hyperspectral cameras generate a large amount of data due to the presence of hundreds of spectral bands.
To mitigate this problem, we propose a form of processing-in-pixel (PIP).
Our PIP-optimized custom CNN layers effectively compress the input data, significantly reducing the bandwidth required to transmit the data downstream to the HSI processing unit.
arXiv Detail & Related papers (2022-03-11T01:06:02Z)
- TUM-VIE: The TUM Stereo Visual-Inertial Event Dataset [50.8779574716494]
Event cameras are bio-inspired vision sensors which measure per pixel brightness changes.
They offer numerous benefits over traditional, frame-based cameras, including low latency, high dynamic range, high temporal resolution and low power consumption.
To foster the development of 3D perception and navigation algorithms with event cameras, we present the TUM-VIE dataset.
arXiv Detail & Related papers (2021-08-16T19:53:56Z)
- Wavelet-Based Network For High Dynamic Range Imaging [64.66969585951207]
Existing methods, such as optical flow based and end-to-end deep learning based solutions, are error-prone either in detail restoration or ghosting artifacts removal.
In this work, we propose a novel frequency-guided end-to-end deep neural network (FNet) to conduct HDR fusion in the frequency domain, and Wavelet Transform (DWT) is used to decompose inputs into different frequency bands.
The low-frequency signals are used to avoid specific ghosting artifacts, while the high-frequency signals are used for preserving details.
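The low/high band split that drives this design can be illustrated with a one-level Haar transform. This is a generic sketch, not FNet's actual transform, which applies deeper 2-D wavelet decompositions to images.

```python
import math

def haar_dwt1(x):
    """One-level Haar wavelet split of a 1-D signal into a low-frequency
    (average) band and a high-frequency (detail) band.

    Requires an even-length input; each output band has half the samples.
    """
    assert len(x) % 2 == 0, "signal length must be even"
    r = math.sqrt(2.0)
    pairs = list(zip(x[::2], x[1::2]))
    lo = [(a + b) / r for a, b in pairs]  # averages: coarse structure
    hi = [(a - b) / r for a, b in pairs]  # differences: fine detail
    return lo, hi
```

Here the `lo` band carries the coarse content used to suppress ghosting, while the `hi` band carries the detail to be preserved.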
arXiv Detail & Related papers (2021-08-03T12:26:33Z)
- SE-Harris and eSUSAN: Asynchronous Event-Based Corner Detection Using Megapixel Resolution CeleX-V Camera [9.314068908300285]
Event cameras generate an asynchronous event stream of per-pixel intensity changes with precise timestamps.
We propose a corner detection algorithm, eSUSAN, inspired by the conventional SUSAN (smallest univalue segment assimilating nucleus) algorithm for corner detection.
We also propose the SE-Harris corner detector, which uses adaptive normalization based on exponential decay to quickly construct a local surface of active events.
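A "local surface of active events" with exponential decay can be sketched as follows. The data layout, parameter names, and query scheme here are illustrative assumptions, not the SE-Harris implementation.

```python
import math

def decayed_time_surface(events, tau=0.05, width=4, height=4):
    """Build an exponentially decayed activity surface from events.

    Each pixel remembers the timestamp of its most recent event; its
    activity, evaluated at the time of the last event overall, is
    exp(-(t_query - t_last_event) / tau), so recently active pixels
    are near 1.0 and stale pixels decay toward 0.0.

    events: iterable of (timestamp_seconds, x, y).
    Returns a height x width grid of activity values in [0, 1].
    """
    stamp = [[None] * width for _ in range(height)]
    t_query = 0.0
    for t, xpix, ypix in events:
        stamp[ypix][xpix] = t   # remember last event time per pixel
        t_query = t
    surface = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            if stamp[y][x] is not None:
                surface[y][x] = math.exp(-(t_query - stamp[y][x]) / tau)
    return surface
```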
arXiv Detail & Related papers (2021-05-02T14:06:28Z)
- Asynchronous Corner Tracking Algorithm based on Lifetime of Events for DAVIS Cameras [0.9988653233188148]
Event cameras, such as the Dynamic and Active-pixel Vision Sensor (DAVIS), capture intensity changes in the scene and generate a stream of events in an asynchronous fashion.
The output rate of such cameras can reach up to 10 million events per second in high dynamic environments.
A novel asynchronous corner tracking method is proposed that uses both events and intensity images captured by a DAVIS camera.
arXiv Detail & Related papers (2020-10-29T12:02:40Z)
- A Modified Fourier-Mellin Approach for Source Device Identification on Stabilized Videos [72.40789387139063]
Multimedia forensic tools usually exploit characteristic noise traces left by the camera sensor on the acquired frames.
This analysis requires that the noise pattern characterizing the camera and the noise pattern extracted from video frames under analysis are geometrically aligned.
We propose to overcome this limitation by searching scaling and rotation parameters in the frequency domain.
arXiv Detail & Related papers (2020-05-20T12:06:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.