Density Invariant Contrast Maximization for Neuromorphic Earth Observations
- URL: http://arxiv.org/abs/2304.14125v2
- Date: Wed, 3 May 2023 12:33:40 GMT
- Title: Density Invariant Contrast Maximization for Neuromorphic Earth Observations
- Authors: Sami Arja, Alexandre Marcireau, Richard L. Balthazor, Matthew G.
McHarg, Saeed Afshar and Gregory Cohen
- Abstract summary: Contrast maximization (CMax) techniques are widely used in event-based vision systems to estimate the motion parameters of the camera and generate high-contrast images.
These techniques are noise-intolerant and suffer from the multiple-extrema problem, which arises when the scene contains more noisy events than structure.
Our proposed solution overcomes the multiple-extrema and noise-intolerance problems by correcting the warped events before calculating the contrast.
- Score: 55.970609838687864
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contrast maximization (CMax) techniques are widely used in event-based vision
systems to estimate the motion parameters of the camera and generate
high-contrast images. However, these techniques are noise-intolerant and
suffer from the multiple-extrema problem, which arises when the scene contains
more noisy events than structure, causing the contrast to be high at multiple
locations. This makes estimating the camera motion extremely challenging,
which is a problem for neuromorphic Earth observation: without a proper
estimate of the motion parameters, it is not possible to generate a
high-contrast map, and important details are lost. Similar CMax-based methods
have addressed this problem by changing or augmenting the objective function
so that it converges to the correct motion parameters. Our proposed solution
overcomes the multiple-extrema and noise-intolerance problems by correcting
the warped events before calculating the contrast, and it offers the following
advantages: it does not depend on the event data, it does not require a prior
on the camera motion, and it keeps the rest of the CMax pipeline unchanged.
This ensures that the contrast is high only around the correct motion
parameters. Our approach enables the creation of better motion-compensated
maps through an analytical compensation technique, demonstrated on a novel
dataset from the International Space Station (ISS).
Code is available at \url{https://github.com/neuromorphicsystems/event_warping}
Related papers
- Motion-adaptive Separable Collaborative Filters for Blind Motion Deblurring [71.60457491155451]
Eliminating image blur produced by various kinds of motion has been a challenging problem.
We propose a novel real-world deblurring filtering model called the Motion-adaptive Separable Collaborative Filter.
Our method provides an effective solution for real-world motion blur removal and achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-04-19T19:44:24Z)
- CamP: Camera Preconditioning for Neural Radiance Fields [56.46526219931002]
NeRFs can be optimized to obtain high-fidelity 3D scene reconstructions of objects and large-scale scenes.
Extrinsic and intrinsic camera parameters are usually estimated using Structure-from-Motion (SfM) methods as a pre-processing step to NeRF.
We propose using a proxy problem to compute a whitening transform that eliminates the correlation between camera parameters and normalizes their effects.
arXiv Detail & Related papers (2023-08-21T17:59:54Z)
- Towards Nonlinear-Motion-Aware and Occlusion-Robust Rolling Shutter Correction [54.00007868515432]
Existing methods face challenges in estimating the accurate correction field due to the uniform velocity assumption.
We propose a geometry-based Quadratic Rolling Shutter (QRS) motion solver, which precisely estimates the high-order correction field of individual pixels.
Our method surpasses the state-of-the-art by +4.98, +0.77, and +4.33 of PSNR on Carla-RS, Fastec-RS, and BS-RSC datasets, respectively.
arXiv Detail & Related papers (2023-03-31T15:09:18Z)
- Event-based Image Deblurring with Dynamic Motion Awareness [10.81953574179206]
We introduce the first dataset containing pairs of real RGB blur images and related events during the exposure time.
Our results show better robustness overall when using events, with improvements in PSNR of up to 1.57 dB on synthetic data and 1.08 dB on real event data.
arXiv Detail & Related papers (2022-08-24T09:39:55Z)
- ParticleSfM: Exploiting Dense Point Trajectories for Localizing Moving Cameras in the Wild [57.37891682117178]
We present a robust dense indirect structure-from-motion method for videos that is based on dense correspondence from pairwise optical flow.
A novel neural network architecture is proposed for processing irregular point trajectory data.
Experiments on MPI Sintel dataset show that our system produces significantly more accurate camera trajectories.
arXiv Detail & Related papers (2022-07-19T09:19:45Z)
- Globally-Optimal Contrast Maximisation for Event Cameras [30.79931004393174]
Event cameras are bio-inspired sensors that perform well in challenging illumination with high temporal resolution.
The pixels of an event camera operate independently and asynchronously.
The flow of events is modelled by a general homographic warping in a space-time volume.
arXiv Detail & Related papers (2022-06-10T14:06:46Z)
- Globally-Optimal Event Camera Motion Estimation [30.79931004393174]
Event cameras are bio-inspired sensors that perform well in HDR conditions and have high temporal resolution.
Event cameras measure asynchronous pixel-level changes and return them in a highly discretised format.
arXiv Detail & Related papers (2022-03-08T08:24:22Z)
- Visual Odometry with an Event Camera Using Continuous Ray Warping and Volumetric Contrast Maximization [31.627936023222052]
We present a new solution to tracking and mapping with an event camera.
The motion of the camera contains both rotation and translation, and the displacements happen in an arbitrarily structured environment.
We introduce a new solution to this problem by performing contrast maximization in 3D.
The practical validity of our approach is supported by an application to AGV motion estimation and 3D reconstruction with a single vehicle-mounted event camera.
arXiv Detail & Related papers (2021-07-07T04:32:57Z)
- Event-based Motion Segmentation with Spatio-Temporal Graph Cuts [51.17064599766138]
We have developed a method to identify independently moving objects acquired with an event-based camera.
The method performs on par or better than the state of the art without having to predetermine the number of expected moving objects.
arXiv Detail & Related papers (2020-12-16T04:06:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.