On the Benefits of Visual Stabilization for Frame- and Event-based Perception
- URL: http://arxiv.org/abs/2408.15602v1
- Date: Wed, 28 Aug 2024 07:49:30 GMT
- Title: On the Benefits of Visual Stabilization for Frame- and Event-based Perception
- Authors: Juan Pablo Rodriguez-Gomez, Jose Ramiro Martinez-de Dios, Anibal Ollero, Guillermo Gallego
- Abstract summary: This paper presents a processing-based stabilization approach to compensate for the camera's rotational motion.
We evaluate the benefits of stabilization in two perception applications: feature tracking and estimating the translation component of the camera's ego-motion.
Experiments show that stabilization improves feature tracking and camera ego-motion estimation accuracy by 27.37% and 34.82%, respectively.
- Score: 9.603053472399047
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Vision-based perception systems are typically exposed to large orientation changes in different robot applications. In such conditions, their performance might be compromised due to the inherent complexity of processing data captured under challenging motion. Integration of mechanical stabilizers to compensate for the camera rotation is not always possible due to robot payload constraints. This paper presents a processing-based stabilization approach to compensate for the camera's rotational motion both on events and on frames (i.e., images). Assuming that the camera's attitude is available, we evaluate the benefits of stabilization in two perception applications: feature tracking and estimating the translation component of the camera's ego-motion. The validation is performed using synthetic data and sequences from well-known event-based vision datasets. The experiments show that stabilization can improve feature tracking and camera ego-motion estimation accuracy by 27.37% and 34.82%, respectively. Concurrently, stabilization can reduce the processing time of computing the camera's linear velocity by at least 25%. Code is available at https://github.com/tub-rip/visual_stabilization
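The abstract's core idea, undoing a pure camera rotation in software when the attitude is known, can be sketched with a standard rotational homography. This is a minimal illustration, not the authors' implementation; the function names, the event array layout `(x, y, t, polarity)`, and the example intrinsics are assumptions made here for clarity.

```python
import numpy as np

def stabilizing_homography(K, R):
    """Homography that undoes a pure camera rotation.
    K: 3x3 camera intrinsics; R: 3x3 rotation of the camera
    relative to a reference orientation.
    Maps pixels in the rotated view back to the reference view:
    x_stab ~ K R^T K^-1 x."""
    return K @ R.T @ np.linalg.inv(K)

def warp_events(events, K, R):
    """Rotationally stabilize a batch of events given attitude R.
    events: (N, 4) array of (x, y, t, polarity); timestamps and
    polarities are passed through unchanged."""
    H = stabilizing_homography(K, R)
    # Lift pixel coordinates to homogeneous form.
    xy1 = np.column_stack([events[:, 0], events[:, 1],
                           np.ones(len(events))])
    warped = (H @ xy1.T).T
    warped = warped[:, :2] / warped[:, 2:3]  # dehomogenize
    out = events.copy()
    out[:, :2] = warped
    return out
```

The same homography can stabilize frames by inverse warping each pixel; in practice one would apply `R` per event timestamp (interpolating the attitude stream) rather than one rotation per batch, but the per-event transform is identical to the one above.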
Related papers
- Model Optimization for Multi-Camera 3D Detection and Tracking [13.756560739163362]
Outside-in multi-camera perception is increasingly important in indoor environments. We evaluate Sparse4D, a query-based 3D detection and tracking framework. We study reduced input frame rates, post-training quantization, transfer to the WILDTRACK benchmark, and Transformer Engine mixed-precision fine-tuning.
arXiv Detail & Related papers (2026-01-31T01:51:30Z) - PROFusion: Robust and Accurate Dense Reconstruction via Camera Pose Regression and Optimization [21.23419310544054]
Real-time dense scene reconstruction is crucial for robotics. Current RGB-D SLAM systems fail when cameras experience large viewpoint changes, fast motions, or sudden shaking.
arXiv Detail & Related papers (2025-09-29T03:20:49Z) - Human-Robot Navigation using Event-based Cameras and Reinforcement Learning [1.7614751781649955]
This work introduces a robot navigation controller that combines event cameras and other sensors with reinforcement learning to enable real-time human-centered navigation and obstacle avoidance. Unlike conventional image-based controllers, which operate at fixed rates and suffer from motion blur and latency, this approach leverages the asynchronous nature of event cameras to process visual information over flexible time intervals.
arXiv Detail & Related papers (2025-06-12T15:03:08Z) - CamI2V: Camera-Controlled Image-to-Video Diffusion Model [11.762824216082508]
In this paper, we emphasize the necessity of integrating explicit physical constraints into model design.
Epipolar attention is proposed for modeling all cross-frame relationships from a novel perspective of noised condition.
We achieve a 25.5% improvement in camera controllability on RealEstate10K while maintaining strong generalization to out-of-domain images.
arXiv Detail & Related papers (2024-10-21T12:36:27Z) - VICAN: Very Efficient Calibration Algorithm for Large Camera Networks [49.17165360280794]
We introduce a novel methodology that extends Pose Graph Optimization techniques.
We consider the bipartite graph encompassing cameras, object poses evolving dynamically, and camera-object relative transformations at each time step.
Our framework retains compatibility with traditional PGO solvers, but its efficacy benefits from a custom-tailored optimization scheme.
arXiv Detail & Related papers (2024-03-25T17:47:03Z) - Minimum Latency Deep Online Video Stabilization [77.68990069996939]
We present a novel camera path optimization framework for the task of online video stabilization.
In this work, we adopt recent off-the-shelf high-quality deep motion models for motion estimation to recover the camera trajectory.
Our approach significantly outperforms state-of-the-art online methods both qualitatively and quantitatively.
arXiv Detail & Related papers (2022-12-05T07:37:32Z) - GPU-accelerated SIFT-aided source identification of stabilized videos [63.084540168532065]
We exploit the parallelization capabilities of Graphics Processing Units (GPUs) in the framework of stabilised frames inversion.
We propose to exploit SIFT features to estimate the camera momentum and to identify less stabilized temporal segments.
Experiments confirm the effectiveness of the proposed approach in reducing the required computational time and improving the source identification accuracy.
arXiv Detail & Related papers (2022-07-29T07:01:31Z) - PUCK: Parallel Surface and Convolution-kernel Tracking for Event-Based Cameras [4.110120522045467]
Event-cameras can guarantee fast visual sensing in dynamic environments, but require a tracking algorithm that can keep up with the high data rate induced by the robot ego-motion.
We introduce a novel tracking method that leverages the Exponential Reduced Ordinal Surface (EROS) data representation to decouple event-by-event processing and tracking.
We propose the task of tracking the air hockey puck sliding on a surface, with the future aim of controlling the iCub robot to reach the target precisely and on time.
arXiv Detail & Related papers (2022-05-16T13:23:52Z) - Asynchronous Optimisation for Event-based Visual Odometry [53.59879499700895]
Event cameras open up new possibilities for robotic perception due to their low latency and high dynamic range.
We focus on event-based visual odometry (VO).
We propose an asynchronous structure-from-motion optimisation back-end.
arXiv Detail & Related papers (2022-03-02T11:28:47Z) - Self-Supervised Real-time Video Stabilization [100.00816752529045]
We propose a novel method of real-time video stabilization.
It transforms a shaky video to a stabilized video as if it were stabilized via gimbals in real-time.
arXiv Detail & Related papers (2021-11-10T22:49:56Z) - Out-of-boundary View Synthesis Towards Full-Frame Video Stabilization [82.56853587380168]
Warping-based video stabilizers smooth camera trajectory by constraining each pixel's displacement and warp frames from unstable ones.
OVS can be integrated into existing warping-based stabilizers as a plug-and-play module to significantly improve the cropping ratio of the stabilized results.
arXiv Detail & Related papers (2021-08-20T08:07:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.