Event-based Non-Rigid Reconstruction from Contours
- URL: http://arxiv.org/abs/2210.06270v1
- Date: Wed, 12 Oct 2022 14:53:11 GMT
- Title: Event-based Non-Rigid Reconstruction from Contours
- Authors: Yuxuan Xue, Haolong Li, Stefan Leutenegger, Jörg Stückler
- Abstract summary: We propose a novel approach for reconstructing such deformations using measurements from event-based cameras.
Under the assumption of a static background, where all events are generated by the motion, our approach estimates the deformation of objects from events generated at the object contour.
It associates events to mesh faces on the contour and maximizes the alignment of the line of sight through the event pixel with the associated face.
- Score: 17.049602518532847
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visual reconstruction of fast non-rigid object deformations over time is a
challenge for conventional frame-based cameras. In this paper, we propose a
novel approach for reconstructing such deformations using measurements from
event-based cameras. Under the assumption of a static background, where all
events are generated by the motion, our approach estimates the deformation of
objects from events generated at the object contour in a probabilistic
optimization framework. It associates events to mesh faces on the contour and
maximizes the alignment of the line of sight through the event pixel with the
associated face. In experiments on synthetic and real data, we demonstrate the
advantages of our method over state-of-the-art optimization and learning-based
approaches for reconstructing the motion of human hands. A video of the
experiments is available at https://youtu.be/gzfw7i5OKjg
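The core objective described in the abstract, associating each event with a contour face and maximizing the alignment of the event's line of sight with that face, can be pictured with a minimal sketch. The version below assumes a pinhole camera at the origin of the mesh coordinate frame and a hard nearest-face association via face centers; the paper itself formulates a probabilistic optimization, so the residual and all names here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the contour-alignment idea: reward mesh deformations
# whose contour faces lie close to the viewing rays of the observed events.
# Hard nearest-face association via face centers is an assumption; the
# paper uses a probabilistic event-to-face association.
import numpy as np

def pixel_to_ray(u, v, K):
    """Back-project pixel (u, v) to a unit viewing ray (pinhole model)."""
    ray = np.linalg.solve(K, np.array([u, v, 1.0]))
    return ray / np.linalg.norm(ray)

def contour_alignment_cost(events, face_centers, K):
    """Sum of squared point-to-ray distances between each event's line of
    sight and its nearest contour-face center (camera at the origin)."""
    cost = 0.0
    for u, v in events:
        d = pixel_to_ray(u, v, K)                # unit ray direction
        proj = face_centers @ d                  # lengths along the ray
        perp = face_centers - np.outer(proj, d)  # perpendicular offsets
        dists = np.linalg.norm(perp, axis=1)
        cost += dists.min() ** 2                 # nearest face is associated
    return cost
```

Minimizing this cost over the deformation parameters that move face_centers then plays the role of maximizing the line-of-sight alignment described in the abstract.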
Related papers
- ESVO2: Direct Visual-Inertial Odometry with Stereo Event Cameras [33.81592783496106]
Event-based visual odometry aims at solving the tracking and mapping sub-problems in parallel.
We build an event-based stereo visual-inertial odometry system on top of our previous direct pipeline, Event-based Stereo Visual Odometry.
arXiv Detail & Related papers (2024-10-12T05:35:27Z)
- Learning Robust Multi-Scale Representation for Neural Radiance Fields from Unposed Images [65.41966114373373]
We present an improved solution to the neural image-based rendering problem in computer vision.
The proposed approach can synthesize a realistic image of the scene from a novel viewpoint at test time.
arXiv Detail & Related papers (2023-11-08T08:18:23Z)
- SceNeRFlow: Time-Consistent Reconstruction of General Dynamic Scenes [75.9110646062442]
We propose SceNeRFlow to reconstruct a general, non-rigid scene in a time-consistent manner.
Our method takes multi-view RGB videos and background images from static cameras with known camera parameters as input.
We show experimentally that, unlike prior work that only handles small motion, our method enables the reconstruction of studio-scale motions.
arXiv Detail & Related papers (2023-08-16T09:50:35Z)
- On the Generation of a Synthetic Event-Based Vision Dataset for Navigation and Landing [69.34740063574921]
This paper presents a methodology for generating event-based vision datasets from optimal landing trajectories.
We construct sequences of photorealistic images of the lunar surface with the Planet and Asteroid Natural Scene Generation Utility.
We demonstrate that the pipeline can generate realistic event-based representations of surface features by constructing a dataset of 500 trajectories.
arXiv Detail & Related papers (2023-08-01T09:14:20Z)
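The event-generation step in a pipeline like the one above can be pictured with the standard brightness-change event camera model: a pixel emits an event whenever its log-intensity changes by more than a contrast threshold. The sketch below is a frame-to-frame simplification under that assumed model (at most one event per pixel per frame, no sub-frame interpolation); names and thresholds are hypothetical, not the paper's actual pipeline.

```python
# Hedged sketch of the standard brightness-change event model used to
# synthesize events from an image sequence. Real simulators interpolate
# between frames and can emit multiple events per pixel; this version
# emits at most one event per pixel per frame for brevity.
import numpy as np

def simulate_events(frames, timestamps, C=0.2, eps=1e-6):
    """Return (t, x, y, polarity) events where per-pixel log-intensity
    changes by at least the contrast threshold C."""
    ref = np.log(frames[0].astype(np.float64) + eps)
    events = []
    for img, t in zip(frames[1:], timestamps[1:]):
        log_i = np.log(img.astype(np.float64) + eps)
        diff = log_i - ref
        ys, xs = np.nonzero(np.abs(diff) >= C)
        for x, y in zip(xs, ys):
            pol = 1 if diff[y, x] > 0 else -1
            events.append((t, int(x), int(y), pol))
            ref[y, x] += pol * C  # move the reference toward the new level
    return events
```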
- Asynchronous Optimisation for Event-based Visual Odometry [53.59879499700895]
Event cameras open up new possibilities for robotic perception due to their low latency and high dynamic range.
We focus on event-based visual odometry (VO).
We propose an asynchronous structure-from-motion optimisation back-end.
arXiv Detail & Related papers (2022-03-02T11:28:47Z)
- DeFMO: Deblurring and Shape Recovery of Fast Moving Objects [139.67524021201103]
A generative model embeds an image of the blurred object into a latent space representation, disentangles the background, and renders the sharp appearance.
DeFMO outperforms the state of the art and generates high-quality temporal super-resolution frames.
arXiv Detail & Related papers (2020-12-01T16:02:04Z)
- Learning Event-Based Motion Deblurring [39.16921854492941]
Event-based cameras can capture fast motion as events at a high time rate.
We show how the deblurring optimization can be unfolded into a novel end-to-end deep architecture.
The proposed approach achieves state-of-the-art reconstruction quality and generalizes better to real-world motion blur.
arXiv Detail & Related papers (2020-04-13T07:01:06Z)
- Future Video Synthesis with Object Motion Prediction [54.31508711871764]
Instead of synthesizing images directly, our approach is designed to understand the complex scene dynamics.
The appearance of the scene components in the future is predicted by non-rigid deformation of the background and affine transformation of moving objects.
Experimental results on the Cityscapes and KITTI datasets show that our model outperforms the state-of-the-art in terms of visual quality and accuracy.
arXiv Detail & Related papers (2020-04-01T16:09:54Z)
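The object branch of the decomposition above, predicting a moving object's future appearance by an affine transformation, can be pictured with a toy warp. The sketch below assumes SciPy and a known 2x2 affine matrix plus translation for a single object patch; the paper predicts such motions with learned networks, so everything here is an illustrative stand-in.

```python
# Toy illustration of predicting an object's next appearance by an affine
# transformation x' = A @ x + t. scipy.ndimage.affine_transform applies
# the inverse map to output coordinates, hence the inversion below.
import numpy as np
from scipy.ndimage import affine_transform

def warp_affine(patch, A, t):
    """Warp a 2-D patch forward by the affine map x' = A @ x + t."""
    A_inv = np.linalg.inv(A)
    return affine_transform(patch, A_inv, offset=-A_inv @ t, order=1)

# Example: a small rotation plus a 1.5-pixel shift as the predicted motion.
theta = np.deg2rad(2.0)
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = np.array([1.5, 0.0])
patch = np.random.rand(64, 64)
predicted_patch = warp_affine(patch, A, t)
```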
- End-to-end Learning of Object Motion Estimation from Retinal Events for Event-based Object Tracking [35.95703377642108]
We propose a novel deep neural network to learn and regress a parametric object-level motion/transform model for event-based object tracking.
To achieve this goal, we propose a synchronous Time-Surface with Linear Time Decay representation.
We feed the sequence of TSLTD frames to a novel Retinal Motion Regression Network (RMRNet) to perform end-to-end 5-DoF object motion regression.
arXiv Detail & Related papers (2020-02-14T08:19:50Z)
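The TSLTD representation named above is a time surface with linear time decay; a rough, assumed reading of that idea is sketched below, rendering recent events into a two-channel frame (one channel per polarity) whose values fall off linearly with event age. The exact formulation in the paper may differ, and all names are placeholders.

```python
# Illustrative time surface with linear time decay: each pixel stores a
# value that is 1 for an event occurring now and decays linearly to 0 at
# the edge of the time window. Polarities go to separate channels.
import numpy as np

def time_surface_linear_decay(events, t_now, window, height, width):
    """Render (t, x, y, polarity) events in [t_now - window, t_now] into
    a (2, height, width) frame; newer events dominate via max()."""
    surface = np.zeros((2, height, width))
    for t, x, y, pol in events:
        age = t_now - t
        if 0.0 <= age <= window:
            value = 1.0 - age / window       # linear decay with age
            channel = 0 if pol > 0 else 1    # split by polarity
            surface[channel, y, x] = max(surface[channel, y, x], value)
    return surface
```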
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.