Passive Non-line-of-sight Imaging for Moving Targets with an Event
Camera
- URL: http://arxiv.org/abs/2209.13300v1
- Date: Tue, 27 Sep 2022 10:56:14 GMT
- Title: Passive Non-line-of-sight Imaging for Moving Targets with an Event
Camera
- Authors: Conghe Wang (1), Yutong He (2), Xia Wang (1), Honghao Huang (2),
Changda Yan (1), Xin Zhang (1) and Hongwei Chen (2) ((1) Key Laboratory of
Photoelectronic Imaging Technology and System of Ministry of Education of
China, School of Optics and Photonics, Beijing Institute of Technology (2)
Beijing National Research Center for Information Science and Technology
(BNRist), Department of Electronic Engineering, Tsinghua University)
- Abstract summary: Non-line-of-sight (NLOS) imaging is an emerging technique for detecting objects behind obstacles or around corners.
Recent studies on passive NLOS mainly focus on steady-state measurement and reconstruction methods.
We propose a novel event-based passive NLOS imaging method.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Non-line-of-sight (NLOS) imaging is an emerging technique for detecting
objects behind obstacles or around corners. Recent studies on passive NLOS
mainly focus on steady-state measurement and reconstruction methods, which show
limitations in recognizing moving targets. To the best of our knowledge, we are
the first to propose an event-based passive NLOS imaging method. We acquire
asynchronous event-based data which contains detailed dynamic information of
the NLOS target and efficiently mitigate the speckle degradation caused by
movement. In addition, we create the first event-based NLOS imaging dataset,
NLOS-ES, in which event-based features are extracted via a time-surface
representation. We compare reconstructions from event-based data with those from
frame-based data. The event-based method performs well on PSNR and LPIPS,
scoring 20% and 10% better, respectively, than the frame-based method, while
its data volume is only 2% of the traditional method's.
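The abstract names a time-surface representation as the event-feature extractor. As a rough illustration (not the authors' code), the sketch below renders a time surface from (x, y, t, p) events using the common exponential-decay formulation; the function name and the decay constant tau are assumptions for illustration only.

```python
import numpy as np

def time_surface(events, height, width, t_ref, tau=0.05):
    """Render an exponential-decay time surface at reference time t_ref.

    events: iterable of (x, y, t, p) tuples, t in seconds, p in {-1, +1}.
    Pixels that fired recently approach 1; stale pixels decay toward 0.
    tau is an illustrative decay constant, not a value from the paper.
    """
    last_t = np.full((height, width), -np.inf)  # latest timestamp per pixel
    for x, y, t, p in events:
        if t <= t_ref and t > last_t[y, x]:
            last_t[y, x] = t
    # exp(-inf) evaluates to 0, so pixels that never fired stay at 0.
    surface = np.exp(-(t_ref - last_t) / tau)
    return surface
```

In practice separate surfaces are often kept per polarity and stacked as channels; the exact choices made for NLOS-ES are not specified in the summary above.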
Related papers
- Evaluating Image-Based Face and Eye Tracking with Event Cameras [9.677797822200965]
Event Cameras, also known as neuromorphic sensors, capture changes in local light intensity at the pixel level, producing asynchronously generated data termed "events".
This data format mitigates common issues observed in conventional cameras, like under-sampling when capturing fast-moving objects.
We evaluate the viability of integrating conventional algorithms with event-based data, transformed into a frame format.
arXiv Detail & Related papers (2024-08-19T20:27:08Z)
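The entry above evaluates converting asynchronous events into a frame format so conventional algorithms can consume them. A minimal sketch of one common conversion, accumulating signed polarity counts per pixel over a time window, is given below; the normalization and argument names are assumptions, not the paper's recipe.

```python
import numpy as np

def events_to_frame(xs, ys, ps, height, width):
    """Accumulate one time window of events into a signed-count frame.

    xs, ys: integer pixel coordinate arrays; ps: polarities in {-1, +1}.
    """
    frame = np.zeros((height, width), dtype=np.float32)
    np.add.at(frame, (ys, xs), ps)  # unbuffered scatter-add, one per event
    peak = np.abs(frame).max()
    # Scale to [-1, 1] so a frame-based model sees a bounded input.
    return frame / peak if peak > 0 else frame
```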
- Event-assisted Low-Light Video Object Segmentation [47.28027938310957]
Event cameras offer promise in enhancing object visibility and aiding VOS methods under low-light conditions.
This paper introduces a pioneering framework tailored for low-light VOS, leveraging event camera data to elevate segmentation accuracy.
arXiv Detail & Related papers (2024-04-02T13:41:22Z)
- Neuromorphic Synergy for Video Binarization [54.195375576583864]
Bimodal objects serve as a visual form to embed information that can be easily recognized by vision systems.
Neuromorphic cameras offer new capabilities for alleviating motion blur, but it is non-trivial to first de-blur and then binarize the images in a real-time manner.
We propose an event-based binary reconstruction method that leverages the prior knowledge of the bimodal target's properties to perform inference independently in both event space and image space.
We also develop an efficient integration method to propagate this binary image to high frame rate binary video.
arXiv Detail & Related papers (2024-02-20T01:43:51Z)
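The binarization entry above propagates a binary image to high-frame-rate binary video using events. The sketch below shows only a naive version of that propagation step, under the loudly assumed rule that a positive event turns a pixel bright and a negative event turns it dark; the paper's actual integration method is more involved.

```python
import numpy as np

def propagate_binary(binary, xs, ys, ps):
    """Advance a binary frame by one event batch (naive assumed rule).

    binary: (H, W) array in {0, 1}; xs, ys, ps: one batch of events.
    """
    out = binary.copy()
    out[ys[ps > 0], xs[ps > 0]] = 1  # positive events flip pixels bright
    out[ys[ps < 0], xs[ps < 0]] = 0  # negative events flip pixels dark
    return out
```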
- Relating Events and Frames Based on Self-Supervised Learning and Uncorrelated Conditioning for Unsupervised Domain Adaptation [23.871860648919593]
Event-based cameras provide accurate and high temporal resolution measurements for performing computer vision tasks.
Despite their advantages, utilizing deep learning for event-based vision encounters a significant obstacle due to the scarcity of annotated data.
We propose a new algorithm tailored for adapting a deep neural network trained on annotated frame-based data to generalize well on event-based unannotated data.
arXiv Detail & Related papers (2024-01-02T05:10:08Z)
- Asynchronous Optimisation for Event-based Visual Odometry [53.59879499700895]
Event cameras open up new possibilities for robotic perception due to their low latency and high dynamic range.
We focus on event-based visual odometry (VO).
We propose an asynchronous structure-from-motion optimisation back-end.
arXiv Detail & Related papers (2022-03-02T11:28:47Z)
- Bridging the Gap between Events and Frames through Unsupervised Domain Adaptation [57.22705137545853]
We propose a task transfer method that allows models to be trained directly with labeled images and unlabeled event data.
We leverage the generative event model to split event features into content and motion features.
Our approach unlocks the vast amount of existing image datasets for the training of event-based neural networks.
arXiv Detail & Related papers (2021-09-06T17:31:37Z)
- Learning Monocular Dense Depth from Events [53.078665310545745]
Event cameras produce brightness changes in the form of a stream of asynchronous events instead of intensity frames.
Recent learning-based approaches have been applied to event-based data for tasks such as monocular depth prediction.
We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods.
arXiv Detail & Related papers (2020-10-16T12:36:23Z)
- End-to-end Learning of Object Motion Estimation from Retinal Events for Event-based Object Tracking [35.95703377642108]
We propose a novel deep neural network to learn and regress a parametric object-level motion/transform model for event-based object tracking.
To achieve this goal, we propose a synchronous Time-Surface with Linear Time Decay representation.
We feed the sequence of TSLTD frames to a novel Retinal Motion Regression Network (RMRNet) to perform end-to-end 5-DoF object motion regression.
arXiv Detail & Related papers (2020-02-14T08:19:50Z)
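TSLTD, introduced in the preceding entry and adapted as ATSLTD in the next, replaces the exponential decay of a standard time surface with a linear one over a fixed window. A plausible sketch follows, with the window handling and zero-length guard as assumptions for illustration.

```python
import numpy as np

def tsltd_frame(events, height, width, t_start, t_end):
    """Time surface with linear time decay over [t_start, t_end].

    events: iterable of (x, y, t, p); events at t_end weigh 1,
    events at t_start weigh 0, rising linearly in between.
    """
    surface = np.zeros((height, width), dtype=np.float32)
    span = max(t_end - t_start, 1e-9)  # guard against a zero-length window
    for x, y, t, p in events:
        w = (t - t_start) / span
        surface[y, x] = max(surface[y, x], w)  # keep the newest event's weight
    return surface
```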
- Asynchronous Tracking-by-Detection on Adaptive Time Surfaces for Event-based Object Tracking [87.0297771292994]
We propose an Event-based Tracking-by-Detection (ETD) method for generic bounding box-based object tracking.
To achieve this goal, we present an Adaptive Time-Surface with Linear Time Decay (ATSLTD) event-to-frame conversion algorithm.
We compare the proposed ETD method with seven popular object tracking methods based on conventional or event cameras, as well as two variants of ETD.
arXiv Detail & Related papers (2020-02-13T15:58:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.