Time-Efficient Light-Field Acquisition Using Coded Aperture and Events
- URL: http://arxiv.org/abs/2403.07244v1
- Date: Tue, 12 Mar 2024 02:04:17 GMT
- Title: Time-Efficient Light-Field Acquisition Using Coded Aperture and Events
- Authors: Shuji Habuchi, Keita Takahashi, Chihiro Tsutake, Toshiaki Fujii,
Hajime Nagahara
- Abstract summary: Our method applies a sequence of coding patterns during a single exposure for an image frame.
The parallax information, which is related to the differences in coding patterns, is recorded as events.
The image frame and events, all of which are measured in a single exposure, are jointly used to computationally reconstruct a light field.
- Score: 16.130950260664285
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a computational imaging method for time-efficient light-field
acquisition that combines a coded aperture with an event-based camera.
Different from the conventional coded-aperture imaging method, our method
applies a sequence of coding patterns during a single exposure for an image
frame. The parallax information, which is related to the differences in coding
patterns, is recorded as events. The image frame and events, all of which are
measured in a single exposure, are jointly used to computationally reconstruct
a light field. We also designed an algorithm pipeline for our method that is
end-to-end trainable on the basis of deep optics and compatible with real
camera hardware. We experimentally showed that our method can achieve more
accurate reconstruction than several other imaging methods with a single
exposure. We also developed a hardware prototype with the potential to complete
the measurement on the camera within 22 msec and demonstrated that light fields
from real 3-D scenes can be obtained with convincing visual quality. Our
software and supplementary video are available from our project website.
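To make the measurement model concrete, the following is a minimal numpy sketch of how a sequence of aperture coding patterns applied within a single exposure could yield both an image frame and parallax-induced events; the array shapes, the simulate_measurement helper, and the contrast threshold are illustrative assumptions, not the authors' implementation.
```python
# Minimal sketch (not the authors' code) of the forward measurement described in the
# abstract: sub-aperture views are modulated by a sequence of aperture coding patterns
# within one exposure; the frame integrates all coded sub-exposures, while events record
# log-intensity changes caused by the pattern switches. Shapes/threshold are assumptions.
import numpy as np

def simulate_measurement(light_field, patterns, event_threshold=0.2, eps=1e-3):
    """light_field: (V, H, W) sub-aperture views; patterns: (T, V) aperture codes in [0, 1]."""
    T = patterns.shape[0]
    # Each coding pattern weights the sub-aperture views before they sum on the sensor.
    sub_frames = np.einsum('tv,vhw->thw', patterns, light_field) / T
    frame = sub_frames.sum(axis=0)  # the single image frame, shape (H, W)

    # Events: per-pixel log-intensity changes between consecutive coded sub-exposures;
    # only changes above the contrast threshold fire, and only their sign is kept.
    log_sub = np.log(sub_frames + eps)
    diffs = np.diff(log_sub, axis=0)  # shape (T-1, H, W)
    events = np.sign(diffs) * (np.abs(diffs) >= event_threshold)
    return frame, events

# Toy usage: 5x5 = 25 sub-aperture views of a 64x64 scene, 8 random binary patterns.
rng = np.random.default_rng(0)
lf = rng.uniform(0.1, 1.0, size=(25, 64, 64))
codes = rng.integers(0, 2, size=(8, 25)).astype(float)
frame, events = simulate_measurement(lf, codes)
print(frame.shape, events.shape)  # (64, 64) (7, 64, 64)
```
In the actual method, the coding patterns and the reconstruction network are trained jointly (deep optics), whereas this toy example uses fixed random patterns purely to illustrate the measurement.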
Related papers
- EF-3DGS: Event-Aided Free-Trajectory 3D Gaussian Splatting [76.02450110026747]
Event cameras, inspired by biological vision, record pixel-wise intensity changes asynchronously with high temporal resolution.
We propose Event-Aided Free-Trajectory 3DGS, which seamlessly integrates the advantages of event cameras into 3DGS.
We evaluate our method on the public Tanks and Temples benchmark and a newly collected real-world dataset, RealEv-DAVIS.
arXiv Detail & Related papers (2024-10-20T13:44:24Z)
- Reconstructing Continuous Light Field From Single Coded Image [7.937367109582907]
We propose a method for reconstructing a continuous light field of a target scene from a single observed image.
Joint aperture-exposure coding implemented in a camera enables effective embedding of 3-D scene information into an observed image.
NeRF-based neural rendering enables high quality view synthesis of a 3-D scene from continuous viewpoints.
arXiv Detail & Related papers (2023-11-16T07:59:01Z)
- Joint Video Multi-Frame Interpolation and Deblurring under Unknown Exposure Time [101.91824315554682]
In this work, we ambitiously aim at a more realistic and challenging task: joint video multi-frame interpolation and deblurring under unknown exposure time.
We first adopt a variant of supervised contrastive learning to construct an exposure-aware representation from input blurred frames.
We then build our video reconstruction network upon the exposure and motion representation by progressive exposure-adaptive convolution and motion refinement.
arXiv Detail & Related papers (2023-03-27T09:43:42Z)
- Multi-Event-Camera Depth Estimation and Outlier Rejection by Refocused Events Fusion [14.15744053080529]
Event cameras are bio-inspired sensors that offer advantages over traditional cameras.
We tackle the problem of event-based stereo 3D reconstruction for SLAM.
We develop fusion theory and apply it to design multi-camera 3D reconstruction algorithms.
arXiv Detail & Related papers (2022-07-21T14:19:39Z)
- Acquiring a Dynamic Light Field through a Single-Shot Coded Image [12.615509935080434]
We propose a method for compressively acquiring a dynamic light field (a 5-D volume) through a single-shot coded image (a 2-D measurement).
We designed an imaging model that synchronously applies aperture coding and pixel-wise exposure coding within a single exposure time (see the sketch after this entry).
The observed image is then fed to a convolutional neural network (CNN) for light-field reconstruction, which is jointly trained with the camera-side coding patterns.
arXiv Detail & Related papers (2022-04-26T06:00:02Z)
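As a rough illustration of the joint aperture and pixel-wise exposure coding mentioned in the entry above, here is a minimal numpy sketch that compresses a dynamic light field into a single 2-D image; the array shapes, code values, and the coded_snapshot helper are assumptions for illustration, not the paper's actual model.
```python
# Minimal sketch (assumed shapes, not the paper's implementation) of joint aperture and
# pixel-wise exposure coding: a dynamic light field (T time steps, V views, H x W pixels)
# is compressed into one 2-D image by weighting each view with a time-varying aperture
# code and each pixel with a time-varying shutter code, then integrating on the sensor.
import numpy as np

def coded_snapshot(dynamic_lf, aperture_codes, exposure_codes):
    """dynamic_lf: (T, V, H, W); aperture_codes: (T, V); exposure_codes: (T, H, W)."""
    # Weight each sub-aperture view by its aperture code at each time step...
    coded_views = np.einsum('tv,tvhw->thw', aperture_codes, dynamic_lf)
    # ...mask pixels with the per-pixel exposure code, then integrate over time.
    return (exposure_codes * coded_views).mean(axis=0)  # single 2-D measurement (H, W)

# Toy usage with random codes; a reconstruction CNN (not shown) would invert this mapping.
rng = np.random.default_rng(1)
lf_video = rng.uniform(0.0, 1.0, size=(4, 25, 32, 32))   # 4 frames, 5x5 views
ap = rng.uniform(0.0, 1.0, size=(4, 25))                  # aperture transmittance
ex = rng.integers(0, 2, size=(4, 32, 32)).astype(float)   # pixel-wise shutter
image = coded_snapshot(lf_video, ap, ex)
print(image.shape)  # (32, 32)
```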
- Event Guided Depth Sensing [50.997474285910734]
We present an efficient bio-inspired event-camera-driven depth estimation algorithm.
In our approach, we illuminate areas of interest densely, depending on the scene activity detected by the event camera.
We show the feasibility of our approach in simulated autonomous driving sequences and real indoor environments.
arXiv Detail & Related papers (2021-10-20T11:41:11Z)
- Real-time dense 3D Reconstruction from monocular video data captured by low-cost UAVs [0.3867363075280543]
Real-time 3D reconstruction enables fast dense mapping of the environment which benefits numerous applications, such as navigation or live evaluation of an emergency.
In contrast to most real-time capable approaches, our approach does not need an explicit depth sensor.
By exploiting the self-motion of the unmanned aerial vehicle (UAV) flying with oblique view around buildings, we estimate both camera trajectory and depth for selected images with enough novel content.
arXiv Detail & Related papers (2021-04-21T13:12:17Z)
- Relighting Images in the Wild with a Self-Supervised Siamese Auto-Encoder [62.580345486483886]
We propose a self-supervised method for image relighting of single view images in the wild.
The method is based on an auto-encoder which deconstructs an image into two separate encodings.
We train our model on large-scale datasets such as YouTube-8M and CelebA.
arXiv Detail & Related papers (2020-12-11T16:08:50Z)
- Event-based Stereo Visual Odometry [42.77238738150496]
We present a solution to the problem of visual odometry from the data acquired by a stereo event-based camera rig.
We seek to maximize the temporal consistency of stereo event-based data while using a simple and efficient representation.
arXiv Detail & Related papers (2020-07-30T15:53:28Z)
- Lightweight Multi-View 3D Pose Estimation through Camera-Disentangled Representation [57.11299763566534]
We present a solution to recover 3D pose from multi-view images captured with spatially calibrated cameras.
We exploit 3D geometry to fuse input images into a unified latent representation of pose, which is disentangled from camera view-points.
Our architecture then conditions the learned representation on camera projection operators to produce accurate per-view 2D detections.
arXiv Detail & Related papers (2020-04-05T12:52:29Z)
- DeProCams: Simultaneous Relighting, Compensation and Shape Reconstruction for Projector-Camera Systems [91.45207885902786]
We propose a novel end-to-end trainable model named DeProCams to learn the photometric and geometric mappings of ProCams.
DeProCams explicitly decomposes the projector-camera image mappings into three subprocesses: shading attributes estimation, rough direct light estimation and photorealistic neural rendering.
In our experiments, DeProCams shows clear advantages over prior methods, delivering promising quality while being fully differentiable.
arXiv Detail & Related papers (2020-03-06T05:49:16Z)