An End-to-End Autofocus Camera for Iris on the Move
- URL: http://arxiv.org/abs/2106.15069v1
- Date: Tue, 29 Jun 2021 03:00:39 GMT
- Title: An End-to-End Autofocus Camera for Iris on the Move
- Authors: Leyuan Wang, Kunbo Zhang, Yunlong Wang, Zhenan Sun
- Abstract summary: In this paper, we introduce a novel rapid autofocus camera for active refocusing of the iris area of moving objects using a focus-tunable lens.
Our end-to-end computational algorithm can predict the best focus position from a single blurred image and generate a lens diopter control signal automatically.
The results demonstrate the advantages of our proposed camera for biometric perception in static and dynamic scenes.
- Score: 48.14011526385088
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: For distant iris recognition, a long focal length lens is generally used to
ensure the resolution of iris images, which reduces the depth of field and leads
to potential defocus blur. To accommodate users at different distances, it is
necessary to control focus quickly and accurately, and for users in motion, the
camera is expected to maintain correct focus on the iris area continuously. In
this paper, we introduce a novel rapid autofocus camera for active refocusing
of the iris area of moving objects using a focus-tunable lens. Our end-to-end
computational algorithm can predict the best focus position from a single
blurred image and generate a lens diopter control signal automatically. This
scene-based active manipulation method enables real-time focus tracking of the
iris area of a moving object. We built a testing bench to collect real-world
focal stacks for evaluation of the autofocus methods. Our camera has reached an
autofocus speed of over 50 fps. The results demonstrate the advantages of our
proposed camera for biometric perception in static and dynamic scenes. The code
is available at https://github.com/Debatrix/AquulaCam.
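
As a rough sketch of the single-shot autofocus loop described above (a hypothetical illustration, not the authors' AquulaCam implementation; the network architecture and the lens.set_diopter driver call are assumptions), one closed-loop iteration in PyTorch might look like:

# Minimal sketch of a scene-based autofocus loop: a small CNN regresses a
# lens diopter offset from one blurred frame, and the camera commands a
# focus-tunable lens with the result. All names here are hypothetical.
import torch
import torch.nn as nn

class FocusPredictor(nn.Module):
    """Hypothetical CNN regressing a lens diopter offset from one blurred frame."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # single scalar: predicted diopter offset

    def forward(self, frame):  # frame: (B, 1, H, W) grayscale image tensor
        return self.head(self.features(frame).flatten(1))

def autofocus_step(model, frame, current_diopter, lens):
    """One closed-loop iteration: predict an offset, command the tunable lens."""
    with torch.no_grad():
        offset = model(frame).item()
    target = current_diopter + offset
    lens.set_diopter(target)  # hypothetical focus-tunable lens driver call
    return target

Because each captured frame yields a fresh diopter command without a focus sweep, repeating this step per frame is what makes real-time focus tracking of a moving iris feasible at high frame rates.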
Related papers
- $\text{DC}^2$: Dual-Camera Defocus Control by Learning to Refocus [38.24734623691387]
We propose a system for defocus control for synthetically varying camera aperture, focus distance and arbitrary defocus effects.
Our key insight is to leverage real-world smartphone camera dataset by using image refocus as a proxy task for learning to control defocus.
We demonstrate creative post-capture defocus control enabled by our method, including tilt-shift and content-based defocus effects.
arXiv Detail & Related papers (2023-04-06T17:59:58Z)
- Improving Fast Auto-Focus with Event Polarity [5.376511424333543]
This paper presents a new high-speed and accurate event-based focusing algorithm.
Experiments on the public event-based autofocus dataset (EAD) show the robustness of the model.
Precise focus with less than one depth of focus is achieved within 0.004 seconds on our self-built high-speed focusing platform.
arXiv Detail & Related papers (2023-03-15T13:36:13Z)
- Deep Depth from Focal Stack with Defocus Model for Camera-Setting Invariance [19.460887007137607]
We propose a learning-based depth from focus/defocus (DFF) which takes a focal stack as input for estimating scene depth.
We show that our method is robust against a synthetic-to-real domain gap, and exhibits state-of-the-art performance.
arXiv Detail & Related papers (2022-02-26T04:21:08Z)
- Event Guided Depth Sensing [50.997474285910734]
We present an efficient bio-inspired event-camera-driven depth estimation algorithm.
In our approach, we illuminate areas of interest densely, depending on the scene activity detected by the event camera.
We show the feasibility of our approach in simulated autonomous driving sequences and real indoor environments.
arXiv Detail & Related papers (2021-10-20T11:41:11Z)
- Defocus Map Estimation and Deblurring from a Single Dual-Pixel Image [54.10957300181677]
We present a method that takes as input a single dual-pixel image, and simultaneously estimates the image's defocus map.
Our approach improves upon prior works for both defocus map estimation and blur removal, despite being entirely unsupervised.
arXiv Detail & Related papers (2021-10-12T00:09:07Z)
- Geometric Scene Refocusing [9.198471344145092]
We study the fine characteristics of images with a shallow depth-of-field in the context of focal stacks.
We identify in-focus pixels, dual-focus pixels, pixels that exhibit bokeh and spatially-varying blur kernels between focal slices.
We present a comprehensive algorithm for post-capture refocusing in a geometrically correct manner.
arXiv Detail & Related papers (2020-12-20T06:33:55Z)
- Defocus Blur Detection via Depth Distillation [64.78779830554731]
We introduce depth information into defocus blur detection (DBD) for the first time.
In detail, we learn the defocus blur from ground truth and the depth distilled from a well-trained depth estimation network.
Our approach outperforms 11 other state-of-the-art methods on two popular datasets.
arXiv Detail & Related papers (2020-07-16T04:58:09Z)
- Rendering Natural Camera Bokeh Effect with Deep Learning [95.86933125733673]
Bokeh is an important artistic effect used to highlight the main object of interest in a photo.
Mobile cameras are unable to produce shallow depth-of-field photos due to a very small aperture diameter of their optics.
We propose to learn a realistic shallow focus technique directly from the photos produced by DSLR cameras.
arXiv Detail & Related papers (2020-06-10T07:28:06Z)
- Defocus Deblurring Using Dual-Pixel Data [41.201653787083735]
Defocus blur arises in images that are captured with a shallow depth of field due to the use of a wide aperture.
We propose an effective defocus deblurring method that exploits data available on dual-pixel (DP) sensors found on most modern cameras.
arXiv Detail & Related papers (2020-05-01T10:38:00Z)
- Rapid Whole Slide Imaging via Learning-based Two-shot Virtual Autofocusing [57.90239401665367]
Whole slide imaging (WSI) is an emerging technology for digital pathology.
We propose the concept of virtual autofocusing, which does not rely on mechanical adjustment to conduct refocusing.
arXiv Detail & Related papers (2020-03-14T13:40:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.