Improving Fast Auto-Focus with Event Polarity
- URL: http://arxiv.org/abs/2303.08611v2
- Date: Mon, 3 Jul 2023 04:34:13 GMT
- Title: Improving Fast Auto-Focus with Event Polarity
- Authors: Yuhan Bao, Lei Sun, Yuqin Ma, Diyang Gu, Kaiwei Wang
- Abstract summary: This paper presents a new high-speed and accurate event-based focusing algorithm.
Experiments on the public event-based autofocus dataset (EAD) show the robustness of the model.
Precise focus with less than one depth of focus is achieved within 0.004 seconds on our self-built high-speed focusing platform.
- Score: 5.376511424333543
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fast and accurate auto-focus in adverse conditions remains an arduous task.
The emergence of event cameras has opened up new possibilities for addressing
the challenge. This paper presents a new high-speed and accurate event-based
focusing algorithm. Specifically, the symmetrical relationship between the
event polarities in focusing is investigated, and the event-based focus
evaluation function is proposed based on the principles of the event cameras
and the imaging model in the focusing process. Comprehensive experiments on the
public event-based autofocus dataset (EAD) show the robustness of the model.
Furthermore, precise focus with less than one depth of focus is achieved within
0.004 seconds on our self-built high-speed focusing platform. The dataset and
code will be made publicly available.
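The abstract describes a focus evaluation function built on the symmetrical relationship between event polarities, but does not give its formula. A minimal sketch of one plausible polarity-balance score (the function name and scoring rule below are illustrative assumptions, not the authors' actual evaluation function):

```python
import numpy as np

def polarity_symmetry_score(polarities: np.ndarray) -> float:
    """Toy focus score in [0, 1]: peaks at 1.0 when positive and negative
    event counts balance, as one might expect near focus if the two
    polarities behave symmetrically there."""
    pos = np.count_nonzero(polarities > 0)
    neg = np.count_nonzero(polarities < 0)
    total = pos + neg
    if total == 0:
        return 0.0
    return 1.0 - abs(pos - neg) / total
```

Sweeping the lens and selecting the position that maximizes such a score would be the natural use; the paper's actual evaluation function may differ substantially.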
Related papers
- Learning Monocular Depth from Focus with Event Focal Stack [6.200121342586474]
We propose the EDFF Network to estimate sparse depth from the Event Focal Stack.
We use the event voxel grid to encode intensity change information and project the event time surface into the depth domain.
A Focal-Distance-guided Cross-Modal Attention Module is presented to fuse the information mentioned above.
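The summary above mentions encoding intensity-change information in an event voxel grid. A common construction (a sketch only; the bin count and bilinear time weighting are conventional choices, not details taken from this paper) accumulates signed polarities into temporal bins:

```python
import numpy as np

def event_voxel_grid(ts, xs, ys, ps, bins, H, W):
    """Accumulate events (timestamps ts, pixel coords xs/ys, signed
    polarities ps) into a (bins, H, W) voxel grid, splitting each
    event's polarity bilinearly between its two nearest time bins."""
    grid = np.zeros((bins, H, W), dtype=np.float32)
    span = max(ts.max() - ts.min(), 1e-9)
    t = (ts - ts.min()) / span * (bins - 1)
    t0 = np.floor(t).astype(int)
    frac = t - t0
    for b, x, y, p, f in zip(t0, xs, ys, ps, frac):
        grid[b, y, x] += p * (1.0 - f)
        if b + 1 < bins:
            grid[b + 1, y, x] += p * f
    return grid
```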
arXiv Detail & Related papers (2024-05-11T07:54:49Z)
- Towards Real-World Focus Stacking with Deep Learning [97.34754533628322]
We introduce a new dataset consisting of 94 high-resolution bursts of raw images with focus bracketing.
This dataset is used to train the first deep learning algorithm for focus stacking capable of handling bursts of sufficient length for real-world applications.
arXiv Detail & Related papers (2023-11-29T17:49:33Z)
- Autofocus for Event Cameras [21.972388081563267]
We develop a novel event-based autofocus framework consisting of an event-specific focus measure called event rate (ER) and a robust search strategy called event-based golden search (EGS).
The experiments on this dataset and additional real-world scenarios demonstrated the superiority of our method over state-of-the-art approaches in terms of efficiency and accuracy.
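The event rate (ER) measure and event-based golden search (EGS) named above pair a scalar focus score with a golden-section search over lens positions. A generic golden-section maximizer sketches the search half of that idea (the `event_rate` callable it would be applied to is hypothetical):

```python
import math

def golden_search_max(f, lo, hi, tol=1e-3):
    """Golden-section search for the maximum of a unimodal
    function f on [lo, hi]; returns the bracketing midpoint."""
    inv_phi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while abs(b - a) > tol:
        if f(c) > f(d):
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2
```

Usage would look like `best_pos = golden_search_max(event_rate, lens_min, lens_max)`, where `event_rate(pos)` counts events per unit time at a given lens position.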
arXiv Detail & Related papers (2022-03-23T10:46:33Z)
- Asynchronous Optimisation for Event-based Visual Odometry [53.59879499700895]
Event cameras open up new possibilities for robotic perception due to their low latency and high dynamic range.
We focus on event-based visual odometry (VO).
We propose an asynchronous structure-from-motion optimisation back-end.
arXiv Detail & Related papers (2022-03-02T11:28:47Z)
- Deep Depth from Focus with Differential Focus Volume [17.505649653615123]
We propose a convolutional neural network (CNN) to find the best-focused pixels in a focal stack and infer depth from the focus estimation.
The key innovation of the network is the novel deep differential focus volume (DFV).
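The summary names a deep differential focus volume (DFV) without detailing its construction. A crude, non-deep analogue (per-slice Laplacian sharpness differenced along the stack axis; every choice here is an illustrative assumption, not the paper's architecture):

```python
import numpy as np

def differential_focus_volume(focal_stack):
    """focal_stack: (S, H, W) array of images at S focus settings.
    Build a focus volume from per-pixel Laplacian magnitude, then
    take its first difference along the stack axis."""
    S, H, W = focal_stack.shape
    fv = np.zeros_like(focal_stack)
    for s in range(S):
        img = focal_stack[s]
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
               + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
        fv[s] = np.abs(lap)
    dfv = np.diff(fv, axis=0)  # (S-1, H, W)
    return fv, dfv
```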
arXiv Detail & Related papers (2021-12-03T04:49:51Z)
- Defocus Map Estimation and Deblurring from a Single Dual-Pixel Image [54.10957300181677]
We present a method that takes a single dual-pixel image as input and simultaneously estimates the image's defocus map and removes the defocus blur.
Our approach improves upon prior works for both defocus map estimation and blur removal, despite being entirely unsupervised.
arXiv Detail & Related papers (2021-10-12T00:09:07Z)
- Decentralized Autofocusing System with Hierarchical Agents [2.7716102039510564]
We propose a hierarchical multi-agent deep reinforcement learning approach for intelligently controlling the camera and the lens focusing settings.
The algorithm relies on the latent representation of the camera's stream and is thus the first method to allow completely no-reference tuning of the camera.
arXiv Detail & Related papers (2021-08-29T13:45:15Z)
- An End-to-End Autofocus Camera for Iris on the Move [48.14011526385088]
In this paper, we introduce a novel rapid autofocus camera for active refocusing of the iris area of moving objects using a focus-tunable lens.
Our end-to-end computational algorithm can predict the best focus position from one single blurred image and generate a lens diopter control signal automatically.
The results demonstrate the advantages of our proposed camera for biometric perception in static and dynamic scenes.
arXiv Detail & Related papers (2021-06-29T03:00:39Z)
- Onfocus Detection: Identifying Individual-Camera Eye Contact from Unconstrained Images [81.64699115587167]
Onfocus detection aims at identifying whether the focus of the individual captured by a camera is on the camera or not.
We build a large-scale onfocus detection dataset, named OnFocus Detection In the Wild (OFDIW).
We propose a novel end-to-end deep model, i.e., the eye-context interaction inferring network (ECIIN) for onfocus detection.
arXiv Detail & Related papers (2021-03-29T03:29:09Z)
- Rapid Whole Slide Imaging via Learning-based Two-shot Virtual Autofocusing [57.90239401665367]
Whole slide imaging (WSI) is an emerging technology for digital pathology.
We propose the concept of virtual autofocusing, which does not rely on mechanical adjustment to conduct refocusing.
arXiv Detail & Related papers (2020-03-14T13:40:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.