One-Step Event-Driven High-Speed Autofocus
- URL: http://arxiv.org/abs/2503.01214v1
- Date: Mon, 03 Mar 2025 06:25:09 GMT
- Title: One-Step Event-Driven High-Speed Autofocus
- Authors: Yuhan Bao, Shaohua Gao, Wenyong Li, Kaiwei Wang
- Abstract summary: The Event Laplacian Product (ELP) focus detection function combines event data with grayscale Laplacian information, redefining focus search as a detection task. This innovation enables the first one-step event-driven autofocus, cutting focusing time by up to two-thirds and reducing focusing error by 24 times on the DAVIS346 dataset and 22 times on the EVK4 dataset.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High-speed autofocus in extreme scenes remains a significant challenge. Traditional methods rely on repeated sampling around the focus position, resulting in "focus hunting". Event-driven methods have advanced focusing speed and improved performance in low-light conditions; however, current approaches still require at least one lengthy round of "focus hunting", involving the collection of a complete focus stack. We introduce the Event Laplacian Product (ELP) focus detection function, which combines event data with grayscale Laplacian information, redefining focus search as a detection task. This innovation enables the first one-step event-driven autofocus, cutting focusing time by up to two-thirds and reducing focusing error by 24 times on the DAVIS346 dataset and 22 times on the EVK4 dataset. Additionally, we present an autofocus pipeline tailored for event-only cameras, achieving accurate results across a range of challenging motion and lighting conditions. All datasets and code will be made publicly available.
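The abstract describes ELP only at a high level, so the following Python sketch is illustrative rather than the paper's method: it assumes ELP is the frame-wise sum of the per-pixel product between accumulated signed event polarities and the grayscale Laplacian, and that this score changes sign as the lens sweeps through focus. The function names and the zero-crossing detection rule are assumptions.

```python
import numpy as np
from scipy.ndimage import laplace

def elp_score(event_frame: np.ndarray, gray_frame: np.ndarray) -> float:
    """Illustrative Event Laplacian Product (ELP) score.

    event_frame: per-pixel sum of signed event polarities accumulated over
                 a short time window during a single focus sweep.
    gray_frame:  grayscale image captured over the same window.
    """
    lap = laplace(gray_frame.astype(np.float64))  # second-order edge response
    return float(np.sum(event_frame * lap))       # correlate event activity with edges

def detect_focus(elp_trace: np.ndarray) -> int:
    """Treat focus search as detection (assumption: the ELP trace changes
    sign at the in-focus position, so the first zero-crossing marks focus
    and no second "hunting" pass is needed)."""
    crossings = np.where(np.diff(np.sign(elp_trace)) != 0)[0]
    return int(crossings[0]) if crossings.size else int(np.argmax(np.abs(elp_trace)))
```

In a one-step sweep, one would evaluate `elp_score` once per event window and drive the lens directly to the index returned by `detect_focus`, instead of sampling repeatedly around the peak.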
Related papers
- Dual-Camera All-in-Focus Neural Radiance Fields [54.19848043744996]
We present the first framework capable of synthesizing the all-in-focus neural radiance field (NeRF) from inputs without manual refocusing.
We use the dual cameras found on smartphones, where the ultra-wide camera has a wider depth-of-field (DoF) and the main camera has a higher resolution.
The camera pair preserves the high-fidelity details from the main camera and uses the ultra-wide camera's deep DoF as a reference for all-in-focus restoration.
arXiv Detail & Related papers (2025-04-23T11:55:02Z) - SparseFocus: Learning-based One-shot Autofocus for Microscopy with Sparse Content [21.268550523841117]
Autofocus is necessary for high-throughput and real-time scanning in microscopic imaging. Recent learning-based approaches have demonstrated remarkable efficacy in a one-shot setting. We propose a content-based solution, named SparseFocus, featuring a novel two-stage pipeline.
arXiv Detail & Related papers (2025-02-10T13:31:32Z) - Towards Real-World Focus Stacking with Deep Learning [97.34754533628322]
We introduce a new dataset consisting of 94 high-resolution bursts of raw images with focus bracketing.
This dataset is used to train the first deep learning algorithm for focus stacking capable of handling bursts of sufficient length for real-world applications.
arXiv Detail & Related papers (2023-11-29T17:49:33Z) - Improving Fast Auto-Focus with Event Polarity [5.376511424333543]
This paper presents a new high-speed and accurate event-based focusing algorithm.
Experiments on the public event-based autofocus dataset (EAD) show the robustness of the model.
Precise focus, with an error of less than one depth of focus, is achieved within 0.004 seconds on our self-built high-speed focusing platform.
arXiv Detail & Related papers (2023-03-15T13:36:13Z) - Learning to See Through with Events [37.19232535463858]
This paper presents an Event-based SAI (E-SAI) method by relying on asynchronous events with extremely low latency and high dynamic range.
The collected events are first refocused by a Re-focus-Net module to align in-focus events while scattering out off-focus ones.
A hybrid network composed of spiking neural networks (SNNs) and convolutional neural networks (CNNs) is proposed to encode the foreground-temporal information from the refocused events and reconstruct a visual image of the occluded scenes.
arXiv Detail & Related papers (2022-12-05T12:51:22Z) - Autofocus for Event Cameras [21.972388081563267]
We develop a novel event-based autofocus framework consisting of an event-specific focus measure called event rate (ER) and a robust search strategy called event-based golden search (EGS); a generic search sketch follows this entry.
Experiments on this dataset and in additional real-world scenarios demonstrate the superiority of our method over state-of-the-art approaches in terms of efficiency and accuracy.
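ER and EGS are only named in this summary, so here is a minimal Python sketch of a golden-section search driven by a unimodal focus measure such as an event-rate score. The `measure` callback, bracket, and tolerance are placeholders; the paper's EGS may differ in how it handles sensor noise and termination.

```python
def golden_search(measure, lo: float, hi: float, tol: float = 1e-2) -> float:
    """Golden-section search for the lens position maximizing a unimodal
    focus measure (e.g., an event-rate score). Textbook version; probes
    are re-evaluated each iteration for brevity rather than cached."""
    phi = (5 ** 0.5 - 1) / 2          # inverse golden ratio, ~0.618
    a, b = lo, hi
    while b - a > tol:
        c = b - phi * (b - a)         # interior probe points, c < d
        d = a + phi * (b - a)
        if measure(c) > measure(d):   # peak lies in [a, d]
            b = d
        else:                         # peak lies in [c, b]
            a = c
    return (a + b) / 2
```

A hypothetical driver might call `golden_search(event_rate_at, 0.0, 10.0)`, where `event_rate_at` moves the lens to a position and counts events over a fixed window.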
arXiv Detail & Related papers (2022-03-23T10:46:33Z) - MEFNet: Multi-scale Event Fusion Network for Motion Deblurring [62.60878284671317]
Traditional frame-based cameras inevitably suffer from motion blur due to long exposure times.
As a kind of bio-inspired camera, the event camera records intensity changes asynchronously with high temporal resolution.
In this paper, we rethink the event-based image deblurring problem and unfold it into an end-to-end two-stage image restoration network.
arXiv Detail & Related papers (2021-11-30T23:18:35Z) - Defocus Map Estimation and Deblurring from a Single Dual-Pixel Image [54.10957300181677]
We present a method that takes as input a single dual-pixel image and simultaneously estimates the image's defocus map and removes the defocus blur.
Our approach improves upon prior works for both defocus map estimation and blur removal, despite being entirely unsupervised.
arXiv Detail & Related papers (2021-10-12T00:09:07Z) - Bridging the Gap between Events and Frames through Unsupervised Domain Adaptation [57.22705137545853]
We propose a task transfer method that allows models to be trained directly with labeled images and unlabeled event data.
We leverage the generative event model to split event features into content and motion features.
Our approach unlocks the vast amount of existing image datasets for the training of event-based neural networks.
arXiv Detail & Related papers (2021-09-06T17:31:37Z) - An End-to-End Autofocus Camera for Iris on the Move [48.14011526385088]
In this paper, we introduce a novel rapid autofocus camera for active refocusing of the iris area of moving objects using a focus-tunable lens.
Our end-to-end computational algorithm can predict the best focus position from a single blurred image and automatically generate a lens diopter control signal; a toy sketch follows this entry.
The results demonstrate the advantages of our proposed camera for biometric perception in static and dynamic scenes.
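This entry describes a one-shot mapping from a single blurred image to a lens diopter control signal. The PyTorch sketch below shows the general shape of such a regressor; the architecture, input size, and scalar head are invented for illustration and are not the paper's network.

```python
import torch
import torch.nn as nn

class DiopterRegressor(nn.Module):
    """Toy one-shot focus regressor: blurred grayscale crop in, diopter
    command out. Illustrative only; not the paper's actual model."""
    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # global pooling to a 32-d code
        )
        self.head = nn.Linear(32, 1)              # scalar diopter control signal

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

# One blurred crop -> one lens command, with no focus sweep.
model = DiopterRegressor()
command = model(torch.randn(1, 1, 128, 128))
```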
arXiv Detail & Related papers (2021-06-29T03:00:39Z) - Rapid Whole Slide Imaging via Learning-based Two-shot Virtual Autofocusing [57.90239401665367]
Whole slide imaging (WSI) is an emerging technology for digital pathology.
We propose the concept of virtual autofocusing, which does not rely on mechanical adjustment to conduct refocusing.
arXiv Detail & Related papers (2020-03-14T13:40:33Z)