DualFocus: Depth from Focus with Spatio-Focal Dual Variational Constraints
- URL: http://arxiv.org/abs/2509.21992v1
- Date: Fri, 26 Sep 2025 07:15:36 GMT
- Title: DualFocus: Depth from Focus with Spatio-Focal Dual Variational Constraints
- Authors: Sungmin Woo, Sangyoun Lee
- Abstract summary: We present DualFocus, a novel DFF framework that leverages the focal stack's unique gradient patterns induced by focus variation. We show that DualFocus consistently outperforms state-of-the-art methods in both depth accuracy and perceptual quality.
- Score: 26.266318338511876
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Depth-from-Focus (DFF) enables precise depth estimation by analyzing focus cues across a stack of images captured at varying focal lengths. While recent learning-based approaches have advanced this field, they often struggle in complex scenes with fine textures or abrupt depth changes, where focus cues may become ambiguous or misleading. We present DualFocus, a novel DFF framework that leverages the focal stack's unique gradient patterns induced by focus variation, jointly modeling focus changes over spatial and focal dimensions. Our approach introduces a variational formulation with dual constraints tailored to DFF: spatial constraints exploit gradient pattern changes across focus levels to distinguish true depth edges from texture artifacts, while focal constraints enforce unimodal, monotonic focus probabilities aligned with physical focus behavior. These inductive biases improve robustness and accuracy in challenging regions. Comprehensive experiments on four public datasets demonstrate that DualFocus consistently outperforms state-of-the-art methods in both depth accuracy and perceptual quality.
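The focal constraint described in the abstract (unimodal, monotonic focus probabilities aligned with physical focus behavior) can be illustrated with a minimal NumPy sketch. This is a hypothetical penalty formulation for intuition only, not the paper's actual loss: given a per-pixel probability profile over the focal stack, it measures how far the profile deviates from rising monotonically to a single peak and falling monotonically afterward.

```python
import numpy as np

def unimodality_penalty(p):
    """Return 0 iff the focus-probability profile p (one pixel,
    one value per focal slice) is unimodal: non-decreasing up to
    its peak and non-increasing after it. Otherwise return the
    total magnitude of monotonicity violations."""
    k = int(np.argmax(p))              # predicted in-focus slice
    diffs = np.diff(p)                 # successive differences along the focal axis
    # Before the peak, differences should be >= 0; after it, <= 0.
    rise_violation = np.sum(np.clip(-diffs[:k], 0.0, None))
    fall_violation = np.sum(np.clip(diffs[k:], 0.0, None))
    return float(rise_violation + fall_violation)

# A unimodal profile (single peak) incurs no penalty;
# a multimodal profile (ambiguous focus cue) is penalized.
unimodal = np.array([0.05, 0.15, 0.40, 0.25, 0.15])
bimodal = np.array([0.30, 0.05, 0.30, 0.05, 0.30])
```

A loss built from such a penalty, averaged over pixels, would push the network's focus probabilities toward the physically plausible single-peak shape the paper describes.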
Related papers
- Robust Shape from Focus via Multiscale Directional Dilated Laplacian and Recurrent Network [1.7188280334580195]
Shape-from-Focus (SFF) is a passive depth estimation technique that infers scene depth by analyzing focus variations in a focal stack. We propose a hybrid framework that computes multi-scale focus volumes using Directional Dilated Laplacian kernels. Our approach achieves superior accuracy and generalization across diverse focal conditions.
arXiv Detail & Related papers (2025-12-11T10:19:52Z) - Fine-grained Defocus Blur Control for Generative Image Models [66.30016220484394]
Current text-to-image diffusion models excel at generating diverse, high-quality images. We introduce a novel text-to-image diffusion framework that leverages camera metadata. Our model enables superior fine-grained control without altering the depicted scene.
arXiv Detail & Related papers (2025-10-07T17:59:15Z) - DiffCamera: Arbitrary Refocusing on Images [55.948229011478304]
We propose DiffCamera, a model that enables flexible refocusing of a created image conditioned on an arbitrary new focus point and a blur level. Experiments demonstrate that DiffCamera supports stable refocusing across a wide range of scenes, providing unprecedented control over DoF adjustments for photography and generative AI applications.
arXiv Detail & Related papers (2025-09-30T17:48:23Z) - Dark Channel-Assisted Depth-from-Defocus from a Single Image [4.005483185111993]
We estimate scene depth from a single defocus-blurred image using the dark channel as a complementary cue. Our method uses the relationship between local defocus blur and contrast variations as depth cues to improve scene structure estimation.
arXiv Detail & Related papers (2025-06-07T03:49:26Z) - Adjust Your Focus: Defocus Deblurring From Dual-Pixel Images Using Explicit Multi-Scale Cross-Correlation [1.661922907889139]
Defocus blur is a common problem in photography. Recent work exploited dual-pixel (DP) image information to solve the problem. We propose an explicit cross-correlation between the two DP views to guide the network for appropriate deblurring.
arXiv Detail & Related papers (2025-02-16T05:55:57Z) - Depth and DOF Cues Make A Better Defocus Blur Detector [27.33757097343283]
Defocus blur detection (DBD) separates in-focus and out-of-focus regions in an image.
Previous approaches mistook in-focus homogeneous areas for defocus blur regions.
We propose an approach called D-DFFNet, which incorporates depth and DOF cues in an implicit manner.
arXiv Detail & Related papers (2023-06-20T07:03:37Z) - Fully Self-Supervised Depth Estimation from Defocus Clue [79.63579768496159]
We propose a self-supervised framework that estimates depth purely from a sparse focal stack.
We show that our framework circumvents the need for depth and all-in-focus (AIF) image ground truth, and achieves superior predictions.
arXiv Detail & Related papers (2023-03-19T19:59:48Z) - DAQE: Enhancing the Quality of Compressed Images by Finding the Secret of Defocus [52.795238584413]
Existing quality enhancement approaches for compressed images neglect the inherent characteristic of defocus, resulting in inferior performance.
We propose DAQE, a novel dynamic region-based deep learning architecture that considers the region-wise defocus differences of compressed images in two aspects.
The DAQE approach learns to separately enhance diverse texture patterns for the regions with different defocus values, such that texture-wise one-on-one enhancement can be achieved.
arXiv Detail & Related papers (2022-11-20T14:08:47Z) - Deep Depth from Focus with Differential Focus Volume [17.505649653615123]
We propose a convolutional neural network (CNN) to find the best-focused pixels in a focal stack and infer depth from the focus estimation.
The key innovation of the network is the novel deep differential focus volume (DFV).
arXiv Detail & Related papers (2021-12-03T04:49:51Z) - A learning-based view extrapolation method for axial super-resolution [52.748944517480155]
Axial light field resolution refers to the ability to distinguish features at different depths by refocusing.
We propose a learning-based method to extrapolate novel views from axial volumes of sheared epipolar plane images.
arXiv Detail & Related papers (2021-03-11T07:22:13Z) - Defocus Blur Detection via Depth Distillation [64.78779830554731]
We introduce depth information into DBD for the first time.
In detail, we learn the defocus blur from ground truth and the depth distilled from a well-trained depth estimation network.
Our approach outperforms 11 other state-of-the-art methods on two popular datasets.
arXiv Detail & Related papers (2020-07-16T04:58:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.