Generative Refocusing: Flexible Defocus Control from a Single Image
- URL: http://arxiv.org/abs/2512.16923v1
- Date: Thu, 18 Dec 2025 18:59:59 GMT
- Title: Generative Refocusing: Flexible Defocus Control from a Single Image
- Authors: Chun-Wei Tuan Mu, Jia-Bin Huang, Yu-Lun Liu
- Abstract summary: We introduce Generative Refocusing, a two-step process that uses DeblurNet to recover all-in-focus images from various inputs and BokehNet for creating controllable bokeh. Our experiments show we achieve top performance in defocus deblurring, bokeh synthesis, and refocusing benchmarks.
- Score: 12.798805351731668
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Depth-of-field control is essential in photography, but getting the focus right often takes several tries or requires special equipment. Single-image refocusing remains difficult: it involves both recovering sharp content and creating realistic bokeh. Current methods have significant drawbacks: they need all-in-focus inputs, depend on synthetic data from simulators, and offer limited control over aperture. We introduce Generative Refocusing, a two-step process that uses DeblurNet to recover all-in-focus images from various inputs and BokehNet for creating controllable bokeh. Our main innovation is semi-supervised training, which combines synthetic paired data with unpaired real bokeh images, using EXIF metadata to capture real optical characteristics beyond what simulators can provide. Our experiments show we achieve top performance in defocus deblurring, bokeh synthesis, and refocusing benchmarks. Additionally, Generative Refocusing allows text-guided adjustments and custom aperture shapes.
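As a rough illustration of the two-step process above, here is a minimal sketch in PyTorch. The module internals and the conditioning signature (`focus_depth`, `aperture`) are assumptions for illustration only, not the paper's actual architectures.

```python
import torch.nn as nn

class GenerativeRefocusPipeline(nn.Module):
    """Minimal sketch of the two-step refocusing process from the abstract.
    `deblur_net` and `bokeh_net` are placeholders standing in for the paper's
    DeblurNet and BokehNet; the conditioning inputs are assumed, not taken
    from the paper."""

    def __init__(self, deblur_net: nn.Module, bokeh_net: nn.Module):
        super().__init__()
        self.deblur_net = deblur_net  # arbitrary-focus image -> all-in-focus
        self.bokeh_net = bokeh_net    # all-in-focus + controls -> refocused image

    def forward(self, image, focus_depth, aperture):
        # Step 1: recover sharp content from a defocused or partially focused input.
        all_in_focus = self.deblur_net(image)
        # Step 2: re-render controllable bokeh around the requested focal plane.
        return self.bokeh_net(all_in_focus, focus_depth, aperture)
```

Per the abstract, training is semi-supervised: synthetic paired data is combined with unpaired real bokeh photographs whose EXIF metadata supplies optical characteristics beyond what simulators provide.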
Related papers
- Learning to Refocus with Video Diffusion Models [10.749713029715226]
We introduce a novel method for realistic post-capture refocusing using video diffusion models. From a single defocused image, our approach generates a perceptually accurate focal stack, represented as a video sequence. Our method consistently outperforms existing approaches in both perceptual quality and robustness across challenging scenarios.
arXiv Detail & Related papers (2025-12-22T19:29:57Z)
- BokehFlow: Depth-Free Controllable Bokeh Rendering via Flow Matching [33.101056425502584]
Bokeh rendering simulates the shallow depth-of-field effect in photography, enhancing visual aesthetics and guiding viewer attention to regions of interest. We propose BokehFlow, a framework for controllable bokeh rendering based on flow matching. BokehFlow directly synthesizes photorealistic bokeh effects from all-in-focus images, eliminating the need for depth inputs.
arXiv Detail & Related papers (2025-11-19T03:18:58Z)
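To make the flow-matching formulation behind BokehFlow concrete, here is a generic conditional flow-matching training step for this kind of image-to-image task; the `velocity_net` interface and variable names are illustrative assumptions, not the BokehFlow implementation.

```python
import torch

def flow_matching_step(velocity_net, sharp, bokeh, controls):
    """One conditional (rectified) flow-matching training step: regress the
    velocity of the straight path from a noise sample to the target bokeh
    image, conditioned on the all-in-focus input. Generic sketch only."""
    x0 = torch.randn_like(bokeh)                                 # source sample (pure noise)
    t = torch.rand(bokeh.size(0), 1, 1, 1, device=bokeh.device)  # per-sample time in [0, 1]
    xt = (1.0 - t) * x0 + t * bokeh                              # point on the straight path
    target_v = bokeh - x0                                        # constant velocity of that path
    pred_v = velocity_net(xt, t, sharp, controls)                # conditional velocity prediction
    return ((pred_v - target_v) ** 2).mean()                     # MSE regression loss
```

At inference, the learned velocity field is integrated from noise toward an image in a handful of ODE steps, which is what makes flow-matching samplers comparatively fast.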
- Fine-grained Defocus Blur Control for Generative Image Models [66.30016220484394]
Current text-to-image diffusion models excel at generating diverse, high-quality images. We introduce a novel text-to-image diffusion framework that leverages camera metadata. Our model enables superior fine-grained control without altering the depicted scene.
arXiv Detail & Related papers (2025-10-07T17:59:15Z)
- DiffCamera: Arbitrary Refocusing on Images [55.948229011478304]
We propose DiffCamera, a model that enables flexible refocusing of a created image conditioned on an arbitrary new focus point and a blur level. Experiments demonstrate that DiffCamera supports stable refocusing across a wide range of scenes, providing unprecedented control over DoF adjustments for photography and generative AI applications.
arXiv Detail & Related papers (2025-09-30T17:48:23Z)
- Bokehlicious: Photorealistic Bokeh Rendering with Controllable Apertures [51.16022611377722]
Bokeh rendering methods play a key role in creating the visually appealing, softly blurred backgrounds seen in professional photography. We propose Bokehlicious, a highly efficient network that provides intuitive control over Bokeh strength through an Aperture-Aware Attention mechanism. We present RealBokeh, a novel dataset featuring 23,000 high-resolution (24-MP) images captured by professional photographers.
arXiv Detail & Related papers (2025-03-20T12:00:45Z)
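The abstract does not detail the Aperture-Aware Attention mechanism, but one plausible reading is attention whose features are modulated by an embedded aperture value. The sketch below illustrates that reading with FiLM-style conditioning; it is a hypothetical construction, not the Bokehlicious design.

```python
import torch.nn as nn

class ApertureConditionedAttention(nn.Module):
    """Hypothetical aperture-conditioned attention block: a normalized
    aperture scalar is embedded and used to scale/shift token features
    before self-attention. Illustrative only."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        # dim must be divisible by heads for nn.MultiheadAttention.
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.aperture_mlp = nn.Sequential(
            nn.Linear(1, dim), nn.SiLU(), nn.Linear(dim, 2 * dim)
        )

    def forward(self, tokens, aperture):
        # tokens: (B, N, dim); aperture: (B, 1), e.g. f-number scaled to [0, 1].
        scale, shift = self.aperture_mlp(aperture).unsqueeze(1).chunk(2, dim=-1)
        x = tokens * (1 + scale) + shift   # FiLM-style modulation by aperture
        out, _ = self.attn(x, x, x)        # self-attention over modulated tokens
        return tokens + out                # residual connection
```

Conditioning the features on aperture in this way would let a single network render a continuum of bokeh strengths instead of one fixed aperture.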
- Variable Aperture Bokeh Rendering via Customized Focal Plane Guidance [18.390543681127976]
The proposed method has achieved competitive state-of-the-art performance with only 4.4M parameters, which is much lighter than mainstream computational bokeh models.
arXiv Detail & Related papers (2024-10-18T12:04:23Z)
- Towards Real-World Focus Stacking with Deep Learning [97.34754533628322]
We introduce a new dataset consisting of 94 high-resolution bursts of raw images with focus bracketing.
This dataset is used to train the first deep learning algorithm for focus stacking capable of handling bursts of sufficient length for real-world applications.
arXiv Detail & Related papers (2023-11-29T17:49:33Z)
- Defocus to focus: Photo-realistic bokeh rendering by fusing defocus and radiance priors [26.38833313692807]
Bokeh rendering mimics aesthetic shallow depth-of-field (DoF) in professional photography.
Existing methods suffer from overly simple, flat background blur and from blurred in-focus regions.
We present a Defocus to Focus (D2F) framework to learn realistic bokeh rendering.
arXiv Detail & Related papers (2023-06-07T15:15:13Z)
- Defocus Blur Detection via Depth Distillation [64.78779830554731]
We introduce depth information into defocus blur detection (DBD) for the first time. Specifically, we learn defocus blur from ground-truth annotations together with depth distilled from a well-trained depth estimation network.
Our approach outperforms 11 other state-of-the-art methods on two popular datasets.
arXiv Detail & Related papers (2020-07-16T04:58:09Z)
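As one concrete reading of the depth-distillation idea above, the sketch below trains a two-head student: the defocus head is supervised by ground truth while an auxiliary depth head regresses depth distilled from a frozen, well-trained teacher. The heads, losses, and loss weight are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def dbd_distillation_loss(student, teacher_depth_net, image, defocus_gt):
    """Sketch of depth-distilled defocus blur detection (DBD) training.
    `student` is assumed to return (defocus_logits, depth_pred); `defocus_gt`
    is a float blur mask in [0, 1]. Hypothetical losses and weighting."""
    defocus_logits, depth_pred = student(image)
    with torch.no_grad():
        depth_target = teacher_depth_net(image)  # frozen pre-trained depth teacher
    loss_defocus = F.binary_cross_entropy_with_logits(defocus_logits, defocus_gt)
    loss_depth = F.l1_loss(depth_pred, depth_target)  # distillation term
    return loss_defocus + 0.5 * loss_depth            # 0.5 is an assumed weight
```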
- Rapid Whole Slide Imaging via Learning-based Two-shot Virtual Autofocusing [57.90239401665367]
Whole slide imaging (WSI) is an emerging technology for digital pathology.
We propose the concept of virtual autofocusing, which does not rely on mechanical adjustment for refocusing.
arXiv Detail & Related papers (2020-03-14T13:40:33Z)