Combining Generative and Geometry Priors for Wide-Angle Portrait Correction
- URL: http://arxiv.org/abs/2410.09911v1
- Date: Sun, 13 Oct 2024 16:36:52 GMT
- Title: Combining Generative and Geometry Priors for Wide-Angle Portrait Correction
- Authors: Lan Yao, Chaofeng Chen, Xiaoming Li, Zifei Yan, Wangmeng Zuo
- Abstract summary: We propose encapsulating the generative face prior as a guided natural manifold to facilitate the correction of facial regions.
A notable central symmetry relationship exists in the non-face background, yet it has not been explored in the correction process.
This geometry prior motivates us to introduce a novel constraint to explicitly enforce symmetry throughout the correction process.
- Score: 54.448014761978975
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Wide-angle lens distortion in portrait photography presents a significant challenge for capturing photo-realistic and aesthetically pleasing images. Such distortions are especially noticeable in facial regions. In this work, we propose encapsulating the generative face prior as a guided natural manifold to facilitate the correction of facial regions. Moreover, a notable central symmetry relationship exists in the non-face background, yet it has not been explored in the correction process. This geometry prior motivates us to introduce a novel constraint to explicitly enforce symmetry throughout the correction process, thereby contributing to a more visually appealing and natural correction in the non-face region. Experiments demonstrate that our approach outperforms previous methods by a large margin, excelling not only in quantitative measures such as line straightness and shape consistency metrics but also in terms of perceptual visual quality. All the code and models are available at https://github.com/Dev-Mrha/DualPriorsCorrection.
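To make the central-symmetry geometry prior concrete, the sketch below shows one way such a constraint could be implemented as a training loss. It is a minimal illustration and not the authors' released code: it assumes the correction network predicts a per-pixel displacement (flow) field and that a binary face mask is available, and the function name `central_symmetry_loss` and both tensor layouts are hypothetical.

```python
import torch

def central_symmetry_loss(flow: torch.Tensor, face_mask: torch.Tensor) -> torch.Tensor:
    """Hypothetical central-symmetry constraint on the background displacement field.

    flow      : (B, 2, H, W) per-pixel displacement predicted by the correction net.
    face_mask : (B, 1, H, W) binary mask, 1 inside facial regions.

    Under central symmetry about the image centre, a pixel and its antipode
    (the pixel mirrored through the centre) should move in opposite directions,
    i.e. u(p) ~= -u(antipode(p)), so flow + rot180(flow) should vanish.
    """
    # 180-degree rotation of the field = flip along both spatial axes.
    flow_rot = torch.flip(flow, dims=(2, 3))
    residual = flow + flow_rot
    # Enforce the constraint only where neither the pixel nor its antipode is a face pixel.
    bg = (1.0 - face_mask) * (1.0 - torch.flip(face_mask, dims=(2, 3)))
    denom = (bg.sum() * flow.shape[1]).clamp(min=1.0)
    return (residual.abs() * bg).sum() / denom


if __name__ == "__main__":
    flow = torch.randn(1, 2, 256, 256)   # dummy displacement field
    mask = torch.zeros(1, 1, 256, 256)
    mask[..., 96:160, 96:160] = 1.0      # dummy face box
    print(central_symmetry_loss(flow, mask).item())
```

Flipping along both spatial axes realises the 180-degree (central) symmetry, the negation is folded into the sum (flow + rot180(flow) = 0 under perfect symmetry), and the mask restricts the penalty to the non-face background as described in the abstract; the paper's actual formulation may differ.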
Related papers
- RecDiffusion: Rectangling for Image Stitching with Diffusion Models [53.824503710254206]
We introduce a novel diffusion-based learning framework, RecDiffusion, for image stitching rectangling.
This framework combines Motion Diffusion Models (MDM) to generate motion fields, effectively transitioning from the stitched image's irregular borders to a geometrically corrected intermediary.
arXiv Detail & Related papers (2024-03-28T06:22:45Z)
- How to turn your camera into a perfect pinhole model [0.38233569758620056]
We propose a novel approach that involves a pre-processing step to remove distortions from images.
Our method does not need to assume any distortion model and can be applied to severely warped images.
The resulting pinhole model enables significant improvements to many algorithms and applications.
arXiv Detail & Related papers (2023-09-20T13:54:29Z)
- DisCO: Portrait Distortion Correction with Perspective-Aware 3D GANs [24.483597004603812]
Close-up facial images captured at short distances often suffer from perspective distortion.
We propose a simple yet effective method for correcting perspective distortions in a single close-up face.
arXiv Detail & Related papers (2023-02-23T18:59:56Z)
- Parallax-Tolerant Unsupervised Deep Image Stitching [57.76737888499145]
We propose UDIS++, a parallax-tolerant unsupervised deep image stitching technique.
First, we propose a robust and flexible warp to model the image registration from global homography to local thin-plate spline motion.
To further eliminate parallax artifacts, we propose to seamlessly composite the stitched image via unsupervised learning of seam-driven composition masks.
arXiv Detail & Related papers (2023-02-16T10:40:55Z)
- Deep Rectangling for Image Stitching: A Learning Baseline [57.76737888499145]
We build the first image stitching rectangling dataset with a large diversity in irregular boundaries and scenes.
Experiments demonstrate our superiority over traditional methods both quantitatively and qualitatively.
arXiv Detail & Related papers (2022-03-08T03:34:10Z)
- Learning to Aggregate and Personalize 3D Face from In-the-Wild Photo Collection [65.92058628082322]
Non-parametric face modeling aims to reconstruct 3D face only from images without shape assumptions.
This paper presents a novel Learning to Aggregate and Personalize framework for unsupervised robust 3D face modeling.
arXiv Detail & Related papers (2021-06-15T03:10:17Z)
- Pixel Sampling for Style Preserving Face Pose Editing [53.14006941396712]
We present a novel two-stage approach that casts the task of face pose manipulation as face inpainting.
By selectively sampling pixels from the input face and slightly adjusting their relative locations, the face editing result faithfully preserves both the identity information and the image style.
With the 3D facial landmarks as guidance, our method is able to manipulate face pose in three degrees of freedom, i.e., yaw, pitch, and roll, resulting in more flexible face pose editing.
arXiv Detail & Related papers (2021-06-14T11:29:29Z)
- Practical Wide-Angle Portraits Correction with Deep Structured Models [17.62752136436382]
This paper introduces the first deep learning-based approach to removing perspective distortions from photos.
Given a wide-angle portrait as input, we build a cascaded network consisting of a LineNet, a ShapeNet, and a transition module.
For the quantitative evaluation, we introduce two novel metrics, line consistency and face congruence.
arXiv Detail & Related papers (2021-04-26T10:47:35Z)
- Vanishing Point Guided Natural Image Stitching [13.307030394454216]
We propose a novel natural image stitching method that takes the guidance of vanishing points into account to address common stitching failures.
Inspired by the key observation that vanishing points in a Manhattan world can provide useful orientation clues, we design a scheme to effectively estimate the prior of image similarity.
Our method achieves state-of-the-art performance in both quantitative and qualitative experiments on natural image stitching.
arXiv Detail & Related papers (2020-04-06T08:29:40Z)