Progressively Complementary Network for Fisheye Image Rectification
Using Appearance Flow
- URL: http://arxiv.org/abs/2103.16026v2
- Date: Wed, 31 Mar 2021 01:56:51 GMT
- Title: Progressively Complementary Network for Fisheye Image Rectification
Using Appearance Flow
- Authors: Shangrong Yang, Chunyu Lin, Kang Liao, Chunjie Zhang, Yao Zhao
- Abstract summary: We propose a feature-level correction scheme for distortion rectification network.
We embed a correction layer in the skip-connections and leverage appearance flows at different layers to pre-correct the image features.
It effectively reduces the burden of the decoder by separating content reconstruction and structure correction.
- Score: 41.465257944454756
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Distortion rectification is often required for fisheye images. The
generation-based method is one mainstream solution due to its label-free
property, but its naive skip-connection and overburdened decoder will cause
blur and incomplete correction. First, the skip-connection directly transfers
the image features, which may introduce distortion and cause incomplete
correction. Second, the decoder is overburdened by simultaneously
reconstructing both the content and the structure of the image, resulting in
blurry results. To solve these two problems, in this paper, we focus on the
interpretable correction mechanism of the distortion rectification network and
propose a feature-level correction scheme. We embed a correction layer in
skip-connection and leverage the appearance flows in different layers to
pre-correct the image features. Consequently, the decoder can easily
reconstruct a plausible result with the remaining distortion-less information.
In addition, we propose a parallel complementary structure. It effectively
reduces the burden of the decoder by separating content reconstruction and
structure correction. Subjective and objective experiment results on different
datasets demonstrate the superiority of our method.
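The feature-level correction described in the abstract amounts to warping encoder features by an appearance flow before they cross the skip-connection, so the decoder receives pre-corrected features. The snippet below is a minimal NumPy sketch of such a flow-based warp, not the authors' implementation: the function name `warp_by_flow`, the (H, W, C) feature layout, and the bilinear sampling with border clamping are all illustrative assumptions.

```python
import numpy as np

def warp_by_flow(features, flow):
    """Warp a feature map by an appearance flow field via bilinear sampling.

    features: (H, W, C) array.
    flow: (H, W, 2) array of (dy, dx) offsets giving, for each output
    pixel, where to sample in the input feature map.
    """
    H, W, _ = features.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Sampling coordinates, clamped to the feature-map borders.
    sy = np.clip(ys + flow[..., 0], 0, H - 1)
    sx = np.clip(xs + flow[..., 1], 0, W - 1)
    # Integer corners and fractional weights for bilinear interpolation.
    y0 = np.floor(sy).astype(int)
    x0 = np.floor(sx).astype(int)
    y1 = np.minimum(y0 + 1, H - 1)
    x1 = np.minimum(x0 + 1, W - 1)
    wy = (sy - y0)[..., None]
    wx = (sx - x0)[..., None]
    top = features[y0, x0] * (1 - wx) + features[y0, x1] * wx
    bot = features[y1, x0] * (1 - wx) + features[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

In a U-Net-style rectifier, a warp of this kind would be applied to each skip-connected feature map using an appearance flow predicted at that scale, so that only distortion-reduced information is forwarded to the decoder.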
Related papers
- Spatial-Contextual Discrepancy Information Compensation for GAN
Inversion [67.21442893265973]
We introduce a novel spatial-contextual discrepancy information compensation-based GAN-inversion method (SDIC).
SDIC bridges the gap in image details between the original image and the reconstructed/edited image.
Our proposed method achieves an excellent distortion-editability trade-off at fast inference speed for both image inversion and editing tasks.
arXiv Detail & Related papers (2023-12-12T08:58:56Z) - Spatiotemporal Deformation Perception for Fisheye Video Rectification [44.332845280150785]
We propose a temporal weighting scheme to get a plausible global optical flow.
We derive the spatial deformation from the flows of fisheye and distortion-free videos.
A temporal deformation aggregator is designed to reconstruct the deformation correlation between frames.
arXiv Detail & Related papers (2023-02-08T08:17:50Z) - Eliminating Contextual Prior Bias for Semantic Image Editing via
Dual-Cycle Diffusion [35.95513392917737]
A novel approach called Dual-Cycle Diffusion generates an unbiased mask to guide image editing.
Our experiments demonstrate the effectiveness of the proposed method, as it significantly improves the D-CLIP score from 0.272 to 0.283.
arXiv Detail & Related papers (2023-02-05T14:30:22Z) - ReGANIE: Rectifying GAN Inversion Errors for Accurate Real Image Editing [20.39792009151017]
StyleGAN allows for flexible and plausible editing of generated images by manipulating the semantic-rich latent style space.
Projecting a real image into its latent space encounters an inherent trade-off between inversion quality and editability.
We propose a novel two-phase framework by designating two separate networks to tackle editing and reconstruction respectively.
arXiv Detail & Related papers (2023-01-31T04:38:42Z) - Editing Out-of-domain GAN Inversion via Differential Activations [56.62964029959131]
We propose a novel GAN prior based editing framework to tackle the out-of-domain inversion problem with a composition-decomposition paradigm.
With the aid of the generated Diff-CAM mask, a coarse reconstruction can intuitively be composited from the paired original and edited images.
In the decomposition phase, we further present a GAN prior based deghosting network for separating the final fine edited image from the coarse reconstruction.
arXiv Detail & Related papers (2022-07-17T10:34:58Z) - Deep Rotation Correction without Angle Prior [57.76737888499145]
We propose a new and practical task, named Rotation Correction, to automatically correct the tilt with high content fidelity.
This task can be easily integrated into image editing applications, allowing users to correct the rotated images without any manual operations.
We leverage a neural network to predict the optical flows that can warp the tilted images to be perceptually horizontal.
arXiv Detail & Related papers (2022-07-07T02:46:27Z) - High-Fidelity GAN Inversion for Image Attribute Editing [61.966946442222735]
We present a novel high-fidelity generative adversarial network (GAN) inversion framework that enables attribute editing with image-specific details well-preserved.
With a low bit-rate latent code, previous works have difficulties in preserving high-fidelity details in reconstructed and edited images.
We propose a distortion consultation approach that employs a distortion map as a reference for high-fidelity reconstruction.
arXiv Detail & Related papers (2021-09-14T11:23:48Z) - Generative and Discriminative Learning for Distorted Image Restoration [22.230017059874445]
Liquify is an image-editing technique that can be used to distort images.
We propose a novel generative and discriminative learning method based on deep neural networks.
arXiv Detail & Related papers (2020-11-11T14:01:29Z) - A Deep Ordinal Distortion Estimation Approach for Distortion Rectification [62.72089758481803]
We propose a novel distortion rectification approach that can obtain more accurate parameters with higher efficiency.
We design a local-global associated estimation network that learns the ordinal distortion to approximate the realistic distortion distribution.
Considering the redundancy of distortion information, our approach uses only part of the distorted image for ordinal distortion estimation.
arXiv Detail & Related papers (2020-07-21T10:03:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.