Learning Edge-Preserved Image Stitching from Large-Baseline Deep
Homography
- URL: http://arxiv.org/abs/2012.06194v1
- Date: Fri, 11 Dec 2020 08:43:30 GMT
- Title: Learning Edge-Preserved Image Stitching from Large-Baseline Deep
Homography
- Authors: Lang Nie, Chunyu Lin, Kang Liao, Yao Zhao
- Abstract summary: We propose an image stitching learning framework, which consists of a large-baseline deep homography module and an edge-preserved deformation module.
Our method is superior to existing learning methods and shows competitive performance with state-of-the-art traditional methods.
- Score: 32.28310831466225
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image stitching is a classical and crucial technique in computer vision,
which aims to generate an image with a wide field of view. Traditional
methods depend heavily on feature detection and require scene features to be
dense and evenly distributed in the image, leading to varying ghosting
effects and poor robustness. Learning-based methods usually suffer from
fixed-view and fixed-input-size limitations, showing a lack of generalization ability on other
real datasets. In this paper, we propose an image stitching learning framework,
which consists of a large-baseline deep homography module and an edge-preserved
deformation module. First, we propose a large-baseline deep homography module
to estimate an accurate projective transformation between the reference image
and the target image across different feature scales. After that, an
edge-preserved deformation module is designed to learn the deformation rules of
image stitching from edge to content, eliminating the ghosting effects as much
as possible. In particular, the proposed learning framework can stitch images
of arbitrary views and input sizes, thus contributing to a supervised deep image
stitching method with excellent generalization capability on other real images.
Experimental results demonstrate that our homography module significantly
outperforms existing deep homography methods in large-baseline scenes.
In image stitching, our method is superior to existing learning methods and
shows competitive performance with state-of-the-art traditional methods.
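The projective transformation estimated by the homography module is a 3x3 homography matrix. As a point of reference for what the deep module learns to predict, the sketch below shows the classical direct linear transform (DLT), which recovers a homography from four or more point correspondences. This is the textbook baseline in NumPy, not the paper's network:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 homography H with dst ~ H @ src via the
    direct linear transform (DLT). src, dst: (N, 2) arrays, N >= 4."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A: the right singular
    # vector associated with the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(A))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] = 1

def apply_homography(H, pts):
    """Warp (N, 2) points by H using homogeneous coordinates."""
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]

# Example: recover a known homography from four correspondences.
H_true = np.array([[1.2, 0.1, 5.0],
                   [0.0, 0.9, -3.0],
                   [1e-3, 0.0, 1.0]])
src = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 100.0], [0.0, 100.0]])
H_est = estimate_homography(src, apply_homography(H_true, src))
```

With exactly four correspondences the system has a one-dimensional null space, so the estimate is exact up to scale; the deep module in the paper replaces this with multi-scale learned features for robustness in large-baseline scenes.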
Related papers
- Deep ContourFlow: Advancing Active Contours with Deep Learning [3.9948520633731026]
We present a framework for both unsupervised and one-shot approaches to image segmentation.
It captures complex object boundaries without requiring extensive labeled training data.
This is particularly valuable in histology, a field facing a significant shortage of annotations.
arXiv Detail & Related papers (2024-07-15T13:12:34Z)
- Curved Diffusion: A Generative Model With Optical Geometry Control [56.24220665691974]
The influence of different optical systems on the final scene appearance is frequently overlooked.
This study introduces a framework that intimately integrates a text-to-image diffusion model with the particular lens used in image rendering.
arXiv Detail & Related papers (2023-11-29T13:06:48Z)
- Parallax-Tolerant Unsupervised Deep Image Stitching [57.76737888499145]
We propose UDIS++, a parallax-tolerant unsupervised deep image stitching technique.
First, we propose a robust and flexible warp to model the image registration from global homography to local thin-plate spline motion.
To further eliminate parallax artifacts, we propose to composite the stitched image seamlessly by learning seam-driven composition masks in an unsupervised manner.
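For context on the "thin-plate spline motion" this entry mentions, here is a minimal NumPy sketch of thin-plate spline (TPS) interpolation, fitted one output coordinate at a time. It illustrates the warp family, not the UDIS++ implementation:

```python
import numpy as np

def tps_kernel(r):
    """Thin-plate spline radial basis U(r) = r^2 log(r), with U(0) = 0."""
    return np.where(r == 0, 0.0, r**2 * np.log(np.maximum(r, 1e-12)))

def fit_tps(ctrl, target):
    """Fit TPS coefficients mapping control points ctrl (N, 2) to
    scalar target values (N,). Returns (w_1..w_N, a0, ax, ay)."""
    n = len(ctrl)
    d = np.linalg.norm(ctrl[:, None, :] - ctrl[None, :, :], axis=-1)
    K = tps_kernel(d)
    P = np.hstack([np.ones((n, 1)), ctrl])  # affine part
    L = np.zeros((n + 3, n + 3))
    L[:n, :n] = K
    L[:n, n:] = P
    L[n:, :n] = P.T
    rhs = np.concatenate([target, np.zeros(3)])
    return np.linalg.solve(L, rhs)

def eval_tps(coef, ctrl, pts):
    """Evaluate the fitted TPS at query points pts (M, 2)."""
    w, a = coef[:len(ctrl)], coef[len(ctrl):]
    d = np.linalg.norm(pts[:, None, :] - ctrl[None, :, :], axis=-1)
    return tps_kernel(d) @ w + a[0] + pts @ a[1:]
```

A 2D warp fits one such model per output coordinate; the TPS interpolates the control points exactly while staying smooth in between, which is why it generalizes the global homography to local motion.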
arXiv Detail & Related papers (2023-02-16T10:40:55Z)
- RecRecNet: Rectangling Rectified Wide-Angle Images by Thin-Plate Spline Model and DoF-based Curriculum Learning [62.86400614141706]
We propose a new learning model, the Rectangling Rectification Network (RecRecNet).
Our model can flexibly warp the source structure to the target domain and achieves an end-to-end unsupervised deformation.
Experiments show the superiority of our solution over the compared methods on both quantitative and qualitative evaluations.
arXiv Detail & Related papers (2023-01-04T15:12:57Z)
- Generalizable Person Re-Identification via Viewpoint Alignment and Fusion [74.30861504619851]
This work proposes to use a 3D dense pose estimation model and a texture mapping module to map pedestrian images to canonical view images.
Due to the imperfection of the texture mapping module, the canonical view images may lose the discriminative detail clues from the original images.
We show that our method can lead to superior performance over the existing approaches in various evaluation settings.
arXiv Detail & Related papers (2022-12-05T16:24:09Z)
- Weakly-Supervised Stitching Network for Real-World Panoramic Image Generation [17.19847723103836]
We develop a weakly-supervised learning mechanism to train the stitching model without requiring genuine ground truth images.
In particular, our model consists of color consistency corrections, warping, and blending, and is trained by perceptual and SSIM losses.
The effectiveness of the proposed algorithm is verified on two real-world stitching datasets.
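To illustrate the SSIM loss this entry trains with, here is a simplified single-window SSIM in NumPy. Practical implementations average SSIM over local Gaussian windows; this global variant only shows the structure of the formula:

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Single-window SSIM over whole images x, y (same shape, floats).
    Standard constants C1 = (0.01 L)^2, C2 = (0.03 L)^2 for range L."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()  # cross-covariance
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx**2 + my**2 + c1) * (vx + vy + c2)
    )
```

SSIM is 1 for identical images and decreases as luminance, contrast, or structure diverge; a loss would typically use `1 - ssim`.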
arXiv Detail & Related papers (2022-09-13T13:01:47Z)
- Pixel-wise Deep Image Stitching [21.824319551526294]
Image stitching aims at stitching the images taken from different viewpoints into an image with a wider field of view.
Existing methods warp the target image to the reference image using the estimated warp function.
We propose a novel deep image stitching framework exploiting the pixel-wise warp field to handle the large-parallax problem.
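A pixel-wise warp field of the kind this entry describes assigns each output pixel its own sampling offset. Below is a minimal backward-warping sketch with bilinear interpolation, a generic illustration rather than the paper's framework:

```python
import numpy as np

def warp_bilinear(img, flow):
    """Backward-warp a grayscale image by a per-pixel flow field.
    img: (H, W) floats; flow: (H, W, 2) holding (dx, dy) offsets.
    Output pixel (y, x) samples img at (y + dy, x + dx) bilinearly,
    with sample coordinates clamped to the image border."""
    h, w = img.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    sx = np.clip(xx + flow[..., 0], 0, w - 1)
    sy = np.clip(yy + flow[..., 1], 0, h - 1)
    x0 = np.floor(sx).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    y0 = np.floor(sy).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    fx, fy = sx - x0, sy - y0
    top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
    bot = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
    return top * (1 - fy) + bot * fy
```

A homography induces one global warp field; letting the network predict the field per pixel is what allows large-parallax scenes to be handled.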
arXiv Detail & Related papers (2021-12-12T07:28:48Z)
- BoundarySqueeze: Image Segmentation as Boundary Squeezing [104.43159799559464]
We propose a novel method for fine-grained high-quality image segmentation of both objects and scenes.
Inspired by dilation and erosion from morphological image processing, we treat pixel-level segmentation as squeezing the object boundary.
Our method yields large gains on COCO, Cityscapes, for both instance and semantic segmentation and outperforms previous state-of-the-art PointRend in both accuracy and speed under the same setting.
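The dilation-and-erosion intuition above can be made concrete: the band between a mask's dilation and its erosion is the uncertain boundary region such a method would squeeze. A small NumPy sketch with a 4-connected structuring element (illustrative only, not the paper's method):

```python
import numpy as np

def dilate4(mask):
    """Binary dilation with a 4-connected (plus-shaped) element."""
    m = mask.astype(bool)
    out = m.copy()
    out[1:, :] |= m[:-1, :]   # neighbor above
    out[:-1, :] |= m[1:, :]   # neighbor below
    out[:, 1:] |= m[:, :-1]   # neighbor left
    out[:, :-1] |= m[:, 1:]   # neighbor right
    return out

def erode4(mask):
    """Binary erosion via duality: erosion = complement of the
    dilated complement."""
    return ~dilate4(~mask.astype(bool))

def boundary_band(mask):
    """Thin band between dilation and erosion -- the region a
    boundary-squeezing refinement would operate on."""
    return dilate4(mask) & ~erode4(mask)
```

On a 3x3 square inside a 5x5 grid, erosion keeps only the center pixel and the band covers the square's rim plus one pixel outward.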
arXiv Detail & Related papers (2021-05-25T04:58:51Z)
- Practical Wide-Angle Portraits Correction with Deep Structured Models [17.62752136436382]
This paper introduces the first deep-learning-based approach to removing perspective distortions from photos.
Given a wide-angle portrait as input, we build a cascaded network consisting of a LineNet, a ShapeNet, and a transition module.
For the quantitative evaluation, we introduce two novel metrics, line consistency and face congruence.
arXiv Detail & Related papers (2021-04-26T10:47:35Z)
- Distilling Localization for Self-Supervised Representation Learning [82.79808902674282]
Contrastive learning has revolutionized unsupervised representation learning.
Current contrastive models are ineffective at localizing the foreground object.
We propose a data-driven approach for learning invariance to backgrounds.
arXiv Detail & Related papers (2020-04-14T16:29:42Z)
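Background invariance of the kind described in the last entry is commonly encouraged by compositing a masked foreground onto randomly chosen backgrounds during augmentation. Below is a hedged sketch of that generic augmentation; `random_background` is a hypothetical helper name, not the paper's API:

```python
import numpy as np

def composite_foreground(image, mask, background):
    """Paste the masked foreground of `image` onto `background`.
    image, background: (H, W, C) floats; mask: (H, W) in [0, 1]."""
    alpha = mask[..., None]  # broadcast the mask over channels
    return alpha * image + (1 - alpha) * background

def random_background(image, mask, backgrounds, rng):
    """Composite the foreground over a randomly chosen background,
    a simple way to encourage background-invariant features."""
    idx = rng.integers(0, len(backgrounds))
    return composite_foreground(image, mask, backgrounds[idx])
```

Training a contrastive model on such composites pushes the representation to depend on the foreground object rather than background context.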
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.