RecDiffusion: Rectangling for Image Stitching with Diffusion Models
- URL: http://arxiv.org/abs/2403.19164v1
- Date: Thu, 28 Mar 2024 06:22:45 GMT
- Title: RecDiffusion: Rectangling for Image Stitching with Diffusion Models
- Authors: Tianhao Zhou, Haipeng Li, Ziyi Wang, Ao Luo, Chen-Lin Zhang, Jiajun Li, Bing Zeng, Shuaicheng Liu
- Abstract summary: We introduce a novel diffusion-based learning framework, RecDiffusion, for image stitching rectangling.
This framework combines Motion Diffusion Models (MDM) to generate motion fields, effectively transitioning from the stitched image's irregular borders to a geometrically corrected intermediary.
- Score: 53.824503710254206
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image stitching from different captures often results in non-rectangular boundaries, which are often considered unappealing. Current solutions involve cropping, which discards image content; inpainting, which can introduce unrelated content; or warping, which can distort non-linear features and introduce artifacts. To overcome these issues, we introduce a novel diffusion-based learning framework, RecDiffusion, for image stitching rectangling. This framework uses Motion Diffusion Models (MDM) to generate motion fields, effectively transitioning from the stitched image's irregular borders to a geometrically corrected intermediary, followed by Content Diffusion Models (CDM) for image detail refinement. Notably, our sampling process utilizes a weighted map to identify regions needing correction during each iteration of CDM. RecDiffusion ensures geometric accuracy and overall visual appeal, surpassing all previous methods in both quantitative and qualitative measures when evaluated on public benchmarks. Code is released at https://github.com/lhaippp/RecDiffusion.
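The two-stage pipeline described in the abstract can be sketched in minimal numpy form. This is an illustrative stand-in only, not the authors' implementation: a dense motion field (the MDM output) warps the stitched image toward a rectangular canvas, and a weighted update loop mimics how the CDM sampling corrects only regions flagged by the weight map. All function and variable names here are hypothetical.

```python
import numpy as np

def warp_with_motion_field(image, flow):
    """Stage 1 stand-in (MDM): pull pixels along a dense backward motion
    field to map irregular stitched borders onto a rectangular canvas."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(ys + flow[..., 1], 0, h - 1).round().astype(int)
    src_x = np.clip(xs + flow[..., 0], 0, w - 1).round().astype(int)
    return image[src_y, src_x]

def weighted_refine(image, reference, weight, steps=10):
    """Stage 2 stand-in (CDM): at each iteration, blend the estimate
    toward the reference only where the weight map flags a region as
    needing correction; reliable content (weight ~ 0) is left intact."""
    out = image.astype(float)
    for _ in range(steps):
        out = (1.0 - weight) * out + weight * reference
    return out
```

With a zero motion field the warp is the identity, and with a weight map of ones the refinement converges to the reference; in the paper, both the motion field and the refined content are instead sampled from trained diffusion models.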
Related papers
- Modification Takes Courage: Seamless Image Stitching via Reference-Driven Inpainting [0.17975553762582286]
Current image stitching methods produce noticeable seams in challenging scenarios such as uneven hue and large parallax.
We propose the Reference-Driven Inpainting Stitcher (RDIStitcher), which reformulates image fusion and rectangling as a reference-based inpainting problem.
We present the Multimodal Large Language Models (MLLMs)-based metrics, offering a new perspective on evaluating stitched image quality.
arXiv Detail & Related papers (2024-11-15T16:05:01Z) - Learning to Rank Patches for Unbiased Image Redundancy Reduction [80.93989115541966]
Images suffer from heavy spatial redundancy because pixels in neighboring regions are spatially correlated.
Existing approaches strive to overcome this limitation by reducing less meaningful image regions.
We propose a self-supervised framework for image redundancy reduction called Learning to Rank Patches.
arXiv Detail & Related papers (2024-03-31T13:12:41Z) - Towards Coherent Image Inpainting Using Denoising Diffusion Implicit Models [43.83732051916894]
We propose COPAINT, which can coherently inpaint the whole image without introducing mismatches.
COPAINT also uses the Bayesian framework to jointly modify both revealed and unrevealed regions.
Our experiments verify that COPAINT can outperform the existing diffusion-based methods under both objective and subjective metrics.
arXiv Detail & Related papers (2023-04-06T18:35:13Z) - Parallax-Tolerant Unsupervised Deep Image Stitching [57.76737888499145]
We propose UDIS++, a parallax-tolerant unsupervised deep image stitching technique.
First, we propose a robust and flexible warp to model the image registration from global homography to local thin-plate spline motion.
To further eliminate the parallax artifacts, we propose to composite the stitched image seamlessly by unsupervised learning for seam-driven composition masks.
arXiv Detail & Related papers (2023-02-16T10:40:55Z) - RecRecNet: Rectangling Rectified Wide-Angle Images by Thin-Plate Spline Model and DoF-based Curriculum Learning [62.86400614141706]
We propose a new learning model, i.e., the Rectangling Rectification Network (RecRecNet).
Our model can flexibly warp the source structure to the target domain and achieves an end-to-end unsupervised deformation.
Experiments show the superiority of our solution over the compared methods on both quantitative and qualitative evaluations.
arXiv Detail & Related papers (2023-01-04T15:12:57Z) - Conditional Injective Flows for Bayesian Imaging [18.561430512510956]
Injectivity reduces memory footprint and training time, aided by a low-dimensional latent space and architectural innovations.
C-Trumpets enable fast approximation of point estimates like MMSE or MAP as well as physically-meaningful uncertainty quantification.
arXiv Detail & Related papers (2022-04-15T22:26:21Z) - Automatic Registration of Images with Inconsistent Content Through Line-Support Region Segmentation and Geometrical Outlier Removal [17.90609572352273]
This paper proposes an automatic image registration approach through line-support region segmentation and geometrical outlier removal (ALRS-GOR)
It is designed to address the problems associated with the registration of images with affine deformations and inconsistent content.
Various image sets have been considered for the evaluation of the proposed approach, including aerial images with simulated affine deformations.
arXiv Detail & Related papers (2022-04-02T10:47:16Z) - Deep Rectangling for Image Stitching: A Learning Baseline [57.76737888499145]
We build the first image stitching rectangling dataset with a large diversity in irregular boundaries and scenes.
Experiments demonstrate our superiority over traditional methods both quantitatively and qualitatively.
arXiv Detail & Related papers (2022-03-08T03:34:10Z) - Denoising Diffusion Restoration Models [110.1244240726802]
Denoising Diffusion Restoration Models (DDRM) is an efficient, unsupervised posterior sampling method.
We demonstrate DDRM's versatility on several image datasets for super-resolution, deblurring, inpainting, and colorization.
arXiv Detail & Related papers (2022-01-27T20:19:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.