Prune and Repaint: Content-Aware Image Retargeting for any Ratio
- URL: http://arxiv.org/abs/2410.22865v1
- Date: Wed, 30 Oct 2024 10:02:42 GMT
- Title: Prune and Repaint: Content-Aware Image Retargeting for any Ratio
- Authors: Feihong Shen, Chao Li, Yifeng Geng, Yongjian Deng, Hao Chen,
- Abstract summary: We propose a content-aware method called PruneRepaint to balance the preservation of key semantics and image quality.
By focusing on the content and structure of the foreground, our PruneRepaint approach adaptively avoids key content loss and deformation.
- Score: 8.665919238538143
- License:
- Abstract: Image retargeting is the task of adjusting the aspect ratio of images to suit different display devices or presentation environments. However, existing retargeting methods often struggle to balance the preservation of key semantics and image quality, resulting in either deformation or loss of important objects, or the introduction of local artifacts such as discontinuous pixels and inconsistent regenerated content. To address these issues, we propose a content-aware retargeting method called PruneRepaint. It incorporates semantic importance for each pixel to guide the identification of regions that need to be pruned or preserved in order to maintain key semantics. Additionally, we introduce an adaptive repainting module that selects image regions for repainting based on the distribution of pruned pixels and the proportion between foreground size and target aspect ratio, thus achieving local smoothness after pruning. By focusing on the content and structure of the foreground, our PruneRepaint approach adaptively avoids key content loss and deformation, while effectively mitigating artifacts with local repainting. We conduct experiments on the public RetargetMe benchmark and demonstrate through objective experimental results and subjective user studies that our method outperforms previous approaches in terms of preserving semantics and aesthetics, as well as better generalization across diverse aspect ratios. Codes will be available at https://github.com/fhshen2022/PruneRepaint.
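The prune-then-repaint pipeline in the abstract can be sketched at a high level. The following is a minimal illustrative simplification, not the authors' implementation: it prunes whole columns by an assumed per-pixel importance map and merely flags the discontinuities left behind as candidates for repainting, whereas the paper prunes at the pixel level and repaints with an adaptive generative module. The function `prune_and_mark` and all of its parameters are hypothetical names introduced for illustration.

```python
import numpy as np

def prune_and_mark(image, importance, target_width):
    """Remove the least semantically important columns until the image
    reaches target_width, then flag kept columns that bordered a pruned
    column (where pruning introduced a discontinuity) for local repainting.

    image:      (H, W, 3) float array
    importance: (H, W) per-pixel semantic importance (assumed given,
                e.g. from a saliency or segmentation model)
    """
    h, w, _ = image.shape
    assert 0 < target_width < w
    col_score = importance.sum(axis=0)              # aggregate importance per column
    # Keep the target_width most important columns, in original spatial order.
    keep = np.sort(np.argsort(col_score)[w - target_width:])
    retargeted = image[:, keep, :]
    # A kept column is a repaint candidate if one of its original
    # neighbours was pruned away (local smoothness is broken there).
    kept_set = set(keep.tolist())
    needs_repaint = np.array(
        [(c - 1 >= 0 and c - 1 not in kept_set) or
         (c + 1 < w and c + 1 not in kept_set)
         for c in keep]
    )
    return retargeted, needs_repaint
```

In the paper, the flagged regions would then be passed to the adaptive repainting module, which also accounts for the proportion between foreground size and the target aspect ratio; this sketch only identifies where repainting is needed.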
Related papers
- Dense Feature Interaction Network for Image Inpainting Localization [28.028361409524457]
Inpainting can be used to conceal or alter image contents in malicious image manipulation.
Existing methods mostly rely on a basic encoder-decoder structure, which often results in a high number of false positives.
In this paper, we describe a new method for inpainting detection based on a Dense Feature Interaction Network (DeFI-Net)
arXiv Detail & Related papers (2024-08-05T02:35:13Z)
- RefFusion: Reference Adapted Diffusion Models for 3D Scene Inpainting [63.567363455092234]
RefFusion is a novel 3D inpainting method based on a multi-scale personalization of an image inpainting diffusion model to the given reference view.
Our framework achieves state-of-the-art results for object removal while maintaining high controllability.
arXiv Detail & Related papers (2024-04-16T17:50:02Z)
- OAIR: Object-Aware Image Retargeting Using PSO and Aesthetic Quality Assessment [11.031841470875571]
Previous image retargeting methods produce outputs that suffer from artifacts and distortions.
Simultaneous resizing of the foreground and background causes changes in the aspect ratios of the objects.
We propose a method that overcomes these problems.
arXiv Detail & Related papers (2022-09-11T07:16:59Z)
- Perceptual Artifacts Localization for Inpainting [60.5659086595901]
We propose a new learning task of automatic segmentation of inpainting perceptual artifacts.
We train advanced segmentation networks on a dataset to reliably localize inpainting artifacts within inpainted images.
We also propose a new evaluation metric called Perceptual Artifact Ratio (PAR), which is the ratio of objectionable inpainted regions to the entire inpainted area.
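The PAR metric quoted above is a simple area ratio and can be computed directly from two binary masks. This is a sketch of the stated definition, not the authors' released code; the mask shapes and the zero-area convention are assumptions.

```python
import numpy as np

def perceptual_artifact_ratio(artifact_mask, inpaint_mask):
    """PAR = (objectionable inpainted area) / (entire inpainted area).

    artifact_mask: boolean (H, W) array, True where a pixel is flagged
                   as a perceptual artifact.
    inpaint_mask:  boolean (H, W) array, True where a pixel was inpainted.
    Returns 0.0 when nothing was inpainted (convention assumed here).
    """
    inpainted_area = inpaint_mask.sum()
    if inpainted_area == 0:
        return 0.0
    # Only count artifact pixels that lie inside the inpainted region.
    return float((artifact_mask & inpaint_mask).sum() / inpainted_area)
```

A lower PAR indicates fewer objectionable regions relative to the total inpainted area.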
arXiv Detail & Related papers (2022-08-05T18:50:51Z)
- Fast Hybrid Image Retargeting [0.0]
We propose a method that quantifies and limits warping distortions with the use of content-aware cropping.
Our method outperforms recent approaches, while running in a fraction of their execution time.
arXiv Detail & Related papers (2022-03-25T11:46:06Z)
- A Wasserstein GAN for Joint Learning of Inpainting and its Spatial Optimisation [3.4392739159262145]
We propose the first generative adversarial network for spatial inpainting data optimisation.
In contrast to previous approaches, it allows joint training of an inpainting generator and a corresponding mask network.
This yields significant improvements in visual quality and speed over conventional models and also outperforms current optimisation networks.
arXiv Detail & Related papers (2022-02-11T14:02:36Z)
- Generative Memory-Guided Semantic Reasoning Model for Image Inpainting [34.092255842494396]
We propose the Generative Memory-Guided Semantic Reasoning Model (GM-SRM) for image inpainting.
The proposed GM-SRM not only learns intra-image priors from the known regions, but also distills inter-image reasoning priors to infer the content of the corrupted regions.
Extensive experiments on Paris Street View, CelebA-HQ, and Places2 benchmarks demonstrate that our GM-SRM outperforms the state-of-the-art methods for image inpainting.
arXiv Detail & Related papers (2021-10-01T08:37:34Z)
- Spatially-Adaptive Image Restoration using Distortion-Guided Networks [51.89245800461537]
We present a learning-based solution for restoring images suffering from spatially-varying degradations.
We propose SPAIR, a network design that harnesses distortion-localization information and dynamically adjusts to difficult regions in the image.
arXiv Detail & Related papers (2021-08-19T11:02:25Z)
- ReGO: Reference-Guided Outpainting for Scenery Image [82.21559299694555]
Generative adversarial learning has advanced image outpainting by producing semantically consistent content for the given image.
This work investigates a principled way to synthesize texture-rich results by borrowing pixels from neighboring regions.
To prevent the style of the generated part from being affected by the reference images, a style ranking loss is proposed to augment the ReGO to synthesize style-consistent results.
arXiv Detail & Related papers (2021-06-20T02:34:55Z)
- Context-Aware Image Inpainting with Learned Semantic Priors [100.99543516733341]
We introduce pretext tasks that are semantically meaningful for estimating the missing contents.
We propose a context-aware image inpainting model, which adaptively integrates global semantics and local features.
arXiv Detail & Related papers (2021-06-14T08:09:43Z)
- Guidance and Evaluation: Semantic-Aware Image Inpainting for Mixed Scenes [54.836331922449666]
We propose a Semantic Guidance and Evaluation Network (SGE-Net) to update the structural priors and the inpainted image.
It utilizes a semantic segmentation map as guidance at each scale of inpainting, under which location-dependent inferences are re-evaluated.
Experiments on real-world images of mixed scenes demonstrated the superiority of our proposed method over state-of-the-art approaches.
arXiv Detail & Related papers (2020-03-15T17:49:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.