Internal Diverse Image Completion
- URL: http://arxiv.org/abs/2212.10280v1
- Date: Sun, 18 Dec 2022 10:02:53 GMT
- Title: Internal Diverse Image Completion
- Authors: Noa Alkobi, Tamar Rott Shaham, Tomer Michaeli
- Abstract summary: We propose a diverse completion method that does not require a training set and can treat arbitrary images from any domain.
Our internal diverse completion (IDC) approach draws inspiration from recent single-image generative models that are trained on multiple scales of a single image.
- Score: 38.068971605321096
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image completion is widely used in photo restoration and editing
applications, e.g. for object removal. Recently, there has been a surge of
research on generating diverse completions for missing regions. However,
existing methods require large training sets from a specific domain of
interest, and often fail on general-content images. In this paper, we propose a
diverse completion method that does not require a training set and can thus
treat arbitrary images from any domain. Our internal diverse completion (IDC)
approach draws inspiration from recent single-image generative models that are
trained on multiple scales of a single image, adapting them to the extreme
setting in which only a small portion of the image is available for training.
We illustrate the strength of IDC on several datasets, using both user studies
and quantitative comparisons.
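The abstract's core mechanism — training a multi-scale generative model on a single image while computing losses only on the observed pixels — can be illustrated with a minimal sketch. Everything below (the tiny generator, the scale factors, the masked reconstruction loss, and the omission of the adversarial term) is an illustrative assumption, not the authors' IDC implementation.
```python
# Minimal sketch of internal (single-image), multi-scale training restricted to
# the known region of a partially observed image. All architecture choices and
# hyper-parameters here are assumptions for illustration, not the IDC code.
import torch
import torch.nn as nn
import torch.nn.functional as F

def build_pyramid(img, mask, num_scales=5, scale_factor=0.75):
    """Downsample the image and its validity mask to several scales (coarse first)."""
    imgs, masks = [img], [mask]
    for _ in range(num_scales - 1):
        img = F.interpolate(img, scale_factor=scale_factor, mode="bilinear",
                            align_corners=False, recompute_scale_factor=True)
        mask = F.interpolate(mask, scale_factor=scale_factor, mode="nearest",
                             recompute_scale_factor=True)
        imgs.append(img)
        masks.append(mask)
    return imgs[::-1], masks[::-1]

class ConvGenerator(nn.Module):
    """Tiny fully convolutional generator reused at every scale (assumption)."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 3, ch, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, noise, coarse):
        # Residual refinement of the (upsampled) output from the coarser scale.
        return coarse + self.net(torch.cat([noise, coarse], dim=1))

def masked_recon_loss(fake, real, mask):
    """Reconstruction loss computed only on known (mask == 1) pixels."""
    return ((fake - real).abs() * mask).sum() / mask.sum().clamp(min=1.0)

# Illustrative training step at the coarsest scale (adversarial term omitted).
img = torch.rand(1, 3, 128, 128)       # the single available image
mask = torch.ones(1, 1, 128, 128)
mask[:, :, 40:90, 40:90] = 0           # the missing region to be completed
imgs, masks = build_pyramid(img, mask)

gen = ConvGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=5e-4)
coarse = torch.zeros_like(imgs[0])     # no coarser scale below the first one
for step in range(10):
    noise = torch.randn_like(imgs[0])  # fresh noise at each step
    fake = gen(noise, coarse)
    loss = masked_recon_loss(fake, imgs[0], masks[0])
    opt.zero_grad()
    loss.backward()
    opt.step()
```
At test time, sampling different noise tensors would yield different plausible fills for the masked region, mirroring the diversity the paper aims for.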
Related papers
- One Diffusion to Generate Them All [54.82732533013014]
OneDiffusion is a versatile, large-scale diffusion model that supports bidirectional image synthesis and understanding.
It enables conditional generation from inputs such as text, depth, pose, layout, and semantic maps.
OneDiffusion allows for multi-view generation, camera pose estimation, and instant personalization using sequential image inputs.
arXiv Detail & Related papers (2024-11-25T12:11:05Z)
- Interactive Image Selection and Training for Brain Tumor Segmentation Network [42.62139206176152]
We employ an interactive method for image selection and training based on Feature Learning from Image Markers (FLIM).
The results demonstrate that with our methodology, a small set of images suffices to train the encoder of a U-shaped network, matching the performance of manual selection and even surpassing the same U-shaped network trained with backpropagation on all training images.
arXiv Detail & Related papers (2024-06-05T13:03:06Z)
- Exposure Bracketing is All You Need for Unifying Image Restoration and Enhancement Tasks [50.822601495422916]
We propose to utilize exposure bracketing photography to unify image restoration and enhancement tasks.
Due to the difficulty in collecting real-world pairs, we suggest a solution that first pre-trains the model with synthetic paired data.
In particular, a temporally modulated recurrent network (TMRNet) and self-supervised adaptation method are proposed.
arXiv Detail & Related papers (2024-01-01T14:14:35Z)
- Domain Generalization for Mammographic Image Analysis with Contrastive Learning [62.25104935889111]
Training an effective deep learning model requires large datasets with diverse styles and qualities.
A novel contrastive learning scheme is developed to equip deep learning models with better style generalization capability (sketched below).
The proposed method has been evaluated extensively and rigorously with mammograms from various vendor style domains and several public datasets.
arXiv Detail & Related papers (2023-04-20T11:40:21Z)
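As a rough illustration of how contrastive learning can encourage style generalization, the sketch below treats two style-perturbed views of the same image as a positive pair and pulls their embeddings together. The augmentation, the encoder interface, and the loss details are assumptions for illustration, not the method of the mammography paper summarized above.
```python
# Illustrative sketch of contrastive learning for style generalization:
# two style-perturbed views of the same image form a positive pair.
# The augmentation, encoder interface, and loss are assumptions, not the
# mammography paper's actual method.
import torch
import torch.nn.functional as F

def style_jitter(x):
    """Crude stand-in for a style perturbation (per-image intensity scale/shift)."""
    scale = 1.0 + 0.2 * torch.randn(x.size(0), 1, 1, 1, device=x.device)
    shift = 0.1 * torch.randn(x.size(0), 1, 1, 1, device=x.device)
    return x * scale + shift

def contrastive_loss(encoder, images, temperature=0.1):
    """NT-Xent-style loss: embeddings of two style-augmented views of the same
    image are pulled together, all other pairings are pushed apart."""
    z1 = F.normalize(encoder(style_jitter(images)), dim=-1)   # (B, D)
    z2 = F.normalize(encoder(style_jitter(images)), dim=-1)   # (B, D)
    logits = z1 @ z2.t() / temperature                        # (B, B)
    targets = torch.arange(images.size(0), device=images.device)
    # The diagonal holds the positive pairs; symmetrize over the two views.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```
An encoder trained this way is pushed to ignore the intensity-style perturbations while keeping different images distinguishable, which is the kind of style invariance the summary refers to.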
- Granularity-aware Adaptation for Image Retrieval over Multiple Tasks [30.505620321478688]
Grappa is an approach that starts from a strong pretrained model, and adapts it to tackle multiple retrieval tasks concurrently.
We reconcile all adaptor sets into a single unified model suited for all retrieval tasks by learning fusion layers.
Results on a benchmark composed of six heterogeneous retrieval tasks show that the unsupervised Grappa model improves the zero-shot performance of a state-of-the-art self-supervised learning model.
arXiv Detail & Related papers (2022-10-05T13:31:52Z)
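The adaptor-fusion idea summarized for Grappa above — per-task adaptors on a frozen pretrained backbone, reconciled by learned fusion layers — can be sketched roughly as follows. The module shapes, the softmax fusion rule, and the training order are assumptions, not the paper's implementation.
```python
# Minimal sketch of fusing several task-specific adaptors on top of a frozen
# pretrained backbone. Names, dimensions, and the fusion rule are illustrative
# assumptions, not Grappa's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Adapter(nn.Module):
    """Small bottleneck adaptor applied to backbone features."""
    def __init__(self, dim=512, hidden=64):
        super().__init__()
        self.down = nn.Linear(dim, hidden)
        self.up = nn.Linear(hidden, dim)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))  # residual adaptation

class FusedRetrievalModel(nn.Module):
    """Frozen backbone + per-task adaptors + learned fusion weights."""
    def __init__(self, backbone, num_tasks=6, dim=512):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False             # keep the pretrained model fixed
        self.adapters = nn.ModuleList([Adapter(dim) for _ in range(num_tasks)])
        self.fusion_logits = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, x):
        feats = self.backbone(x)                # assumed (B, dim) global descriptor
        weights = torch.softmax(self.fusion_logits, dim=0)
        fused = sum(w * a(feats) for w, a in zip(weights, self.adapters))
        return F.normalize(fused, dim=-1)       # single embedding for retrieval
```
In practice the per-task adaptors would presumably be trained on their respective tasks first, after which the fusion weights are learned so that one embedding serves all retrieval tasks.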
- Universal Model for Multi-Domain Medical Image Retrieval [88.67940265012638]
Medical Image Retrieval (MIR) helps doctors quickly find similar patients' data.
MIR is becoming increasingly helpful due to the wide use of digital imaging modalities.
However, the popularity of various digital imaging modalities in hospitals also poses several challenges to MIR.
arXiv Detail & Related papers (2020-07-14T23:22:04Z)
- Multi-Domain Image Completion for Random Missing Input Data [17.53581223279953]
Multi-domain data are widely leveraged in vision applications taking advantage of complementary information from different modalities.
Due to possible data corruption and different imaging protocols, the availability of images for each domain could vary amongst multiple data sources.
We propose a general approach for completing the randomly missing domain(s) in real applications.
arXiv Detail & Related papers (2020-07-10T16:38:48Z)
- Unifying Specialist Image Embedding into Universal Image Embedding [84.0039266370785]
It is desirable to have a universal deep embedding model applicable to various domains of images.
We propose to distill the knowledge in multiple specialists into a universal embedding to solve this problem.
arXiv Detail & Related papers (2020-03-08T02:51:11Z)
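The specialist-to-universal distillation idea summarized above can be sketched as follows: a single universal embedding network is trained to reproduce each frozen specialist's pairwise-similarity structure on that specialist's own domain. The particular loss and interfaces are assumptions, not the paper's exact method.
```python
# Minimal sketch of distilling several specialist embedding models into one
# universal embedding model. The distillation loss (matching each specialist's
# pairwise-similarity structure) is an illustrative assumption.
import torch
import torch.nn.functional as F

def similarity_matrix(emb):
    """Cosine-similarity matrix of a batch of embeddings."""
    emb = F.normalize(emb, dim=-1)
    return emb @ emb.t()                      # (B, B)

def distill_step(universal, specialists, batches, optimizer):
    """One update: the universal model mimics each specialist on its own domain.

    `specialists` and `batches` are dicts keyed by domain name; each specialist
    is a frozen embedding network for that domain (assumption).
    """
    optimizer.zero_grad()
    loss = 0.0
    for domain, x in batches.items():
        with torch.no_grad():
            teacher_sim = similarity_matrix(specialists[domain](x))
        student_sim = similarity_matrix(universal(x))
        loss = loss + F.mse_loss(student_sim, teacher_sim)
    loss.backward()
    optimizer.step()
    return loss.item()
```
Cycling such steps over batches drawn from every specialist's domain would gradually fold all of their knowledge into the single universal embedding.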