Any-resolution Training for High-resolution Image Synthesis
- URL: http://arxiv.org/abs/2204.07156v1
- Date: Thu, 14 Apr 2022 17:59:31 GMT
- Title: Any-resolution Training for High-resolution Image Synthesis
- Authors: Lucy Chai, Michael Gharbi, Eli Shechtman, Phillip Isola, Richard Zhang
- Abstract summary: Generative models operate at fixed resolution, even though natural images come in a variety of sizes.
We argue that every pixel matters and create datasets with variable-size images, collected at their native resolutions.
We introduce continuous-scale training, a process that samples patches at random scales to train a new generator with variable output resolutions.
- Score: 55.19874755679901
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Generative models operate at fixed resolution, even though natural images
come in a variety of sizes. As high-resolution details are downsampled away,
and low-resolution images are discarded altogether, precious supervision is
lost. We argue that every pixel matters and create datasets with variable-size
images, collected at their native resolutions. Taking advantage of this data is
challenging; high-resolution processing is costly, and current architectures
can only process fixed-resolution data. We introduce continuous-scale training,
a process that samples patches at random scales to train a new generator with
variable output resolutions. First, conditioning the generator on a target
scale allows us to generate higher resolution images than previously possible,
without adding layers to the model. Second, by conditioning on continuous
coordinates, we can sample patches that still obey a consistent global layout,
which also allows for scalable training at higher resolutions. Controlled FFHQ
experiments show our method takes advantage of the multi-resolution training
data better than discrete multi-scale approaches, achieving better FID scores
and cleaner high-frequency details. We also train on other natural image
domains including churches, mountains, and birds, and demonstrate arbitrary
scale synthesis with both coherent global layouts and realistic local details,
going beyond 2K resolution in our experiments. Our project page is available
at: https://chail.github.io/anyres-gan/.
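To make the training scheme concrete, the sketch below (assuming a PyTorch setup; it is not the authors' released code) samples a random scale and a grid of continuous patch coordinates in a shared global frame, then conditions a toy generator on the latent code, the target scale, and those coordinates. All module names, shapes, and scale ranges are illustrative assumptions; the project page above hosts the actual implementation.

```python
# A minimal sketch of continuous-scale patch sampling with scale- and
# coordinate-conditioned generation, as described in the abstract.
# Names, shapes, and ranges are illustrative assumptions.
import torch


def sample_patch_grid(batch, patch_size, min_scale=0.25, max_scale=1.0):
    """Sample a random scale and a continuous coordinate grid per patch.

    Coordinates live in [0, 1]^2 of one global image plane, so patches drawn
    at different scales still obey a single consistent global layout.
    """
    # Random scale per sample: fraction of the full image covered by the patch.
    scale = min_scale + (max_scale - min_scale) * torch.rand(batch, 1)
    # Random top-left corner such that the patch stays inside [0, 1]^2.
    offset = torch.rand(batch, 2) * (1.0 - scale)
    # Normalized positions within the patch, mapped to global coordinates.
    lin = torch.linspace(0.0, 1.0, patch_size)
    yy, xx = torch.meshgrid(lin, lin, indexing="ij")
    local = torch.stack([xx, yy], dim=-1)                       # (P, P, 2)
    coords = offset[:, None, None, :] + scale[:, None, None, :] * local
    return scale, coords                                        # (B,1), (B,P,P,2)


class CoordPatchGenerator(torch.nn.Module):
    """Toy scale- and coordinate-conditioned generator (an MLP over coordinates)."""

    def __init__(self, z_dim=64, hidden=128):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(z_dim + 1 + 2, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 3),
        )

    def forward(self, z, scale, coords):
        b, p, _, _ = coords.shape
        z = z[:, None, None, :].expand(b, p, p, z.shape[-1])
        s = scale[:, None, None, :].expand(b, p, p, 1)
        feat = torch.cat([z, s, coords], dim=-1)
        return self.net(feat).permute(0, 3, 1, 2)               # (B, 3, P, P)


if __name__ == "__main__":
    z = torch.randn(4, 64)
    scale, coords = sample_patch_grid(batch=4, patch_size=32)
    patches = CoordPatchGenerator()(z, scale, coords)
    print(patches.shape)  # torch.Size([4, 3, 32, 32])
```

Because every patch carries its global coordinates, patches sampled at different scales stay consistent with one global layout, which is what lets fixed-size patch training scale to high output resolutions.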
Related papers
- Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models [44.437693135170576]
We propose a new framework, LMM with Sophisticated Tasks, Local image compression, and Mixture of global Experts (SliME).
We extract contextual information from the global view using a mixture of adapters, based on the observation that different adapters excel at different tasks.
The proposed method achieves leading performance across various benchmarks with only 2 million training samples.
arXiv Detail & Related papers (2024-06-12T17:59:49Z)
- Learning Dual-Level Deformable Implicit Representation for Real-World Scale Arbitrary Super-Resolution [81.74583887661794]
We build a new real-world super-resolution benchmark with both integer and non-integer scaling factors.
We propose a Dual-level Deformable Implicit Representation (DDIR) to solve real-world scale arbitrary super-resolution.
Our trained model achieves state-of-the-art performance on the RealArbiSR and RealSR benchmarks for real-world scale arbitrary super-resolution.
arXiv Detail & Related papers (2024-03-16T13:44:42Z)
- Continuous Cross-resolution Remote Sensing Image Change Detection [28.466756872079472]
Real-world applications raise the need for cross-resolution change detection, i.e., change detection (CD) on bitemporal images with different spatial resolutions.
We propose scale-invariant learning to enforce consistent high-resolution (HR) predictions given synthesized samples with varying resolution differences (see the sketch after this entry).
Our method significantly outperforms several vanilla CD methods and two cross-resolution CD methods on three datasets.
arXiv Detail & Related papers (2023-05-24T04:57:24Z)
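As an illustration of the scale-invariant learning idea above, here is a minimal sketch, assuming a PyTorch setup: the second image of the bitemporal pair is synthetically degraded by a random resolution ratio, and the model's cross-resolution prediction is pulled toward its HR prediction. The toy model, loss form, and resampling choices are assumptions, not the paper's implementation.

```python
# A minimal sketch of a scale-invariance consistency term for cross-resolution
# change detection. `model` maps an image pair to a change logit map; all
# specifics here are illustrative assumptions.
import torch
import torch.nn.functional as F


def scale_invariance_loss(model, img_t1, img_t2, min_ratio=0.25, max_ratio=1.0):
    """Consistency between predictions on the HR pair and a synthesized cross-resolution pair."""
    _, _, h, w = img_t2.shape
    # Random resolution ratio for the second date, e.g. 0.25x .. 1.0x.
    ratio = float(torch.empty(1).uniform_(min_ratio, max_ratio))
    lr_size = (max(1, int(h * ratio)), max(1, int(w * ratio)))
    # Synthesize a low-resolution t2 image, then bring it back to the HR grid.
    img_t2_lr = F.interpolate(img_t2, size=lr_size, mode="bilinear", align_corners=False)
    img_t2_syn = F.interpolate(img_t2_lr, size=(h, w), mode="bilinear", align_corners=False)
    # The HR prediction acts as a detached target for the cross-resolution prediction.
    with torch.no_grad():
        pred_hr = model(img_t1, img_t2)
    pred_syn = model(img_t1, img_t2_syn)
    return F.mse_loss(pred_syn, pred_hr)


class TinyCDNet(torch.nn.Module):
    """Toy change-detection model, included only to make the sketch runnable."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Conv2d(6, 1, kernel_size=3, padding=1)

    def forward(self, a, b):
        return self.net(torch.cat([a, b], dim=1))


if __name__ == "__main__":
    model = TinyCDNet()
    t1, t2 = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
    print(scale_invariance_loss(model, t1, t2).item())
```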
- Learning Enriched Features for Fast Image Restoration and Enhancement [166.17296369600774]
The architecture maintains spatially precise high-resolution representations through the entire network.
We learn an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
Our approach achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.
arXiv Detail & Related papers (2022-04-19T17:59:45Z)
- Resolution based Feature Distillation for Cross Resolution Person Re-Identification [17.86505685442293]
Person re-identification (re-id) aims to retrieve images of the same identity across different camera views.
Resolution mismatch occurs due to varying distances between the person of interest and the cameras.
We propose a Resolution based Feature Distillation (RFD) approach to overcome the problem of multiple resolutions.
arXiv Detail & Related papers (2021-09-16T11:07:59Z)
- InfinityGAN: Towards Infinite-Resolution Image Synthesis [92.40782797030977]
We present InfinityGAN, a method to generate arbitrary-resolution images.
We show how it trains and infers patch-by-patch seamlessly with low computational resources.
arXiv Detail & Related papers (2021-04-08T17:59:30Z)
- PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models [77.32079593577821]
PULSE (Photo Upsampling via Latent Space Exploration) generates high-resolution, realistic images at resolutions previously unseen in the literature.
Our method outperforms state-of-the-art methods in perceptual quality at higher resolutions and scale factors than previously possible.
arXiv Detail & Related papers (2020-03-08T16:44:31Z)
- Gated Fusion Network for Degraded Image Super Resolution [78.67168802945069]
We propose a dual-branch convolutional neural network to extract base features and recovered features separately.
By decomposing the feature extraction step into two task-independent streams, the dual-branch model can facilitate the training process (see the sketch below).
arXiv Detail & Related papers (2020-03-02T13:28:32Z)
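Below is a minimal sketch of such a dual-branch, gated-fusion layout, assuming PyTorch; the branch depths, gate design, and x2 pixel-shuffle head are illustrative assumptions rather than the paper's architecture.

```python
# A toy dual-branch super-resolution model: one branch extracts base features
# from the degraded input, another extracts "recovered" features, and a learned
# gate blends the two streams before a simple x2 upsampling head.
import torch
import torch.nn as nn


class GatedDualBranchSR(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        # Two task-independent feature streams.
        self.base_branch = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
                                         nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.recover_branch = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
                                            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        # Gate predicts per-pixel blending weights from both streams.
        self.gate = nn.Sequential(nn.Conv2d(2 * ch, ch, 3, padding=1), nn.Sigmoid())
        # Simple x2 upsampling head (pixel shuffle).
        self.head = nn.Sequential(nn.Conv2d(ch, 3 * 4, 3, padding=1), nn.PixelShuffle(2))

    def forward(self, x):
        base = self.base_branch(x)
        recovered = self.recover_branch(x)
        g = self.gate(torch.cat([base, recovered], dim=1))
        fused = g * base + (1.0 - g) * recovered   # gated fusion of the two streams
        return self.head(fused)


if __name__ == "__main__":
    sr = GatedDualBranchSR()(torch.rand(1, 3, 32, 32))
    print(sr.shape)  # torch.Size([1, 3, 64, 64])
```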
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.