Block Shuffle: A Method for High-resolution Fast Style Transfer with Limited Memory
- URL: http://arxiv.org/abs/2008.03706v1
- Date: Sun, 9 Aug 2020 10:33:21 GMT
- Title: Block Shuffle: A Method for High-resolution Fast Style Transfer with Limited Memory
- Authors: Weifeng Ma, Zhe Chen, Caoting Ji
- Abstract summary: Fast Style Transfer is a family of Neural Style Transfer algorithms that use feed-forward neural networks to render input images.
Because of the high dimension of the output layer, these networks require a large amount of memory for computation.
We propose a novel image synthesis method named \emph{block shuffle}, which converts a single task with high memory consumption into multiple subtasks with low memory consumption.
- Score: 4.511923587827301
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fast Style Transfer is a family of Neural Style Transfer algorithms
that use feed-forward neural networks to render input images. Because of the
high dimension of the output layer, these networks require a large amount of
memory for computation. Therefore, most mobile devices and personal computers
cannot stylize high-resolution images, which greatly limits the application
scenarios of Fast Style Transfer. At present, the two existing solutions are
purchasing more memory and using a feathering-based method, but the former
incurs additional cost and the latter yields poor image quality. To solve this
problem, we propose a novel image synthesis method named \emph{block shuffle},
which converts a single task with high memory consumption into multiple
subtasks with low memory consumption. This method can act as a plug-in for
Fast Style Transfer without any modification to the network architecture. We
use the most popular Fast Style Transfer repository on GitHub as the baseline.
Experiments show that the quality of high-resolution images generated by our
method is better than that of the feathering-based method. Although our method
is an order of magnitude slower than the baseline, it can stylize
high-resolution images with limited memory, which is impossible with the
baseline. The code and models will be made available at
\url{https://github.com/czczup/block-shuffle}.
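The abstract describes the mechanism only at a high level. As a rough illustration of the general idea, the minimal sketch below splits a large image into overlapping blocks, runs an arbitrary feed-forward stylization function on one block at a time (so peak memory is bounded by the block size), and blends the results with linear feathering weights. All names (`blockwise_stylize`, `feather_weights`, `positions`) and the block/overlap values are illustrative assumptions; this sketch is closer in spirit to the feathering-based baseline the abstract criticizes than to the authors' full block shuffle algorithm, for which the linked repository is the authoritative reference.

```python
import numpy as np

def positions(total, block, step):
    """Top-left offsets that cover [0, total) with stride `step`,
    clamped so the last block ends exactly at the image border."""
    if total <= block:
        return [0]
    offsets = list(range(0, total - block, step))
    offsets.append(total - block)
    return offsets

def feather_weights(h, w, overlap):
    """2-D blending weights that ramp linearly up to 1 over an
    `overlap`-pixel band at each block border (simple feathering)."""
    ramp_y = np.minimum(1.0, np.minimum(np.arange(h) + 1, h - np.arange(h)) / overlap)
    ramp_x = np.minimum(1.0, np.minimum(np.arange(w) + 1, w - np.arange(w)) / overlap)
    return np.outer(ramp_y, ramp_x)

def blockwise_stylize(image, stylize, block=512, overlap=64):
    """Stylize a large H x W x 3 image one small block at a time, so the
    network never sees more than `block` x `block` pixels per pass, then
    blend the overlapping stylized blocks with feathering weights.

    `stylize` is any function mapping an RGB array to a stylized RGB
    array of the same shape (e.g. a pretrained feed-forward network)."""
    h, w, _ = image.shape
    out = np.zeros(image.shape, dtype=np.float64)
    acc = np.zeros((h, w, 1), dtype=np.float64)  # accumulated blend weights
    step = block - overlap
    for y in positions(h, block, step):
        for x in positions(w, block, step):
            tile = image[y:y + block, x:x + block]
            th, tw = tile.shape[:2]
            wgt = feather_weights(th, tw, overlap)[..., None]
            out[y:y + th, x:x + tw] += stylize(tile) * wgt
            acc[y:y + th, x:x + tw] += wgt
    return out / acc  # weights are strictly positive everywhere

# Stand-in demo: a 2048 x 2048 image and a dummy "style network".
if __name__ == "__main__":
    img = np.random.rand(2048, 2048, 3)
    result = blockwise_stylize(img, stylize=lambda t: 1.0 - t)
    assert result.shape == img.shape
```

Blending with strictly positive weights keeps the final division well defined. The trade-off, as the abstract notes for feathering, is that style statistics can differ from block to block, which is precisely the artifact the authors' block shuffle method is designed to address.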
Related papers
- PixelShuffler: A Simple Image Translation Through Pixel Rearrangement [0.0]
Style transfer is a widely researched application of image-to-image translation, where the goal is to synthesize an image that combines the content of one image with the style of another.
Existing state-of-the-art methods often rely on complex neural networks, including diffusion models and language models, to achieve high-quality style transfer.
We propose a novel pixel shuffle method that addresses image-to-image translation in general, with style transfer as a demonstrative application.
arXiv Detail & Related papers (2024-10-03T22:08:41Z)
- Hyper-VolTran: Fast and Generalizable One-Shot Image to 3D Object Structure via HyperNetworks [53.67497327319569]
We introduce a novel neural rendering technique to solve image-to-3D from a single view.
Our approach employs the signed distance function as the surface representation and incorporates generalizable priors through geometry-encoding volumes and HyperNetworks.
Our experiments show the advantages of our proposed approach with consistent results and rapid generation.
arXiv Detail & Related papers (2023-12-24T08:42:37Z)
- Cache Me if You Can: Accelerating Diffusion Models through Block Caching [67.54820800003375]
A large image-to-image network has to be applied many times to iteratively refine an image from random noise.
We investigate the behavior of the layers within the network and find that 1) the layers' output changes smoothly over time, 2) the layers show distinct patterns of change, and 3) the change from step to step is often very small.
We propose a technique to automatically determine caching schedules based on each block's changes over timesteps.
arXiv Detail & Related papers (2023-12-06T00:51:38Z)
- CoordFill: Efficient High-Resolution Image Inpainting via Parameterized Coordinate Querying [52.91778151771145]
This paper breaks these limitations for the first time by leveraging recent developments in continuous implicit representation.
Experiments show that the proposed method achieves real-time performance on 2048$\times$2048 images using a single GTX 2080 Ti GPU.
arXiv Detail & Related papers (2023-03-15T11:13:51Z)
- Scaling Painting Style Transfer [10.059627473725508]
Neural style transfer (NST) is a technique that produces an unprecedentedly rich style transfer from a style image to a content image.
This paper presents a method for solving the original global optimization for ultra-high-resolution (UHR) images.
We show that our method produces style transfer of unmatched quality for such high-resolution painting styles.
arXiv Detail & Related papers (2022-12-27T12:03:38Z)
- Saliency Constrained Arbitrary Image Style Transfer using SIFT and DCNN [22.57205921266602]
When common neural style transfer methods are used, the textures and colors in the style image are usually transferred imperfectly to the content image.
This paper proposes a novel saliency constrained method to reduce or avoid such effects.
The experiments show that the saliency maps of source images can help find the correct matching and avoid artifacts.
arXiv Detail & Related papers (2022-01-14T09:00:55Z)
- Drafting and Revision: Laplacian Pyramid Network for Fast High-Quality Artistic Style Transfer [115.13853805292679]
Artistic style transfer aims at migrating the style from an example image to a content image.
Inspired by the common painting process of drawing a draft and revising the details, we introduce a novel feed-forward method named Laplacian Pyramid Network (LapStyle).
Our method can synthesize high quality stylized images in real time, where holistic style patterns are properly transferred.
arXiv Detail & Related papers (2021-04-12T11:53:53Z)
- Towards Ultra-Resolution Neural Style Transfer via Thumbnail Instance Normalization [42.84367334160332]
We present an extremely simple Ultra-Resolution Style Transfer framework, termed URST, to flexibly process arbitrary high-resolution images.
Most of the existing state-of-the-art methods would fall short due to massive memory cost and small stroke size when processing ultra-high resolution images.
arXiv Detail & Related papers (2021-03-22T12:54:01Z)
- Spatially-Adaptive Pixelwise Networks for Fast Image Translation [57.359250882770525]
We introduce a new generator architecture, aimed at fast and efficient high-resolution image-to-image translation.
We use pixel-wise networks; that is, each pixel is processed independently of others.
Our model is up to 18x faster than state-of-the-art baselines.
arXiv Detail & Related papers (2020-12-05T10:02:03Z)
- Real-time Universal Style Transfer on High-resolution Images via Zero-channel Pruning [74.09149955786367]
ArtNet can achieve universal, real-time, and high-quality style transfer on high-resolution images simultaneously.
By using ArtNet and S2, our method is 2.3 to 107.4 times faster than state-of-the-art approaches.
arXiv Detail & Related papers (2020-06-16T09:50:14Z)