Generating Embroidery Patterns Using Image-to-Image Translation
- URL: http://arxiv.org/abs/2003.02909v1
- Date: Thu, 5 Mar 2020 20:32:40 GMT
- Authors: Mohammad Akif Beg and Jia Yuan Yu
- Abstract summary: We propose two machine learning techniques to solve the embroidery image-to-image translation problem.
Our goal is to generate, from a user-uploaded image, a preview image that looks similar to an embroidered version of it.
Empirical results show that these techniques successfully generate an approximate preview of an embroidered version of a user image.
- Score: 2.055949720959582
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In many scenarios in computer vision, machine learning, and computer
graphics, there is a need to learn a mapping from an image in one domain to an
image in another domain, a problem called image-to-image translation. Examples
include style transfer, object transfiguration, visually altering the weather
conditions in an image, turning a day image into a night image or vice versa,
and photo enhancement. In this paper, we propose two machine learning
techniques to solve the embroidery image-to-image translation problem. Our goal
is to generate, from a user-uploaded image, a preview image that looks similar
to an embroidered version of it. Our techniques are modifications of two
existing methods: neural style transfer and the cycle-consistent
generative adversarial network. Neural style transfer renders the semantic
content of an image from one domain in the style of an image from another
domain, whereas a cycle-consistent generative adversarial network learns the
mapping from an input image to an output image without any paired training
data, and also learns a loss function to train this mapping. Furthermore, the
techniques we propose are independent of any embroidery attributes, such as the
elevation of the image, the light source, the start and end points of a stitch,
the type of stitch used, and the fabric type. Given a user image, our
techniques can generate a preview image that looks similar to an embroidered
image. We train and test our proposed techniques on an embroidery dataset
consisting of simple 2D images. To do so, we prepare an unpaired embroidery
dataset of more than 8,000 user-uploaded images along with embroidered images.
Empirical results show that these techniques successfully generate an
approximate preview of an embroidered version of a user image, which can help
users in decision making.
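The cycle-consistency idea behind the second technique can be sketched in a few lines. The toy "generators" below (hypothetical functions G and F, standing in for the photo-to-embroidery and embroidery-to-photo networks) and the toy image vectors are illustrative assumptions, not the paper's actual models:

```python
# Toy sketch of the cycle-consistency loss used by cycle-consistent GANs.
# Images are stand-in lists of floats; real models use convolutional nets.

def G(x):
    # Hypothetical generator: user-photo domain -> embroidery domain.
    return [v * 2.0 for v in x]

def F(y):
    # Hypothetical inverse generator: embroidery domain -> user-photo domain.
    return [v / 2.0 for v in y]

def l1(a, b):
    # Mean absolute difference between two "images".
    return sum(abs(u - v) for u, v in zip(a, b)) / len(a)

def cycle_consistency_loss(x, y):
    # || F(G(x)) - x ||_1 + || G(F(y)) - y ||_1
    return l1(F(G(x)), x) + l1(G(F(y)), y)

photo = [0.2, 0.5, 0.9]    # toy "user image"
stitch = [0.4, 1.0, 1.8]   # toy "embroidered image"
loss = cycle_consistency_loss(photo, stitch)  # 0.0: G and F are exact inverses
```

Because G and F here invert each other exactly, the loss is zero; during training, this term pushes the learned generators toward mutually consistent mappings even without paired examples.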
Related papers
- Conditional Diffusion on Web-Scale Image Pairs leads to Diverse Image Variations [32.892042877725125]
Current image variation techniques involve adapting a text-to-image model to reconstruct an input image conditioned on the same image.
We show that a diffusion model trained to reconstruct an input image from frozen embeddings, can reconstruct the image with minor variations.
We propose a new pretraining strategy to generate image variations using a large collection of image pairs.
arXiv Detail & Related papers (2024-05-23T17:58:03Z)
- Artistic Arbitrary Style Transfer [1.1279808969568252]
Arbitrary Style Transfer is a technique used to produce a new image from two images: a content image, and a style image.
Balancing the structure and style components has been the major challenge that other state-of-the-art algorithms have tried to solve.
In this work, we address this trade-off with a deep learning approach based on convolutional neural networks.
arXiv Detail & Related papers (2022-12-21T21:34:00Z)
- cGANs for Cartoon to Real-life Images [0.4724825031148411]
The project aims to evaluate the robustness of the Pix2Pix model by applying it to datasets consisting of cartoonized images.
It should be possible to train the network to generate real-life images from the cartoonized images.
arXiv Detail & Related papers (2021-01-24T20:26:31Z)
- Retrieval Guided Unsupervised Multi-domain Image-to-Image Translation [59.73535607392732]
Image-to-image translation aims to learn a mapping that transforms an image from one visual domain to another.
We propose the use of an image retrieval system to assist the image-to-image translation task.
arXiv Detail & Related papers (2020-08-11T20:11:53Z)
- Text as Neural Operator: Image Manipulation by Text Instruction [68.53181621741632]
In this paper, we study a setting that allows users to edit an image with multiple objects using complex text instructions to add, remove, or change the objects.
The inputs of the task are multimodal including (1) a reference image and (2) an instruction in natural language that describes desired modifications to the image.
We show that the proposed model performs favorably against recent strong baselines on three public datasets.
arXiv Detail & Related papers (2020-08-11T07:07:10Z)
- Contrastive Learning for Unpaired Image-to-Image Translation [64.47477071705866]
In image-to-image translation, each patch in the output should reflect the content of the corresponding patch in the input, independent of domain.
We propose a framework based on contrastive learning to maximize mutual information between the two.
We demonstrate that our framework enables one-sided translation in the unpaired image-to-image translation setting, while improving quality and reducing training time.
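The patchwise contrastive objective described here can be sketched with a toy InfoNCE loss. The feature vectors below are hypothetical stand-ins for encoder features of image patches, not the paper's actual architecture:

```python
import math

def dot(a, b):
    # Inner product of two feature vectors.
    return sum(x * y for x, y in zip(a, b))

def info_nce(query, positive, negatives, tau=0.07):
    # InfoNCE: -log( exp(q.k+/tau) / (exp(q.k+/tau) + sum_j exp(q.k-_j/tau)) )
    pos = math.exp(dot(query, positive) / tau)
    denom = pos + sum(math.exp(dot(query, n) / tau) for n in negatives)
    return -math.log(pos / denom)

q = [1.0, 0.0]                       # feature of an output patch
k_pos = [1.0, 0.0]                   # feature of the corresponding input patch
k_negs = [[0.0, 1.0], [-1.0, 0.0]]   # features of other input patches
loss_matched = info_nce(q, k_pos, k_negs)
loss_mismatched = info_nce(q, k_negs[0], [k_pos, k_negs[1]])
```

Minimizing this loss pulls each output patch's feature toward the feature of the input patch at the same location while pushing it away from other patches, which is one way to maximize the mutual information the summary mentions.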
arXiv Detail & Related papers (2020-07-30T17:59:58Z)
- Domain Adaptation for Image Dehazing [72.15994735131835]
Most existing methods train a dehazing model on synthetic hazy images, which generalizes poorly to real hazy images due to domain shift.
We propose a domain adaptation paradigm, which consists of an image translation module and two image dehazing modules.
Experimental results on both synthetic and real-world images demonstrate that our model performs favorably against the state-of-the-art dehazing algorithms.
arXiv Detail & Related papers (2020-05-10T13:54:56Z)
- Semantic Image Manipulation Using Scene Graphs [105.03614132953285]
We introduce a semantic scene graph network that does not require direct supervision for constellation changes or image edits.
This makes it possible to train the system from existing real-world datasets with no additional annotation effort.
arXiv Detail & Related papers (2020-04-07T20:02:49Z)
- Structural-analogy from a Single Image Pair [118.61885732829117]
In this paper, we explore the capabilities of neural networks to understand image structure given only a single pair of images, A and B.
We generate an image that keeps the appearance and style of B, but has a structural arrangement that corresponds to A.
Our method can be used to generate high quality imagery in other conditional generation tasks utilizing images A and B only.
arXiv Detail & Related papers (2020-04-05T14:51:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including the summaries above) and is not responsible for any consequences of its use.