Interleaving GANs with knowledge graphs to support design creativity for book covers
- URL: http://arxiv.org/abs/2308.01626v1
- Date: Thu, 3 Aug 2023 08:56:56 GMT
- Title: Interleaving GANs with knowledge graphs to support design creativity for book covers
- Authors: Alexandru Motogna, Adrian Groza
- Abstract summary: We apply Generative Adversarial Networks (GANs) to the book covers domain.
We interleave GANs with knowledge graphs to alter the input title to obtain multiple possible options for any given title.
Finally, we use the discriminator obtained during the training phase to select the best images generated with new titles.
- Score: 77.34726150561087
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: An attractive book cover is important for the success of a book. In this
paper, we apply Generative Adversarial Networks (GANs) to the book covers
domain, using different training methods to obtain better
generated images. We interleave GANs with knowledge graphs to alter the input
title to obtain multiple possible options for any given title, which are then
used as an augmented input to the generator. Finally, we use the discriminator
obtained during the training phase to select the best images generated with new
titles. Our method performed better at generating book covers than previous
attempts, and the knowledge graph gives better options to the book author or
editor compared to using GANs alone.
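The pipeline the abstract describes (alter the title via a knowledge graph, feed the variants to the generator, then rank candidates with the trained discriminator) can be sketched as follows. This is a minimal illustration with placeholder stand-ins: the function names and the toy substitution table are hypothetical, not the authors' actual API, and a real system would query a knowledge graph and run trained GAN networks instead.

```python
def title_variants(title):
    """Stand-in for the knowledge-graph step: return alternative titles
    obtained by substituting related concepts for words in the title.
    A real system would query a knowledge graph for neighbouring concepts;
    here a tiny hand-made table fakes that lookup."""
    related = {"ocean": ["sea", "deep blue"]}
    variants = [title]
    for word, subs in related.items():
        if word in title:
            variants += [title.replace(word, s) for s in subs]
    return variants

def generate_cover(title, seed):
    """Stand-in for the GAN generator: maps (title, noise seed) -> image."""
    return {"title": title, "seed": seed}  # placeholder "image"

def discriminator_score(image):
    """Stand-in for the trained discriminator's realism score."""
    return len(image["title"]) % 7 + image["seed"]  # arbitrary placeholder

def best_cover(title, n_seeds=3):
    """Generate covers for every title variant and keep the one the
    discriminator scores highest, as the abstract describes."""
    candidates = [
        generate_cover(t, s)
        for t in title_variants(title)
        for s in range(n_seeds)
    ]
    return max(candidates, key=discriminator_score)

cover = best_cover("the ocean at night")
```

The key design point is that the discriminator, normally discarded after GAN training, is reused here as a free quality filter over the augmented candidate set.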
Related papers
- Neural Cover Selection for Image Steganography [7.7961128660417325]
In steganography, selecting an optimal cover image, referred to as cover selection, is pivotal for effective message concealment.
Inspired by recent advancements in generative models, we introduce a novel cover selection framework.
Our method shows significant advantages in message recovery and image quality.
arXiv Detail & Related papers (2024-10-23T18:32:34Z)
- Conditional Score Guidance for Text-Driven Image-to-Image Translation [52.73564644268749]
We present a novel algorithm for text-driven image-to-image translation based on a pretrained text-to-image diffusion model.
Our method aims to generate a target image by selectively editing the regions of interest in a source image.
arXiv Detail & Related papers (2023-05-29T10:48:34Z)
- Book Cover Synthesis from the Summary [0.0]
We explore ways to produce a book cover with artificial intelligence, exploiting the relationship between a book's summary and its cover.
We construct a dataset of English books that contains a large number of samples of summaries of existing books and their cover images.
We apply different text-to-image synthesis techniques to generate book covers from the summary and exhibit the results in this paper.
arXiv Detail & Related papers (2022-11-03T20:43:40Z)
- Towards Diverse and Faithful One-shot Adaption of Generative Adversarial Networks [54.80435295622583]
One-shot generative domain adaptation aims to transfer a generator pre-trained on one domain to a new domain using only one reference image.
We present a novel one-shot generative domain adaptation method, i.e., DiFa, for diverse generation and faithful adaptation.
arXiv Detail & Related papers (2022-07-18T16:29:41Z)
- Font Completion and Manipulation by Cycling Between Multi-Modality Representations [113.26243126754704]
We innovate to explore the generation of font glyphs as 2D graphic objects with the graph as an intermediate representation.
We formulate a cross-modality cycled image-to-image structure with a graph between an image encoder and an image renderer.
Our model generates better results than both the image-to-image baseline and previous state-of-the-art methods for glyph completion.
arXiv Detail & Related papers (2021-08-30T02:43:29Z)
- Towards Book Cover Design via Layout Graphs [18.028269880425455]
We propose a generative neural network that can produce book covers based on an easy-to-use layout graph.
The layout graph contains objects such as text, natural scene objects, and solid color spaces.
arXiv Detail & Related papers (2021-05-24T04:28:35Z)
- Font Style that Fits an Image -- Font Generation Based on Image Context [7.646713951724013]
We propose a method of generating a book title image based on its context within a book cover.
We propose an end-to-end neural network that inputs the book cover, a target location mask, and a desired book title and outputs stylized text suitable for the cover.
We demonstrate that the proposed method can effectively produce desirable and appropriate book cover text through quantitative and qualitative results.
arXiv Detail & Related papers (2021-05-19T01:53:04Z)
- Directional GAN: A Novel Conditioning Strategy for Generative Networks [0.0]
We propose a simple and novel conditioning strategy which allows generation of images conditioned on given semantic attributes.
Our approach is based on modifying latent vectors, using directional vectors of relevant semantic attributes in latent space.
We show the applicability of our proposed approach, named Directional GAN, on multiple public datasets, with an average accuracy of 86.4%.
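The latent-space operation this summary describes, shifting a latent vector along the direction associated with a semantic attribute, can be sketched in a few lines. This is an assumed, simplified reading of the summary, not the paper's actual implementation; `move_along_direction` and the example vectors are illustrative.

```python
import math

def move_along_direction(z, direction, alpha):
    """Return z + alpha * unit(direction): a latent-space edit that moves
    the sample toward the semantic attribute the direction encodes."""
    norm = math.sqrt(sum(d * d for d in direction))
    unit = [d / norm for d in direction]
    return [zi + alpha * ui for zi, ui in zip(z, unit)]

# Shift a 3-D latent code along the first axis with strength alpha = 2.0.
z = [0.5, -1.0, 0.25]
z_edited = move_along_direction(z, direction=[1.0, 0.0, 0.0], alpha=2.0)
```

In a full system the edited vector would then be decoded by the generator, so the attribute change appears in the output image while other content is preserved.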
arXiv Detail & Related papers (2021-05-12T15:02:41Z)
- Semantic Segmentation with Generative Models: Semi-Supervised Learning and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z)
- Graph Edit Distance Reward: Learning to Edit Scene Graph [69.39048809061714]
We propose a new method to edit a scene graph according to user instructions, a task that has not been explored before.
Specifically, to learn to edit scene graphs according to the semantics given by texts, we propose a Graph Edit Distance Reward.
In the context of text-editing image retrieval, we validate the effectiveness of our method on the CSS and CRIR datasets.
arXiv Detail & Related papers (2020-08-15T04:52:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.