AutoPoster: A Highly Automatic and Content-aware Design System for
Advertising Poster Generation
- URL: http://arxiv.org/abs/2308.01095v2
- Date: Wed, 23 Aug 2023 06:26:56 GMT
- Title: AutoPoster: A Highly Automatic and Content-aware Design System for
Advertising Poster Generation
- Authors: Jinpeng Lin, Min Zhou, Ye Ma, Yifan Gao, Chenxi Fei, Yangjian Chen,
Zhang Yu, Tiezheng Ge
- Abstract summary: This paper introduces AutoPoster, a highly automatic and content-aware system for generating advertising posters.
With only product images and titles as inputs, AutoPoster can automatically produce posters of varying sizes through four key stages.
We propose the first poster generation dataset that includes visual attribute annotations for over 76k posters.
- Score: 14.20790443380675
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advertising posters, a form of information presentation, combine visual and
linguistic modalities. Creating a poster involves multiple steps and
necessitates design experience and creativity. This paper introduces
AutoPoster, a highly automatic and content-aware system for generating
advertising posters. With only product images and titles as inputs, AutoPoster
can automatically produce posters of varying sizes through four key stages:
image cleaning and retargeting, layout generation, tagline generation, and
style attribute prediction. To ensure visual harmony of posters, two
content-aware models are incorporated for layout and tagline generation.
Moreover, we propose a novel multi-task Style Attribute Predictor (SAP) to
jointly predict visual style attributes. Furthermore, to our knowledge, we
propose the first poster generation dataset that includes visual attribute
annotations for over 76k posters. Qualitative and quantitative outcomes from
user studies and experiments substantiate the efficacy of our system and the
aesthetic superiority of the generated posters compared to other poster
generation methods.
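The four stages named in the abstract form a simple sequential pipeline: image cleaning and retargeting, content-aware layout generation, tagline generation, and style attribute prediction via SAP. The Python sketch below illustrates only that data flow; every function, class, and default value in it is a hypothetical placeholder, since the paper's stages are learned models whose interfaces are not described here.

```python
# A minimal, illustrative sketch of the four-stage pipeline from the abstract.
# Every function below is a hypothetical stand-in for a learned model in the
# real system; only the overall stage order is taken from the paper.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height) on the poster canvas


@dataclass
class TextElement:
    text: str                                   # tagline from stage 3
    box: Box                                    # region from stage 2
    style: Dict = field(default_factory=dict)   # attributes from SAP (stage 4)


@dataclass
class Poster:
    size: Tuple[int, int]
    elements: List[TextElement]


def clean_and_retarget(image: str, size: Tuple[int, int]) -> str:
    return image  # placeholder: the real stage removes clutter and resizes


def generate_layout(canvas: str, n: int) -> List[Box]:
    return [(40, 40 + 140 * i, 400, 100) for i in range(n)]  # placeholder boxes


def generate_taglines(title: str, canvas: str, n: int) -> List[str]:
    return [title] + [f"Tagline {i}" for i in range(1, n)]  # placeholder text


def predict_style(canvas: str, box: Box) -> Dict:
    return {"font": "sans", "color": (255, 255, 255)}  # placeholder SAP output


def autoposter(product_image: str, title: str,
               size: Tuple[int, int] = (750, 1000)) -> Poster:
    """Stage order from the abstract: cleaning/retargeting -> layout ->
    tagline generation -> style attribute prediction."""
    canvas = clean_and_retarget(product_image, size)
    boxes = generate_layout(canvas, n=2)
    texts = generate_taglines(title, canvas, n=len(boxes))
    elements = [TextElement(t, b, predict_style(canvas, b))
                for t, b in zip(texts, boxes)]
    return Poster(size=size, elements=elements)


if __name__ == "__main__":
    print(autoposter("product.jpg", "Wireless Headphones"))
```

In the described system, the content-aware models handle the layout and tagline stages, while SAP jointly fills in the style attributes for each text element.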
Related papers
- MPDS: A Movie Posters Dataset for Image Generation with Diffusion Model [26.361736240401594]
Movie posters are vital for captivating audiences, conveying themes, and driving market competition in the film industry.
Despite exciting progress in image generation, current models often fall short in producing satisfactory poster results.
We propose a Movie Posters DataSet (MPDS), tailored for text-to-image generation models to revolutionize poster production.
arXiv Detail & Related papers (2024-10-22T09:20:03Z)
- ImPoster: Text and Frequency Guidance for Subject Driven Action Personalization using Diffusion Models [55.43801602995778]
We present ImPoster, a novel algorithm for generating a target image of a 'source' subject performing a 'driving' action.
Our approach is completely unsupervised and does not require any access to additional annotations like keypoints or pose.
arXiv Detail & Related papers (2024-09-24T01:25:19Z)
- GlyphDraw2: Automatic Generation of Complex Glyph Posters with Diffusion Models and Large Language Models [7.5791485306093245]
We propose an automatic poster generation framework with text rendering capabilities leveraging LLMs.
This framework aims to create precise poster text within a detailed contextual background.
We introduce a high-resolution font dataset and a poster dataset with resolutions exceeding 1024 pixels.
arXiv Detail & Related papers (2024-07-02T13:17:49Z)
- PosterLLaVa: Constructing a Unified Multi-modal Layout Generator with LLM [58.67882997399021]
Our research introduces a unified framework for automated graphic layout generation.
Our data-driven method employs structured text (JSON format) and visual instruction tuning to generate layouts.
We conduct extensive experiments and achieve state-of-the-art (SOTA) performance on public multi-modal layout generation benchmarks.
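The summary above says layouts are represented as structured text in JSON and produced via visual instruction tuning. The sketch below shows what such a serialization could look like; the exact schema and field names are assumptions, not the paper's actual format.

```python
# Illustrative only: the JSON schema is an assumed stand-in. The point is that
# a layout can be serialized as structured text for a multimodal LLM to emit.
import json

layout = {
    "canvas": {"width": 750, "height": 1000},
    "elements": [
        {"type": "title",   "box": [60, 80, 630, 120]},
        {"type": "tagline", "box": [60, 220, 500, 80]},
        {"type": "logo",    "box": [560, 860, 140, 90]},
    ],
}

# An instruction-tuned multimodal LLM would see the poster image plus a request
# such as "return the layout as JSON"; its text reply is then parsed back into
# a structured object. The round trip below stands in for that reply.
predicted = json.loads(json.dumps(layout))
print(predicted["elements"][0]["box"])
```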
arXiv Detail & Related papers (2024-06-05T03:05:52Z)
- Planning and Rendering: Towards Product Poster Generation with Diffusion Models [21.45855580640437]
We propose a novel product poster generation framework based on diffusion models named P&R.
At the planning stage, we propose a PlanNet to generate the layout of the product and other visual components.
At the rendering stage, we propose a RenderNet to generate the background for the product while considering the generated layout.
Our method outperforms the state-of-the-art product poster generation methods on PPG30k.
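The two-stage flow described for P&R (PlanNet for planning, RenderNet for rendering) can be summarized as the sketch below. The interfaces and return values are hypothetical stand-ins; the actual networks are diffusion-based models trained on PPG30k.

```python
# Sketch of the planning-then-rendering flow. Both functions are trivial
# placeholders that only illustrate what each stage consumes and produces.
from typing import Dict, List, Tuple

Box = Tuple[int, int, int, int]


def plan_net(product_image: str, texts: List[str]) -> Dict[str, Box]:
    """Planning stage: decide where the product and other visual components sit."""
    return {"product": (200, 300, 350, 350), "tagline": (60, 80, 630, 120)}


def render_net(product_image: str, layout: Dict[str, Box]) -> str:
    """Rendering stage: synthesize a background that respects the planned layout,
    e.g. keeping the region behind the product clean."""
    return f"background conditioned on layout regions {sorted(layout)}"


layout = plan_net("product.jpg", ["Summer Sale"])
print(render_net("product.jpg", layout))
```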
arXiv Detail & Related papers (2023-12-14T11:11:50Z)
- TextPainter: Multimodal Text Image Generation with Visual-harmony and Text-comprehension for Poster Design [50.8682912032406]
This study introduces TextPainter, a novel multimodal approach to generate text images.
TextPainter takes the global-local background image as a hint of style and guides the text image generation with visual harmony.
We construct the PosterT80K dataset, consisting of about 80K posters annotated with sentence-level bounding boxes and text contents.
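Given the description of PosterT80K (about 80K posters annotated with sentence-level bounding boxes and text contents), one record could look like the example below; the field names are assumptions for illustration, not the dataset's actual schema.

```python
# Hypothetical shape of a single PosterT80K record: one poster image plus
# sentence-level text annotations, each with its bounding box.
record = {
    "poster_image": "posters/000123.jpg",
    "annotations": [
        {"text": "New Season, New Style", "bbox": [64, 90, 540, 72]},
        {"text": "Up to 50% off", "bbox": [64, 180, 380, 56]},
    ],
}
print(len(record["annotations"]), "text lines annotated on this poster")
```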
arXiv Detail & Related papers (2023-08-09T06:59:29Z)
- ProSpect: Prompt Spectrum for Attribute-Aware Personalization of Diffusion Models [77.03361270726944]
Current personalization methods can invert an object or concept into the textual conditioning space and compose new natural sentences for text-to-image diffusion models.
We propose a novel approach that leverages the step-by-step generation process of diffusion models, which generate images from low to high frequency information.
We apply ProSpect in various personalized attribute-aware image generation applications, such as image-guided or text-driven manipulations of materials, style, and layout.
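The core observation above is that diffusion models build images from low to high frequency content across denoising steps, so different attributes (layout, material, style) can be controlled by conditioning different stages on different prompts. The sketch below only illustrates that stage-to-prompt mapping; the stage boundaries and prompts are illustrative assumptions, not ProSpect's exact formulation.

```python
# Map ranges of denoising steps to different conditioning prompts: early steps
# shape coarse layout/content, late steps refine material and style details.
NUM_STEPS = 50
stage_prompts = [
    (range(0, 17),  "a handbag, centered, wide shot"),        # early: layout/content
    (range(17, 34), "a leather handbag"),                      # middle: material
    (range(34, 50), "a leather handbag, watercolor style"),    # late: fine style
]


def prompt_for_step(t: int) -> str:
    """Pick the conditioning text to use at denoising step t."""
    for steps, prompt in stage_prompts:
        if t in steps:
            return prompt
    raise ValueError(f"step {t} out of range")


for t in (0, 25, 49):
    print(t, "->", prompt_for_step(t))
```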
arXiv Detail & Related papers (2023-05-25T16:32:01Z)
- PosterLayout: A New Benchmark and Approach for Content-aware Visual-Textual Presentation Layout [62.12447593298437]
Content-aware visual-textual presentation layout aims at arranging pre-defined elements within the spatial space of a given canvas.
We propose design sequence formation (DSF) that reorganizes elements in layouts to imitate the design processes of human designers.
A novel CNN-LSTM-based conditional generative adversarial network (GAN) is presented to generate proper layouts.
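Design sequence formation (DSF) treats a layout as an ordered sequence of elements rather than an unordered set, so that a sequence model (here, the CNN-LSTM-based conditional GAN) can generate it step by step. The ordering rule in the sketch below is an assumed stand-in for the paper's actual procedure and is included only to make the sequence idea concrete.

```python
# Represent a layout as an ordered sequence of (category, box) elements.
# The ordering heuristic (supporting underlays first, then remaining elements
# top-to-bottom) is an illustrative assumption, not the paper's DSF algorithm.
from typing import List, Tuple

Element = Tuple[str, Tuple[int, int, int, int]]  # (category, (x, y, w, h))


def design_sequence(elements: List[Element]) -> List[Element]:
    underlays = [e for e in elements if e[0] == "underlay"]
    others = [e for e in elements if e[0] != "underlay"]
    return underlays + sorted(others, key=lambda e: e[1][1])


layout = [("text", (60, 400, 500, 80)),
          ("logo", (560, 40, 140, 90)),
          ("underlay", (40, 380, 560, 140)),
          ("text", (60, 80, 630, 120))]
for category, box in design_sequence(layout):
    print(category, box)
```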
arXiv Detail & Related papers (2023-03-28T12:48:36Z)
- Unsupervised Domain Adaption with Pixel-level Discriminator for Image-aware Layout Generation [24.625282719753915]
This paper focuses on using a GAN-based model conditioned on image contents to generate advertising poster graphic layouts.
It combines unsupervised domain adaptation techniques to design a GAN with a novel pixel-level discriminator (PD), called PDA-GAN, which generates graphic layouts according to image contents.
Both quantitative and qualitative evaluations demonstrate that PDA-GAN achieves state-of-the-art performance.
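A pixel-level discriminator scores every spatial location instead of producing a single real/fake verdict for the whole image. The sketch below shows that generic idea; it is not the PDA-GAN architecture from the paper, only a minimal fully convolutional example of per-pixel adversarial feedback.

```python
# Minimal fully convolutional discriminator: outputs one real/fake logit per
# pixel, conditioned on the image plus a rendered layout mask (channel sizes
# and depths are illustrative choices, not the paper's).
import torch
import torch.nn as nn


class PixelDiscriminator(nn.Module):
    def __init__(self, in_channels: int = 3 + 1):  # RGB image + 1-channel layout mask
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, kernel_size=1),  # per-pixel logit
        )

    def forward(self, image: torch.Tensor, layout_mask: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([image, layout_mask], dim=1))


d = PixelDiscriminator()
scores = d(torch.randn(1, 3, 128, 128), torch.rand(1, 1, 128, 128))
print(scores.shape)  # torch.Size([1, 1, 128, 128]): a score map, not a scalar
```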
arXiv Detail & Related papers (2023-03-25T06:50:22Z)
- Text2Poster: Laying out Stylized Texts on Retrieved Images [32.466518932018175]
Poster generation is a significant task for a wide range of applications; it is often time-consuming and requires extensive manual editing and artistic experience.
We propose a novel data-driven framework, called Text2Poster, to automatically generate visually effective posters from textual information.
arXiv Detail & Related papers (2023-01-06T04:06:23Z)
- Improving Generation and Evaluation of Visual Stories via Semantic Consistency [72.00815192668193]
Given a series of natural language captions, an agent must generate a sequence of images that correspond to the captions.
Prior work has introduced recurrent generative models which outperform text-to-image synthesis models on this task.
We present a number of improvements to prior modeling approaches, including the addition of a dual learning framework.
arXiv Detail & Related papers (2021-05-20T20:42:42Z)