Covering the News with (AI) Style
- URL: http://arxiv.org/abs/2002.02369v1
- Date: Sun, 5 Jan 2020 22:57:51 GMT
- Title: Covering the News with (AI) Style
- Authors: Michele Merler, Cicero Nogueira dos Santos, Mauro Martino, Alfio M.
Gliozzo, John R. Smith
- Abstract summary: We introduce a multi-modal discriminative and generative framework capable of assisting humans in producing visual content related to a given theme.
Motivated by a request from The New York Times (NYT) seeking help to use AI to create art for their special section on Artificial Intelligence, we demonstrated the application of our system in producing such an image.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a multi-modal discriminative and generative framework capable
of assisting humans in producing visual content related to a given theme,
starting from a collection of documents (textual, visual, or both). This
framework can be used by editors to generate images for articles, as well as
covers for books or music albums. Motivated by a request from The New York Times
(NYT) seeking help to use AI to create art for their special section on
Artificial Intelligence, we demonstrated the application of our system in
producing such an image.
Related papers
- A Survey of Multimodal-Guided Image Editing with Text-to-Image Diffusion Models [117.77807994397784]
Image editing aims to edit the given synthetic or real image to meet the specific requirements from users.
Recent significant advances in this field are based on the development of text-to-image (T2I) diffusion models.
T2I-based image editing methods significantly enhance editing performance and offer a user-friendly interface for modifying content guided by multimodal inputs.
arXiv Detail & Related papers (2024-06-20T17:58:52Z)
- The Adversarial AI-Art: Understanding, Generation, Detection, and Benchmarking [47.08666835021915]
We present a systematic attempt at understanding and detecting AI-generated images (AI-art) in adversarial scenarios.
The dataset, named ARIA, contains over 140K images in five categories: artworks (painting), social media images, news photos, disaster scenes, and anime pictures.
arXiv Detail & Related papers (2024-04-22T21:00:13Z)
- Generating Print-Ready Personalized AI Art Products from Minimal User Inputs [0.9003384937161055]
We present a novel framework to advance generative artificial intelligence (AI) applications in the realm of printed art products.
The framework consists of a pipeline that addresses two major challenges in the domain: the high complexity of generating effective prompts, and the low native resolution of images produced by diffusion models.
Our work represents a significant step towards democratizing high-quality AI art, opening new avenues for consumers, artists, designers, and businesses.
arXiv Detail & Related papers (2024-03-28T18:48:19Z)
- Decoupled Textual Embeddings for Customized Image Generation [62.98933630971543]
Customized text-to-image generation aims to learn user-specified concepts with a few images.
Existing methods usually suffer from overfitting issues and entangle the subject-unrelated information with the learned concept.
We propose DETEX, a novel approach that learns disentangled concept embeddings for flexible customized text-to-image generation.
arXiv Detail & Related papers (2023-12-19T03:32:10Z)
- Interactive Neural Painting [66.9376011879115]
This paper proposes the first approach for Interactive Neural Painting (NP).
We propose I-Paint, a novel method based on a conditional transformer Variational AutoEncoder (VAE) architecture with a two-stage decoder.
Our experiments show that our approach provides good stroke suggestions and compares favorably to the state of the art.
arXiv Detail & Related papers (2023-07-31T07:02:00Z)
- AI-Generated Imagery: A New Era for the `Readymade' [0.7386189738262202]
This paper aims to examine how digital images produced by generative AI systems have come to be so regularly referred to as art.
We employ existing philosophical frameworks and theories of language to suggest that some AI-generated imagery can be presented as `readymades' for consideration as art.
arXiv Detail & Related papers (2023-07-12T09:25:56Z)
- Visualize Before You Write: Imagination-Guided Open-Ended Text Generation [68.96699389728964]
We propose iNLG that uses machine-generated images to guide language models in open-ended text generation.
Experiments and analyses demonstrate the effectiveness of iNLG on open-ended text generation tasks.
arXiv Detail & Related papers (2022-10-07T18:01:09Z)
- A Taxonomy of Prompt Modifiers for Text-To-Image Generation [6.903929927172919]
This paper identifies six types of prompt modifiers used by practitioners in the online community, based on a 3-month ethnographic study.
The novel taxonomy of prompt modifiers provides researchers with a conceptual starting point for investigating the practice of text-to-image generation.
We discuss research opportunities of this novel creative practice in the field of Human-Computer Interaction.
arXiv Detail & Related papers (2022-04-20T06:15:50Z)
- Automatic Image Content Extraction: Operationalizing Machine Learning in Humanistic Photographic Studies of Large Visual Archives [81.88384269259706]
We introduce Automatic Image Content Extraction framework for machine learning-based search and analysis of large image archives.
The proposed framework can be applied in several domains in humanities and social sciences.
arXiv Detail & Related papers (2022-04-05T12:19:24Z)
- A Framework and Dataset for Abstract Art Generation via CalligraphyGAN [0.0]
We present a creative framework based on Conditional Generative Adversarial Networks and Contextual Neural Language Model to generate abstract artworks.
Our work is inspired by Chinese calligraphy, which is a unique form of visual art where the character itself is an aesthetic painting.
arXiv Detail & Related papers (2020-12-02T16:24:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.