Augmenting Character Designers Creativity Using Generative Adversarial
Networks
- URL: http://arxiv.org/abs/2305.18387v1
- Date: Sun, 28 May 2023 10:52:03 GMT
- Title: Augmenting Character Designers Creativity Using Generative Adversarial
Networks
- Authors: Mohammad Lataifeh, Xavier Carrasco, Ashraf Elnagar, Naveed Ahmed
- Abstract summary: Generative Adversarial Networks (GANs) continue to attract the attention of researchers in different fields.
Most recent GANs focus on realism; however, generating hyper-realistic output is not a priority for some domains.
We present a comparison between different GAN architectures and their performance when trained from scratch on a new visual characters dataset.
We also explore alternative techniques, such as transfer learning and data augmentation, to overcome computational resource limitations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Recent advances in Generative Adversarial Networks (GANs) continue to attract
the attention of researchers in different fields due to the wide range of
applications devised to take advantage of their key features. Most recent GANs
focus on realism; however, generating hyper-realistic output is not a priority
for some domains, as is the case in this work. The generated outcomes are used
here as cognitive components to augment character designers' creativity while
conceptualizing new characters for different multimedia projects. To
select the best-suited GANs for such a creative context, we first present a
comparison between different GAN architectures and their performance when
trained from scratch on a new visual characters dataset using a single Graphics
Processing Unit. We also explore alternative techniques, such as transfer
learning and data augmentation, to overcome computational resource limitations,
a challenge faced by many researchers in the domain. Additionally, mixed
methods are used to evaluate the cognitive value of the generated visuals on
character designers' agency in conceptualizing new characters. The results
discussed proved highly effective for this context, as demonstrated by early
adoptions within the character design process. As an extension of this work,
the presented approach will be further evaluated as a novel co-design process
between humans and machines to investigate where and how the generated
concepts interact with and influence the design process outcome.
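The resource-saving techniques named in the abstract can be sketched in a few lines; the snippet below illustrates only the data-augmentation side (random flips and crops of character images), while transfer learning would amount to initializing the networks from pretrained weights. This is a minimal NumPy illustration under assumed image shapes, not the authors' implementation.

```python
import numpy as np

def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Simple label-preserving augmentations: random horizontal flip
    and a random crop padded back to the original size."""
    h, w = image.shape[:2]
    # Random horizontal flip with probability 0.5.
    if rng.random() < 0.5:
        image = image[:, ::-1]
    # Random crop keeping 90% of each spatial dimension.
    ch, cw = int(h * 0.9), int(w * 0.9)
    top = rng.integers(0, h - ch + 1)
    left = rng.integers(0, w - cw + 1)
    crop = image[top:top + ch, left:left + cw]
    # Pad back to the original size by repeating edge values.
    pad = ((0, h - ch), (0, w - cw)) + ((0, 0),) * (image.ndim - 2)
    return np.pad(crop, pad, mode="edge")

rng = np.random.default_rng(0)
batch = rng.random((4, 64, 64, 3))  # a tiny stand-in batch of character images
augmented = np.stack([augment(img, rng) for img in batch])
print(augmented.shape)  # (4, 64, 64, 3)
```

Each pass through the loader can apply such transforms on the fly, effectively enlarging a small character dataset without extra storage or compute.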
Related papers
- A Novel Idea Generation Tool using a Structured Conversational AI (CAI) System [0.0]
This paper presents a novel conversational AI-enabled active ideation interface as a creative idea-generation tool to assist novice designers.
It is a dynamic, interactive, and contextually responsive approach that actively involves a large language model (LLM) from the natural language processing (NLP) domain of artificial intelligence (AI).
Integrating such AI models with ideation creates what we refer to as an Active Ideation scenario, which helps foster continuous dialogue-based interaction, context-sensitive conversation, and prolific idea generation.
arXiv Detail & Related papers (2024-09-09T16:02:27Z)
- A Survey on Personalized Content Synthesis with Diffusion Models [57.01364199734464]
Personalized content synthesis (PCS) aims to customize the subject of interest to specific user-defined prompts.
Over the past two years, more than 150 methods have been proposed.
This paper offers a comprehensive survey of PCS, with a particular focus on the diffusion models.
arXiv Detail & Related papers (2024-05-09T04:36:04Z)
- Collaborative Interactive Evolution of Art in the Latent Space of Deep Generative Models [1.4425878137951238]
We first employ GANs that are trained to produce creative images using an architecture known as Creative Adversarial Networks (CANs).
We then employ an evolutionary approach to navigate within the latent space of the models to discover images.
We use automatic aesthetic and collaborative interactive human evaluation metrics to assess the generated images.
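The latent-space navigation this entry describes can be sketched as a simple evolutionary hill-climb: mutate the current best latent vector, score the offspring, and keep the fittest. Everything below is an illustrative assumption; in particular, the fitness function stands in for the automatic aesthetic and human evaluation metrics used in the paper.

```python
import numpy as np

def evolve_latent(fitness, dim=128, population=16, generations=50,
                  sigma=0.1, seed=0):
    """Hill-climb in a generative model's latent space: mutate the best
    latent vector and keep the fittest offspring each generation."""
    rng = np.random.default_rng(seed)
    best = rng.standard_normal(dim)
    for _ in range(generations):
        # Gaussian mutations around the current best vector.
        offspring = best + sigma * rng.standard_normal((population, dim))
        scores = np.array([fitness(z) for z in offspring])
        candidate = offspring[np.argmax(scores)]
        if fitness(candidate) > fitness(best):
            best = candidate
    return best

# Stand-in "aesthetic" score: prefer latents close to a fixed target vector.
target = np.full(128, 0.5)
score = lambda z: -float(np.linalg.norm(z - target))

best = evolve_latent(score)
```

In the interactive setting, human selections would replace or blend with the automatic score, steering the search toward images the collaborators find appealing.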
arXiv Detail & Related papers (2024-03-28T17:40:15Z)
- Content-Centric Prototyping of Generative AI Applications: Emerging Approaches and Challenges in Collaborative Software Teams [2.369736515233951]
Our work aims to understand how collaborative software teams set up and apply design guidelines and values, iteratively prototype prompts, and evaluate prompts to achieve desired outcomes.
Our findings reveal a content-centric prototyping approach in which teams begin with the content they want to generate, then identify specific attributes, constraints, and values, and explore methods to give users the ability to influence and interact with those attributes.
arXiv Detail & Related papers (2024-02-27T17:56:10Z)
- Geometric Deep Learning for Computer-Aided Design: A Survey [85.79012726689511]
This survey offers a comprehensive overview of learning-based methods in computer-aided design.
It includes similarity analysis and retrieval, 2D and 3D CAD model synthesis, and CAD generation from point clouds.
It provides a complete list of benchmark datasets and their characteristics, along with open-source codes that have propelled research in this domain.
arXiv Detail & Related papers (2024-02-27T17:11:35Z)
- Human Machine Co-Creation. A Complementary Cognitive Approach to Creative Character Design Process Using GANs [0.0]
In a GAN, two neural networks compete: one generates new visual content while the other tries to distinguish it from the original dataset.
The proposed approach aims to inform the process of perceiving, knowing, and making.
The machine generated concepts are used as a launching platform for character designers to conceptualize new characters.
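The two-network competition can be made concrete through the standard GAN losses: the discriminator is rewarded for labeling real samples 1 and generated samples 0, while the generator is rewarded for fooling it. The sketch below computes these losses from assumed discriminator probabilities; it is a generic illustration, not code from either paper.

```python
import numpy as np

def bce(p, label):
    """Binary cross-entropy of probabilities p against a constant label."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return float(-(label * np.log(p) + (1 - label) * np.log(1 - p)).mean())

def gan_losses(d_real, d_fake):
    """Standard (non-saturating) GAN losses from discriminator outputs.
    d_real: D's probabilities on real samples; d_fake: on generated ones."""
    d_loss = bce(d_real, 1.0) + bce(d_fake, 0.0)  # D: real -> 1, fake -> 0
    g_loss = bce(d_fake, 1.0)                     # G: push D toward fake -> 1
    return d_loss, g_loss

d_real = np.array([0.9, 0.8, 0.95])  # D is confident on real images
d_fake = np.array([0.1, 0.2, 0.05])  # D currently spots the generated images
d_loss, g_loss = gan_losses(d_real, d_fake)
print(d_loss < g_loss)  # True: D is winning, so G's loss is high
```

Training alternates gradient steps on the two losses until the generated samples become hard to tell apart from the dataset.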
arXiv Detail & Related papers (2023-11-23T12:18:39Z)
- A Comprehensive Survey on Applications of Transformers for Deep Learning Tasks [60.38369406877899]
Transformer is a deep neural network that employs a self-attention mechanism to comprehend the contextual relationships within sequential data.
Transformer models excel at handling long-range dependencies between input sequence elements and enable parallel processing.
Our survey encompasses the identification of the top five application domains for transformer-based models.
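The self-attention mechanism this survey centers on can be sketched in a few lines of NumPy: each token's output is a softmax-weighted mix of all tokens' values, which is why long-range dependencies are handled in a single, parallelizable step. The shapes and projection matrices below are illustrative assumptions (single head, no masking or bias).

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention.
    x: (seq_len, d_model); wq/wk/wv: (d_model, d_head) projections."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])          # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ v                               # weighted mix of values

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 8))                      # 5 tokens, d_model = 8
wq, wk, wv = (rng.standard_normal((8, 4)) for _ in range(3))
out = self_attention(x, wq, wk, wv)
print(out.shape)  # (5, 4)
```

Every output row depends on every input row at once, in contrast to recurrent models that must step through the sequence.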
arXiv Detail & Related papers (2023-06-11T23:13:51Z)
- Investigating GANsformer: A Replication Study of a State-of-the-Art Image Generation Model [0.0]
We reproduce and evaluate a novel variation of the original GAN network, the GANformer.
Due to resources and time limitations, we had to constrain the network's training times, dataset types, and sizes.
arXiv Detail & Related papers (2023-03-15T12:51:16Z)
- Demystify Transformers & Convolutions in Modern Image Deep Networks [82.32018252867277]
This paper aims to identify the real gains of popular convolution and attention operators through a detailed study.
We find that the key difference among these feature transformation modules, such as attention or convolution, lies in their spatial feature aggregation approach.
Our experiments on various tasks and an analysis of inductive bias show a significant performance boost due to advanced network-level and block-level designs.
arXiv Detail & Related papers (2022-11-10T18:59:43Z)
- Towards Creativity Characterization of Generative Models via Group-based Subset Scanning [64.6217849133164]
We propose group-based subset scanning to identify, quantify, and characterize creative processes.
We find that creative samples generate larger subsets of anomalies than normal or non-creative samples across datasets.
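The core intuition of subset scanning, that anomalous groups show more small p-values than chance allows, can be caricatured in a few lines. The sketch below is a deliberately simplified stand-in (a threshold scan on p-values), not the group-based scan statistic the paper actually uses; the data and names are invented for illustration.

```python
import numpy as np

def subset_scan(pvalues):
    """Pick the p-value threshold maximizing the excess of observed over
    expected small p-values (expected count is uniform under the null)."""
    p = np.sort(np.asarray(pvalues))
    n = len(p)
    # With threshold p[k], k+1 values fall at or below it; n*p[k] expected.
    excess = np.arange(1, n + 1) - n * p
    k = int(np.argmax(excess))
    return float(p[k]), k + 1  # chosen threshold, anomalous-subset size

rng = np.random.default_rng(0)
normal = rng.uniform(size=990)               # unremarkable samples
creative = rng.uniform(high=0.001, size=10)  # unusually anomalous samples
threshold, size = subset_scan(np.concatenate([normal, creative]))
print(size >= 10)  # the scan flags at least the injected subset
```

A larger recovered subset for "creative" outputs than for ordinary ones mirrors the paper's finding that creative samples generate larger anomalous subsets.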
arXiv Detail & Related papers (2022-03-01T15:07:14Z)
- MOGAN: Morphologic-structure-aware Generative Learning from a Single Image [59.59698650663925]
Recently proposed generative models can complete training on only a single image.
We introduce a MOrphologic-structure-aware Generative Adversarial Network named MOGAN that produces random samples with diverse appearances.
Our approach focuses on internal features including the maintenance of rational structures and variation on appearance.
arXiv Detail & Related papers (2021-03-04T12:45:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.