CreativeGAN: Editing Generative Adversarial Networks for Creative Design
Synthesis
- URL: http://arxiv.org/abs/2103.06242v1
- Date: Wed, 10 Mar 2021 18:22:35 GMT
- Title: CreativeGAN: Editing Generative Adversarial Networks for Creative Design
Synthesis
- Authors: Amin Heyrani Nobari, Muhammad Fathy Rashad, Faez Ahmed
- Abstract summary: This paper proposes an automated method, named CreativeGAN, for generating novel designs.
It does so by identifying components that make a design unique and modifying a GAN model such that it becomes more likely to generate designs with identified unique components.
Using a dataset of bicycle designs, we demonstrate that the method can create new bicycle designs with unique frames and handles, and generalize rare novelties to a broad set of designs.
- Score: 1.933681537640272
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern machine learning techniques, such as deep neural networks, are
transforming many disciplines ranging from image recognition to language
understanding, by uncovering patterns in big data and making accurate
predictions. They have also shown promising results for synthesizing new
designs, which is crucial for creating products and enabling innovation.
Generative models, including generative adversarial networks (GANs), have
proven to be effective for design synthesis with applications ranging from
product design to metamaterial design. These automated computational design
methods can support human designers, who typically create designs by a
time-consuming process of iteratively exploring ideas using experience and
heuristics. However, challenges remain in automatically synthesizing 'creative'
designs: GAN models are not capable of generating unique designs, a key to
innovation and a major gap in AI-based design automation applications. This
paper proposes an automated method, named
CreativeGAN, for generating novel designs. It does so by identifying components
that make a design unique and modifying a GAN model such that it becomes more
likely to generate designs with identified unique components. The method
combines state-of-the-art novelty detection, segmentation, novelty localization,
rewriting, and generative models for creative design synthesis. Using a dataset
of bicycle designs, we demonstrate that the method can create new bicycle
designs with unique frames and handles, and generalize rare novelties to a
broad set of designs. Our automated method requires no human intervention and
demonstrates a way to rethink creative design synthesis and exploration.
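To make the pipeline described above concrete, the following is a minimal, illustrative Python sketch of its flow: novelty detection, novelty localization, and generator rewriting. Everything here is a simplifying assumption rather than the paper's implementation: a tiny autoencoder stands in for the novelty detector, a thresholded reconstruction-error map for segmentation and localization, a masked fine-tuning loss for GAN rewriting, and the data are random stand-ins for bicycle renderings.

# Illustrative sketch of a CreativeGAN-style flow; all models, losses, and data
# below are simplifying assumptions, not the paper's actual architecture.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "designs": 1-channel 32x32 silhouettes (e.g. bike renderings).
designs = torch.rand(64, 1, 32, 32)

# (1) Novelty detection: reconstruction error from a tiny autoencoder.
ae = nn.Sequential(
    nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1), nn.Sigmoid(),
)
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
for _ in range(200):  # fit the scorer on the dataset
    opt.zero_grad()
    loss = ((ae(designs) - designs) ** 2).mean()
    loss.backward()
    opt.step()

with torch.no_grad():
    err = ((ae(designs) - designs) ** 2).mean(dim=(1, 2, 3))
novel_idx = int(err.argmax())  # most "unusual" design in the dataset
novel = designs[novel_idx:novel_idx + 1]

# (2) Novelty localization: per-pixel error map, thresholded into a mask that
# marks the unique component (frame, handle, ...).
with torch.no_grad():
    heat = ((ae(novel) - novel) ** 2)[0, 0]
mask = (heat > heat.mean() + 2 * heat.std()).float()

# (3) "Rewriting": fine-tune a toy generator so its outputs copy the novel
# component inside the masked region while leaving the rest unconstrained.
gen = nn.Sequential(nn.Linear(16, 32 * 32), nn.Sigmoid())
gopt = torch.optim.Adam(gen.parameters(), lr=1e-3)
for _ in range(200):
    z = torch.randn(8, 16)
    out = gen(z).view(-1, 1, 32, 32)
    gopt.zero_grad()
    loss = (mask * (out - novel) ** 2).mean()
    loss.backward()
    gopt.step()

print(f"novel design #{novel_idx}, masked pixels: {int(mask.sum())}")

In the actual method, the rewriting step edits a pretrained generator's weights directly so it becomes more likely to produce designs containing the localized novel component; the fine-tuning loop above only illustrates that intent on a toy model.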
Related papers
- LLM2FEA: Discover Novel Designs with Generative Evolutionary Multitasking [21.237950330178354]
We propose the first attempt to discover novel designs in generative models by transferring knowledge across multiple domains.
By utilizing a multi-factorial evolutionary algorithm (MFEA) to drive a large language model, LLM2FEA integrates knowledge from various fields to generate prompts that guide the generative model in discovering novel and practical objects.
arXiv Detail & Related papers (2024-06-21T07:20:51Z) - Inspired by AI? A Novel Generative AI System To Assist Conceptual Automotive Design [6.001793288867721]
Design inspiration is crucial for establishing the direction of a design as well as evoking feelings and conveying meanings during the conceptual design process.
Many practicing designers use text-based searches on platforms like Pinterest to gather image ideas, followed by sketching on paper or using digital tools to develop concepts.
Emerging generative AI techniques, such as diffusion models, offer a promising avenue to streamline these processes by swiftly generating design concepts based on text and image inspiration inputs.
arXiv Detail & Related papers (2024-06-06T17:04:14Z) - Geometric Deep Learning for Computer-Aided Design: A Survey [85.79012726689511]
This survey offers a comprehensive overview of learning-based methods in computer-aided design.
It includes similarity analysis and retrieval, 2D and 3D CAD model synthesis, and CAD generation from point clouds.
It provides a complete list of benchmark datasets and their characteristics, along with open-source codes that have propelled research in this domain.
arXiv Detail & Related papers (2024-02-27T17:11:35Z) - CreativeSynth: Creative Blending and Synthesis of Visual Arts based on
Multimodal Diffusion [74.44273919041912]
Large-scale text-to-image generative models have made impressive strides, showcasing their ability to synthesize a vast array of high-quality images.
However, adapting these models for artistic image editing presents two significant challenges.
We build the unified framework CreativeSynth, which is based on a diffusion model able to coordinate multimodal inputs.
arXiv Detail & Related papers (2024-01-25T10:42:09Z) - Neural Markov Prolog [57.13568543360899]
We propose the language Neural Markov Prolog (NMP) as a means to bridge first order logic and neural network design.
NMP allows for the easy generation and presentation of architectures for images, text, relational databases, or other target data types.
arXiv Detail & Related papers (2023-11-27T21:41:47Z) - DreamCreature: Crafting Photorealistic Virtual Creatures from
Imagination [140.1641573781066]
We introduce a novel task, Virtual Creatures Generation: Given a set of unlabeled images of the target concepts, we aim to train a T2I model capable of creating new, hybrid concepts.
We propose a new method called DreamCreature, which identifies and extracts the underlying sub-concepts.
The T2I thus adapts to generate novel concepts with faithful structures and photorealistic appearance.
arXiv Detail & Related papers (2023-11-27T01:24:31Z) - Human Machine Co-Creation. A Complementary Cognitive Approach to
Creative Character Design Process Using GANs [0.0]
Two neural networks compete to generate new visual content indistinguishable from the original dataset.
The proposed approach aims to inform the process of perceiving, knowing, and making.
The machine generated concepts are used as a launching platform for character designers to conceptualize new characters.
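The entry above leans on the basic GAN mechanism, in which a generator and a discriminator compete. As a generic illustration only (it is not the co-creation pipeline from that paper), a minimal adversarial training loop on toy 2-D data might look like the following sketch.

# Generic minimal GAN training loop on toy data; purely illustrative of the
# "two competing networks" idea, not the cited paper's pipeline.
import torch
import torch.nn as nn

torch.manual_seed(0)
real_data = torch.randn(512, 2) * 0.5 + torch.tensor([2.0, -1.0])  # toy "dataset"

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))   # generator
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))   # discriminator
g_opt = torch.optim.Adam(G.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = real_data[torch.randint(0, 512, (64,))]
    fake = G(torch.randn(64, 8))

    # Discriminator: push real samples toward 1, generated samples toward 0.
    d_opt.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Generator: fool the discriminator into scoring fakes as real.
    g_opt.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

print("generated sample mean:", G(torch.randn(1000, 8)).mean(dim=0).tolist())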
arXiv Detail & Related papers (2023-11-23T12:18:39Z) - Diatom-inspired architected materials using language-based deep
learning: Perception, transformation and manufacturing [0.0]
We report novel biologically inspired designs of diatom structures, enabled using transformer neural networks.
We illustrate a series of novel diatom-based designs and also report a manufactured specimen, created using additive manufacturing.
arXiv Detail & Related papers (2023-01-14T10:02:51Z) - Towards Creativity Characterization of Generative Models via Group-based
Subset Scanning [64.6217849133164]
We propose group-based subset scanning to identify, quantify, and characterize creative processes.
We find that creative samples generate larger subsets of anomalies than normal or non-creative samples across datasets.
arXiv Detail & Related papers (2022-03-01T15:07:14Z) - Towards creativity characterization of generative models via group-based
subset scanning [51.84144826134919]
We propose group-based subset scanning to quantify, detect, and characterize creative processes.
Creative samples generate larger subsets of anomalies than normal or non-creative samples across datasets.
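Both subset-scanning entries above rest on the same idea: convert activations of generated samples into p-values against a background distribution, then search for the subset of units whose p-values are jointly more extreme than chance, with creative samples yielding larger anomalous subsets. The sketch below is a toy, single-group version of that scan using a Berk-Jones-style score; the synthetic activations, group construction, and threshold grid are assumptions for illustration, not the cited procedure.

# Toy illustration of group-based subset scanning with a Berk-Jones-style score;
# the data and p-value construction are simplified stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_units = 50

background = rng.normal(size=(1000, n_units))   # activations on "normal" data
group = rng.normal(size=(20, n_units))
group[:, :10] += 1.5                             # a "creative" group shifts 10 units

# Empirical p-value of each unit's mean activation in the group vs. background.
group_mean = group.mean(axis=0)
pvals = (background >= group_mean).mean(axis=0)

def berk_jones(n_alpha, n_total, alpha):
    """Berk-Jones score: Bernoulli KL between observed and expected p-value rates."""
    obs = n_alpha / n_total
    if obs <= alpha:
        return 0.0
    if obs >= 1.0:
        return n_total * np.log(1.0 / alpha)
    return n_total * (obs * np.log(obs / alpha)
                      + (1 - obs) * np.log((1 - obs) / (1 - alpha)))

# Scan over thresholds; the maximizing alpha defines the anomalous subset of units.
score, alpha_star = max(
    ((berk_jones(int((pvals <= a).sum()), n_units, a), a)
     for a in np.linspace(0.01, 0.5, 50)),
    key=lambda t: t[0],
)
subset = np.flatnonzero(pvals <= alpha_star)
print(f"scan score {score:.2f}, anomalous subset size {subset.size}")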
arXiv Detail & Related papers (2021-04-01T14:07:49Z) - Garment Design with Generative Adversarial Networks [7.640010691467089]
This paper explores the capabilities of generative adversarial networks (GAN) for automated attribute-level editing of design concepts.
The experiments support the hypothesized potentials of GAN for attribute-level editing of design concepts.
arXiv Detail & Related papers (2020-07-21T17:03:33Z)