LLM2FEA: Discover Novel Designs with Generative Evolutionary Multitasking
- URL: http://arxiv.org/abs/2406.14917v1
- Date: Fri, 21 Jun 2024 07:20:51 GMT
- Title: LLM2FEA: Discover Novel Designs with Generative Evolutionary Multitasking
- Authors: Melvin Wong, Jiao Liu, Thiago Rios, Stefan Menzel, Yew Soon Ong
- Abstract summary: We propose the first attempt to discover novel designs in generative models by transferring knowledge across multiple domains.
By utilizing a multi-factorial evolutionary algorithm (MFEA) to drive a large language model, LLM2FEA integrates knowledge from various fields to generate prompts that guide the generative model in discovering novel and practical objects.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid research and development of generative artificial intelligence has enabled the generation of high-quality images, text, and 3D models from text prompts. This advancement impels an inquiry into whether these models can be leveraged to create digital artifacts for both creative and engineering applications. Drawing on innovative designs from other domains may be one answer to this question, much like the historical practice of "bionics", where humans have sought inspiration from nature's exemplary designs. This raises the intriguing possibility of using generative models to simultaneously tackle design tasks across multiple domains, facilitating cross-domain learning and resulting in a series of innovative design solutions. In this paper, we propose LLM2FEA as the first attempt to discover novel designs in generative models by transferring knowledge across multiple domains. By utilizing a multi-factorial evolutionary algorithm (MFEA) to drive a large language model, LLM2FEA integrates knowledge from various fields to generate prompts that guide the generative model in discovering novel and practical objects. Experimental results in the context of 3D aerodynamic design verify the discovery capabilities of the proposed LLM2FEA. The designs generated by LLM2FEA not only satisfy practicality requirements to a certain degree but also feature novel and aesthetically pleasing shapes, demonstrating the potential applications of LLM2FEA in discovery tasks.
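To make the described workflow concrete, the following is a minimal, hypothetical sketch of the generative evolutionary multitasking loop summarized in the abstract: an MFEA-style population of prompt keywords evolves across several design tasks, an LLM (stubbed here) expands each individual into a text prompt, and a text-to-3D generator plus practicality and novelty evaluators (also stubbed with toy scores) supply the fitness. The task list, vocabulary, fitness weights, and simplified crossover below are illustrative assumptions, not the authors' implementation.

```python
# Minimal, hypothetical sketch of the LLM2FEA idea (not the authors' code): an
# MFEA evolves prompt "genes" across several design tasks, an LLM expands each
# gene into a text prompt, and a text-to-3D generator plus evaluators score the
# resulting shape. The LLM and generator are stubbed; all names are assumptions.
import random

TASKS = ["car", "aircraft", "watercraft"]          # assumed design domains
VOCAB = ["streamlined", "winged", "finned", "teardrop", "ribbed", "smooth"]

def llm_compose_prompt(gene, task):
    """Stand-in for an LLM call that turns keyword genes into a concrete prompt."""
    return f"a {', '.join(gene)} {task} with low aerodynamic drag"

def generate_and_score(prompt, task):
    """Stand-in for text-to-3D generation plus practicality/novelty evaluation."""
    rng = random.Random(hash((prompt, task)))      # deterministic toy scores
    practicality = rng.random()                    # e.g. drag from a CFD surrogate
    novelty = rng.random()                         # e.g. distance to known designs
    return 0.7 * practicality + 0.3 * novelty

def assortative_mating(p1, p2, rmp=0.3):
    """Simplified MFEA crossover: cross-task mating happens with probability rmp."""
    gene = sorted(set(random.sample(p1["gene"], 2) + random.sample(p2["gene"], 1)))
    if p1["task"] == p2["task"] or random.random() < rmp:
        task = random.choice([p1["task"], p2["task"]])   # offspring imitates a parent's task
    else:
        task = p1["task"]
    return {"gene": gene, "task": task}

# One shared population; each individual carries a skill factor (its assigned task).
population = [{"gene": sorted(random.sample(VOCAB, 3)), "task": random.choice(TASKS)}
              for _ in range(8)]

for _ in range(5):                                 # a few toy generations
    for ind in population:
        prompt = llm_compose_prompt(ind["gene"], ind["task"])
        ind["fitness"] = generate_and_score(prompt, ind["task"])
    population.sort(key=lambda ind: ind["fitness"], reverse=True)
    parents = population[:4]
    children = [assortative_mating(*random.sample(parents, 2)) for _ in range(4)]
    population = parents + children

best = max(population, key=lambda ind: ind.get("fitness", 0.0))
print(llm_compose_prompt(best["gene"], best["task"]))
```

In the actual LLM2FEA setting, the stubbed scorers would be replaced by the text-to-3D generative model and physics-based aerodynamic evaluation, and the cross-task mating probability controls how aggressively knowledge is transferred between design domains.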
Related papers
- Multi-Modal Generative AI: Multi-modal LLM, Diffusion and Beyond [48.43910061720815]
Multi-modal generative AI has received increasing attention in both academia and industry.
One natural question arises: Is it possible to have a unified model for both understanding and generation?
arXiv Detail & Related papers (2024-09-23T13:16:09Z)
- Geometric Deep Learning for Computer-Aided Design: A Survey [85.79012726689511]
This survey offers a comprehensive overview of learning-based methods in computer-aided design.
It includes similarity analysis and retrieval, 2D and 3D CAD model synthesis, and CAD generation from point clouds.
It provides a complete list of benchmark datasets and their characteristics, along with open-source codes that have propelled research in this domain.
arXiv Detail & Related papers (2024-02-27T17:11:35Z)
- 3DGEN: A GAN-based approach for generating novel 3D models from image data [5.767281919406463]
We present 3DGEN, a model that leverages the recent work on both Neural Radiance Fields for object reconstruction and GAN-based image generation.
We show that the proposed architecture can generate plausible meshes for objects of the same category as the training images and compare the resulting meshes with the state-of-the-art baselines.
arXiv Detail & Related papers (2023-12-13T12:24:34Z)
- DreamCreature: Crafting Photorealistic Virtual Creatures from Imagination [140.1641573781066]
We introduce a novel task, Virtual Creatures Generation: Given a set of unlabeled images of the target concepts, we aim to train a T2I model capable of creating new, hybrid concepts.
We propose a new method called DreamCreature, which identifies and extracts the underlying sub-concepts.
The T2I model thus adapts to generate novel concepts with faithful structures and photorealistic appearance.
arXiv Detail & Related papers (2023-11-27T01:24:31Z) - Luminate: Structured Generation and Exploration of Design Space with Large Language Models for Human-AI Co-Creation [19.62178304006683]
We argue that current interaction paradigms fall short, guiding users towards rapid convergence on a limited set of ideas.
We propose a framework that facilitates the structured generation of design space in which users can seamlessly explore, evaluate, and synthesize a multitude of responses.
arXiv Detail & Related papers (2023-10-19T17:53:14Z) - State of the Art on Diffusion Models for Visual Computing [191.6168813012954]
This report introduces the basic mathematical concepts of diffusion models, implementation details and design choices of the popular Stable Diffusion model.
We also give a comprehensive overview of the rapidly growing literature on diffusion-based generation and editing.
We discuss available datasets, metrics, open challenges, and social implications.
arXiv Detail & Related papers (2023-10-11T05:32:29Z) - Breathing New Life into 3D Assets with Generative Repainting [74.80184575267106]
Diffusion-based text-to-image models ignited immense attention from the vision community, artists, and content creators.
Recent works have proposed various pipelines powered by the entanglement of diffusion models and neural fields.
We explore the power of pretrained 2D diffusion models and standard 3D neural radiance fields as independent, standalone tools.
Our pipeline accepts any legacy renderable geometry, such as textured or untextured meshes, and orchestrates the interaction between 2D generative refinement and 3D consistency enforcement tools.
arXiv Detail & Related papers (2023-09-15T16:34:51Z) - MaMMUT: A Simple Architecture for Joint Learning for MultiModal Tasks [59.09343552273045]
We propose a decoder-only model for multimodal tasks, which is surprisingly effective in jointly learning these disparate vision-language tasks.
We demonstrate that joint learning of these diverse objectives is simple, effective, and maximizes the weight-sharing of the model across these tasks.
Our model achieves the state of the art on image-text and text-image retrieval, video question answering and open-vocabulary detection tasks, outperforming much larger and more extensively trained foundational models.
arXiv Detail & Related papers (2023-03-29T16:42:30Z) - Investigating GANsformer: A Replication Study of a State-of-the-Art
Image Generation Model [0.0]
We reproduce and evaluate a novel variation of the original GAN network, the GANformer.
Due to resources and time limitations, we had to constrain the network's training times, dataset types, and sizes.
arXiv Detail & Related papers (2023-03-15T12:51:16Z) - Challenges in creative generative models for music: a divergence
maximization perspective [3.655021726150369]
Development of generative machine learning models in creative practices is attracting growing interest among artists, practitioners, and performers.
Most models are still unable to generate content that lies outside the domain defined by the training dataset.
We propose an alternative prospective framework, starting from a new general formulation of ML objectives.
arXiv Detail & Related papers (2022-11-16T12:02:43Z)
- CreativeGAN: Editing Generative Adversarial Networks for Creative Design Synthesis [1.933681537640272]
This paper proposes an automated method, named CreativeGAN, for generating novel designs.
It does so by identifying components that make a design unique and modifying a GAN model such that it becomes more likely to generate designs with identified unique components.
Using a dataset of bicycle designs, we demonstrate that the method can create new bicycle designs with unique frames and handles, and generalize rare novelties to a broad set of designs.
arXiv Detail & Related papers (2021-03-10T18:22:35Z)