iCONTRA: Toward Thematic Collection Design Via Interactive Concept Transfer
- URL: http://arxiv.org/abs/2403.08746v1
- Date: Wed, 13 Mar 2024 17:48:39 GMT
- Title: iCONTRA: Toward Thematic Collection Design Via Interactive Concept Transfer
- Authors: Dinh-Khoi Vo, Duy-Nam Ly, Khanh-Duy Le, Tam V. Nguyen, Minh-Triet Tran, Trung-Nghia Le
- Abstract summary: We introduce iCONTRA, an interactive CONcept TRAnsfer system.
iCONTRA enables both experienced designers and novices to effortlessly explore creative design concepts.
We also propose a zero-shot image editing algorithm that eliminates the need for model fine-tuning.
- Score: 16.35842298296878
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Creating thematic collections in industries demands innovative designs and
cohesive concepts. Designers may face challenges in maintaining thematic
consistency when drawing inspiration from existing objects, landscapes, or
artifacts. While AI-powered graphic design tools offer help, they often fail to
generate cohesive sets based on specific thematic concepts. In response, we
introduce iCONTRA, an interactive CONcept TRAnsfer system. With a user-friendly
interface, iCONTRA enables both experienced designers and novices to
effortlessly explore creative design concepts and efficiently generate thematic
collections. We also propose a zero-shot image editing algorithm that
eliminates the need for fine-tuning: it gradually integrates information from
the initial objects, ensuring consistency in the generation process without
influencing the background. A pilot study suggests iCONTRA's potential to
reduce designers' efforts. Experimental results demonstrate its effectiveness
in producing consistent and high-quality object concept transfers. iCONTRA
stands as a promising tool for innovation and creative exploration in thematic
collection design. The source code will be available at:
https://github.com/vdkhoi20/iCONTRA.
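The abstract describes the editing algorithm only at a high level. As a hedged illustration of how such a zero-shot, background-preserving edit loop could be structured, here is a minimal sketch assuming a latent-diffusion backbone; `denoise_step`, the blending schedule, and all names are hypothetical, not iCONTRA's actual method:

```python
# Hypothetical sketch of a zero-shot, mask-preserving edit loop.
# Assumes a frozen pretrained denoiser; NOT iCONTRA's released code.
import torch

@torch.no_grad()
def concept_transfer(z_init, mask, denoise_step, timesteps, max_blend=0.5):
    """z_init: latent of the initial object image, shape (B, C, H, W).
    mask: 1.0 inside the object region to edit, 0.0 on the background.
    denoise_step: callable (z, t) -> z performing one reverse-diffusion step.
    """
    z = torch.randn_like(z_init)               # start the edit from noise
    n = len(timesteps)
    for i, t in enumerate(timesteps):
        z = denoise_step(z, t)                  # one step of the frozen model
        # Gradually integrate information from the initial object so the
        # generated concept stays consistent with it (a guessed schedule).
        w = max_blend * (1.0 - i / max(n - 1, 1))
        z = (1.0 - w) * z + w * z_init
        # Keep the background untouched by re-injecting the source latent
        # outside the mask (a real pipeline would re-noise z_init to level t).
        z = mask * z + (1.0 - mask) * z_init
    return z

# Toy run with an identity "denoiser" just to exercise the control flow.
z0 = torch.zeros(1, 4, 64, 64)
m = torch.ones_like(z0); m[..., :32] = 0.0      # left half = background
_ = concept_transfer(z0, m, lambda z, t: z, timesteps=list(range(50)))
```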
Related papers
- Empowering Clients: Transformation of Design Processes Due to Generative AI [1.4003044924094596]
The study reveals that AI can disrupt the ideation phase by enabling clients to engage in the design process through rapid visualization of their own ideas.
Our study shows that while AI can provide valuable feedback on designs, it may fail to generate such designs itself, allowing for interesting connections to foundations in computer science.
Our study also reveals uncertainty among architects about the interpretive sovereignty of architecture and about a loss of meaning and identity as AI increasingly takes over authorship in the design process.
arXiv Detail & Related papers (2024-11-22T16:48:15Z)
- Sketch2Code: Evaluating Vision-Language Models for Interactive Web Design Prototyping [55.98643055756135]
We introduce Sketch2Code, a benchmark that evaluates state-of-the-art Vision Language Models (VLMs) on automating the conversion of rudimentary sketches into webpage prototypes.
We analyze ten commercial and open-source models, showing that Sketch2Code is challenging for existing VLMs.
A user study with UI/UX experts reveals a significant preference for proactive question-asking over passive feedback reception.
arXiv Detail & Related papers (2024-10-21T17:39:49Z)
- PartCraft: Crafting Creative Objects by Parts [128.30514851911218]
This paper propels creative control in generative visual AI by allowing users to "select" visual concepts by parts.
For the first time, users can choose visual concepts at the part level for their creative endeavors.
This enables fine-grained generation that precisely captures the selected visual concepts.
arXiv Detail & Related papers (2024-07-05T15:53:04Z)
- Inspired by AI? A Novel Generative AI System To Assist Conceptual Automotive Design [6.001793288867721]
Design inspiration is crucial for establishing the direction of a design as well as evoking feelings and conveying meanings during the conceptual design process.
Many practicing designers use text-based searches on platforms like Pinterest to gather image ideas, followed by sketching on paper or using digital tools to develop concepts.
Emerging generative AI techniques, such as diffusion models, offer a promising avenue to streamline these processes by swiftly generating design concepts based on text and image inspiration inputs.
arXiv Detail & Related papers (2024-06-06T17:04:14Z)
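The diffusion-based workflow described in the entry above can be illustrated with an off-the-shelf image-plus-text pipeline. This is a generic sketch using Hugging Face diffusers, not the system proposed in the paper; the file name and prompt are made up:

```python
# Hedged illustration only: a stock image+text diffusion pipeline,
# not the paper's proposed system.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

inspiration = Image.open("moodboard.png").convert("RGB").resize((768, 512))  # hypothetical file
concepts = pipe(
    prompt="futuristic electric SUV, clay model, studio lighting",
    image=inspiration,           # image inspiration input
    strength=0.6,                # how far to depart from the inspiration image
    guidance_scale=7.5,          # how strongly to follow the text prompt
    num_images_per_prompt=4,
).images                         # four candidate design concepts
```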
- I-Design: Personalized LLM Interior Designer [57.00412237555167]
I-Design is a personalized interior designer that allows users to generate and visualize their design goals through natural language communication.
I-Design starts with a team of large language model agents that engage in dialogues and logical reasoning with one another.
The final design is then constructed in 3D by retrieving and integrating assets from an existing object database.
arXiv Detail & Related papers (2024-04-03T16:17:53Z)
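The agent-dialogue pattern in the I-Design entry above can be sketched generically. The roles, prompts, and `chat` helper below are illustrative assumptions, not I-Design's actual agents:

```python
# Minimal sketch of LLM agents refining a design brief through dialogue.
# `chat` is a stand-in for any chat-completion API; roles are illustrative.
from typing import Callable

def refine_brief(user_goal: str, chat: Callable[[str], str], rounds: int = 3) -> str:
    brief = user_goal
    roles = [
        "You are an interior designer. Improve this brief:",
        "You are a critic. Point out flaws and revise this brief:",
    ]
    for _ in range(rounds):
        for role in roles:
            # Each agent reads the current brief and returns a revision.
            brief = chat(f"{role}\n\n{brief}")
    return brief  # final brief, e.g. handed to 3D asset retrieval

# Toy usage with an echo "model" just to show the control flow.
final = refine_brief("cozy reading corner, max 2 m^2", lambda p: p.split("\n\n")[-1])
```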
- Imagining a Future of Designing with AI: Dynamic Grounding, Constructive Negotiation, and Sustainable Motivation [13.850610205757633]
We aim to isolate the new value that large AI models can provide to design compared to past technologies.
We arrive at three affordances that summarize latent qualities of natural language-enabled foundation models.
Our design process, terminology, and diagrams aim to contribute to future discussions about the relative affordances of AI technology.
arXiv Detail & Related papers (2024-02-12T00:20:43Z)
- Compositional Generative Inverse Design [69.22782875567547]
Inverse design, where we seek to design input variables in order to optimize an underlying objective function, is an important problem.
Optimizing inputs directly against a learned surrogate model tends to produce adversarial examples; we show that by instead optimizing over the learned energy function captured by the diffusion model, we can avoid such adversarial examples.
In an N-body interaction task and a challenging 2D multi-airfoil design task, we demonstrate that by composing the learned diffusion model at test time, our method allows us to design initial states and boundary shapes.
arXiv Detail & Related papers (2024-01-24T01:33:39Z)
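As a hedged sketch of the idea above, optimizing a design against a diffusion model's learned energy (the actual method composes energies from multiple models; the energy proxy and all names below are simplified stand-ins):

```python
# Simplified stand-in for inverse design over a learned diffusion energy.
# `objective` scores a design (lower is better); the energy proxy here is
# the frozen denoiser's reconstruction error on a noised input.
import torch

def design(objective, denoiser, x0, steps=200, lr=1e-2, lam=1.0, sigma=0.1):
    x = x0.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        noise = sigma * torch.randn_like(x)
        # Staying low-energy keeps x on the learned design manifold,
        # avoiding adversarial inputs that fool the objective model.
        e = ((denoiser(x + noise) - x) ** 2).mean()
        loss = objective(x) + lam * e
        opt.zero_grad()
        loss.backward()
        opt.step()
    return x.detach()

# Toy usage: quadratic objective, identity "denoiser".
x = design(lambda v: (v - 2.0).pow(2).sum(), lambda v: v, torch.zeros(8))
```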
- Human Machine Co-Creation. A Complementary Cognitive Approach to Creative Character Design Process Using GANs [0.0]
In a generative adversarial network (GAN), two neural networks compete to generate new visual content indistinguishable from the original dataset.
The proposed approach aims to inform the process of perceiving, knowing, and making.
The machine generated concepts are used as a launching platform for character designers to conceptualize new characters.
arXiv Detail & Related papers (2023-11-23T12:18:39Z)
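The two competing networks described in the entry above are the standard GAN setup; here is a minimal generic sketch on toy 2-D data, not the paper's character-design models:

```python
# Generic GAN sketch: a generator G and discriminator D compete until
# G's samples are hard to tell apart from the real dataset.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))  # noise -> sample
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))   # sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(512, 2) + 3.0  # toy "dataset" of 2-D points
for step in range(1000):
    fake = G(torch.randn(64, 16))
    # Discriminator: label real as 1, generated as 0.
    d_loss = bce(D(real[:64]), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: fool the discriminator into predicting 1.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```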
- Design Space Exploration and Explanation via Conditional Variational Autoencoders in Meta-model-based Conceptual Design of Pedestrian Bridges [52.77024349608834]
This paper provides a performance-driven design exploration framework to augment the human designer through a Conditional Variational Autoencoder (CVAE).
The CVAE is trained on 18,000 synthetically generated instances of a pedestrian bridge in Switzerland.
arXiv Detail & Related papers (2022-11-29T17:28:31Z)
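A CVAE, as used in the entry above, conditions both encoder and decoder on a condition vector (here, stand-in performance targets) so that sampling the latent space yields designs matching requested conditions. A minimal generic sketch; dimensions and loss weights are illustrative, not the bridge model's:

```python
# Generic conditional VAE sketch: condition c (e.g. performance targets)
# is concatenated to both the encoder input and the decoder input.
import torch
import torch.nn as nn

class CVAE(nn.Module):
    def __init__(self, x_dim=32, c_dim=4, z_dim=8, h=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim + c_dim, h), nn.ReLU())
        self.mu, self.logvar = nn.Linear(h, z_dim), nn.Linear(h, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim + c_dim, h), nn.ReLU(),
                                 nn.Linear(h, x_dim))

    def forward(self, x, c):
        h = self.enc(torch.cat([x, c], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        x_hat = self.dec(torch.cat([z, c], dim=-1))
        # Loss = reconstruction + KL(q(z|x,c) || N(0, I))
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return x_hat, ((x_hat - x) ** 2).mean() + 1e-3 * kl

# Design exploration: decode fresh z under a chosen condition vector.
model = CVAE()
c = torch.tensor([[1.0, 0.2, 0.0, 0.5]])            # hypothetical targets
new_design = model.dec(torch.cat([torch.randn(1, 8), c], dim=-1))
```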
- CreativeGAN: Editing Generative Adversarial Networks for Creative Design Synthesis [1.933681537640272]
This paper proposes an automated method, named CreativeGAN, for generating novel designs.
It does so by identifying components that make a design unique and modifying a GAN model such that it becomes more likely to generate designs with identified unique components.
Using a dataset of bicycle designs, we demonstrate that the method can create new bicycle designs with unique frames and handles, and can introduce rare novelties relative to a broad set of designs.
arXiv Detail & Related papers (2021-03-10T18:22:35Z)
- Interpretable Visual Reasoning via Induced Symbolic Space [75.95241948390472]
We study the problem of concept induction in visual reasoning, i.e., identifying concepts and their hierarchical relationships from question-answer pairs associated with images.
We first design a new framework named object-centric compositional attention model (OCCAM) to perform the visual reasoning task with object-level visual features.
We then come up with a method to induce concepts of objects and relations using clues from the attention patterns between objects' visual features and question words.
arXiv Detail & Related papers (2020-11-23T18:21:49Z)
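The attention patterns mentioned in the entry above are essentially cross-attention between question words and object features; a minimal single-head sketch (dimensions and the single-head form are simplifying assumptions, not OCCAM itself):

```python
# Generic single-head cross-attention: question words attend over objects.
# Peaks in `attn` hint at which object grounds which word, the kind of
# signal used to induce object and relation concepts.
import torch
import torch.nn.functional as F

def cross_attention(words, objects, d=64):
    """words: (T, d) question-word embeddings; objects: (N, d) object features."""
    scores = words @ objects.T / d ** 0.5   # (T, N) word-object affinities
    attn = F.softmax(scores, dim=-1)        # each word distributes over objects
    return attn @ objects, attn             # word-grounded object features

ctx, attn = cross_attention(torch.randn(7, 64), torch.randn(5, 64))
```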
This list is automatically generated from the titles and abstracts of the papers on this site.