Inkspire: Supporting Design Exploration with Generative AI through Analogical Sketching
- URL: http://arxiv.org/abs/2501.18588v1
- Date: Thu, 30 Jan 2025 18:59:04 GMT
- Title: Inkspire: Supporting Design Exploration with Generative AI through Analogical Sketching
- Authors: David Chuan-En Lin, Hyeonsu B. Kang, Nikolas Martelaro, Aniket Kittur, Yan-Ying Chen, Matthew K. Hong
- Abstract summary: Inkspire is a sketch-driven tool that supports designers in prototyping product design concepts.
In a study comparing Inkspire to ControlNet, we found that Inkspire supported designers with more inspiration and exploration of design ideas.
- Score: 16.33879333386818
- License:
- Abstract: With recent advancements in the capabilities of Text-to-Image (T2I) AI models, product designers have begun experimenting with them in their work. However, T2I models struggle to interpret abstract language and the current user experience of T2I tools can induce design fixation rather than a more iterative, exploratory process. To address these challenges, we developed Inkspire, a sketch-driven tool that supports designers in prototyping product design concepts with analogical inspirations and a complete sketch-to-design-to-sketch feedback loop. To inform the design of Inkspire, we conducted an exchange session with designers and distilled design goals for improving T2I interactions. In a within-subjects study comparing Inkspire to ControlNet, we found that Inkspire supported designers with more inspiration and exploration of design ideas, and improved aspects of the co-creative process by allowing designers to effectively grasp the current state of the AI to guide it towards novel design intentions.
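The study uses ControlNet as its comparison baseline. For readers unfamiliar with sketch-conditioned generation, the snippet below is a minimal sketch of how a scribble-conditioned ControlNet pipeline is commonly invoked with the Hugging Face diffusers library; the model checkpoints, prompt, and input file are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal sketch of sketch-conditioned text-to-image generation with ControlNet
# via Hugging Face diffusers. Model IDs, prompt, and input file are illustrative,
# not the exact setup used in the Inkspire study.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Scribble-conditioned ControlNet checkpoint paired with Stable Diffusion 1.5.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# A rough product sketch (white strokes on black) drives the image structure,
# while the text prompt supplies the concept, e.g. an analogical inspiration.
sketch = load_image("kettle_sketch.png")  # hypothetical input file
image = pipe(
    prompt="a kettle inspired by a penguin, industrial design rendering",
    image=sketch,
    num_inference_steps=30,
).images[0]
image.save("kettle_concept.png")
```

In this interaction pattern the sketch fixes layout while the prompt carries semantics, which is the coupling the paper's sketch-to-design-to-sketch loop builds on.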
Related papers
- Empowering Clients: Transformation of Design Processes Due to Generative AI [1.4003044924094596]
The study reveals that AI can disrupt the ideation phase by enabling clients to engage in the design process through rapid visualization of their own ideas.
Our study shows that while AI can provide valuable feedback on designs, it may fail to generate such designs itself, which suggests interesting connections to foundational ideas in computer science.
Our study also reveals that there is uncertainty among architects about the interpretative sovereignty of architecture and loss of meaning and identity when AI increasingly takes over authorship in the design process.
arXiv Detail & Related papers (2024-11-22T16:48:15Z)
- Sketch2Code: Evaluating Vision-Language Models for Interactive Web Design Prototyping [55.98643055756135]
We introduce Sketch2Code, a benchmark that evaluates state-of-the-art Vision Language Models (VLMs) on automating the conversion of rudimentary sketches into webpage prototypes.
We analyze ten commercial and open-source models, showing that Sketch2Code is challenging for existing VLMs.
A user study with UI/UX experts reveals a significant preference for proactive question-asking over passive feedback reception.
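Sketch2Code's own evaluation harness is not reproduced here, but a minimal, assumed sketch of the underlying task, prompting a vision-language model with a wireframe image and asking for an HTML prototype, might look like the following; the model name, prompt, and file paths are placeholders.

```python
# Hypothetical sketch of the sketch-to-webpage task: send a wireframe image to a
# vision-language model and ask for an HTML/CSS prototype. Model name, prompt,
# and file paths are assumptions, not the Sketch2Code benchmark harness.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("wireframe_sketch.png", "rb") as f:
    sketch_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Convert this hand-drawn wireframe into a single-file "
                     "HTML page with inline CSS. Return only the code."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{sketch_b64}"}},
        ],
    }],
)
html = response.choices[0].message.content
with open("prototype.html", "w") as f:
    f.write(html)
```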
arXiv Detail & Related papers (2024-10-21T17:39:49Z)
- Inspired by AI? A Novel Generative AI System To Assist Conceptual Automotive Design [6.001793288867721]
Design inspiration is crucial for establishing the direction of a design as well as evoking feelings and conveying meanings during the conceptual design process.
Many practicing designers use text-based searches on platforms like Pinterest to gather image ideas, followed by sketching on paper or using digital tools to develop concepts.
Emerging generative AI techniques, such as diffusion models, offer a promising avenue to streamline these processes by swiftly generating design concepts based on text and image inspiration inputs.
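As a rough illustration of combining a text prompt with an image inspiration through a diffusion model, the sketch below uses a standard image-to-image pipeline from the diffusers library; the checkpoint, strength, and inputs are assumptions, not the system described in the paper.

```python
# Illustrative only: combining an inspiration image with a text prompt through a
# standard image-to-image diffusion pipeline. Not the paper's actual system.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

inspiration = load_image("moodboard_image.jpg")  # hypothetical inspiration image
concept = pipe(
    prompt="futuristic electric car exterior, concept sketch rendering",
    image=inspiration,
    strength=0.6,        # how far the output may drift from the inspiration image
    guidance_scale=7.5,
).images[0]
concept.save("car_concept.png")
```

Lower `strength` values keep the result close to the inspiration; higher values let the text prompt dominate, which is the basic trade-off behind image-plus-text concept generation.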
arXiv Detail & Related papers (2024-06-06T17:04:14Z)
- Automatic Layout Planning for Visually-Rich Documents with Instruction-Following Models [81.6240188672294]
In graphic design, non-professional users often struggle to create visually appealing layouts due to limited skills and resources.
We introduce a novel multimodal instruction-following framework for layout planning, allowing users to easily arrange visual elements into tailored layouts.
Our method not only simplifies the design process for non-professionals but also surpasses the performance of few-shot GPT-4V models, with a 12% higher mIoU on Crello.
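The mIoU figure measures overlap between generated and reference element placements. A minimal sketch of computing mean IoU over matched layout bounding boxes is shown below; the element matching and coordinates are assumed to be given.

```python
# Minimal sketch of mean IoU over matched layout elements. Assumes each
# predicted box is already paired with its ground-truth counterpart;
# boxes are (x1, y1, x2, y2) in the same coordinate space.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def mean_iou(predicted, ground_truth):
    """Average IoU over element pairs in a generated vs. reference layout."""
    scores = [iou(p, g) for p, g in zip(predicted, ground_truth)]
    return sum(scores) / len(scores) if scores else 0.0

# Example: two elements of a generated layout vs. the reference layout.
pred = [(10, 10, 110, 60), (10, 80, 210, 140)]
ref = [(12, 8, 112, 62), (10, 90, 200, 150)]
print(f"mIoU = {mean_iou(pred, ref):.3f}")
```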
arXiv Detail & Related papers (2024-04-23T17:58:33Z)
- I-Design: Personalized LLM Interior Designer [57.00412237555167]
I-Design is a personalized interior designer that allows users to generate and visualize their design goals through natural language communication.
I-Design starts with a team of large language model agents that engage in dialogues and logical reasoning with one another.
The final design is then constructed in 3D by retrieving and integrating assets from an existing object database.
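The abstract outlines a two-stage pipeline: agent dialogue, then asset retrieval. A highly simplified, assumed sketch of that control flow is given below; the agent roles, prompts, and toy asset database are placeholders, not I-Design's actual implementation.

```python
# Highly simplified, assumed sketch of the two-stage flow described above:
# (1) LLM agents converse to settle on a furniture plan, (2) each item is
# matched to an asset in an existing object database. Roles, prompts, and the
# database are placeholders, not I-Design's actual implementation.
from openai import OpenAI

client = OpenAI()

def agent_reply(role_description, transcript):
    """One dialogue turn by an agent with the given role."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": role_description},
                  {"role": "user", "content": transcript}],
    )
    return response.choices[0].message.content

user_brief = "A cozy 4x5 m reading room with warm lighting."
transcript = f"Client brief: {user_brief}"
roles = ["You are an interior designer proposing a furniture list.",
         "You are a critic checking the proposal against the brief and room size."]
for _ in range(2):  # a couple of proposal/critique rounds
    for role in roles:
        transcript += "\n" + agent_reply(role, transcript)

# Toy asset database keyed by object category; a real system would retrieve and
# place 3D models in the scene.
asset_db = {"armchair": "assets/armchair_03.glb",
            "bookshelf": "assets/shelf_12.glb",
            "floor_lamp": "assets/lamp_07.glb"}
scene = [asset_db[name] for name in ("armchair", "bookshelf", "floor_lamp")]
print(scene)
```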
arXiv Detail & Related papers (2024-04-03T16:17:53Z)
- DesignGPT: Multi-Agent Collaboration in Design [4.6272626111555955]
DesignGPT uses artificial intelligence agents to simulate the roles of different positions in a design company and allows human designers to collaborate with them in natural language.
Experimental results show that compared with separate AI tools, DesignGPT improves the performance of designers.
arXiv Detail & Related papers (2023-11-20T08:05:52Z)
- DEsignBench: Exploring and Benchmarking DALL-E 3 for Imagining Visual Design [124.56730013968543]
We introduce DEsignBench, a text-to-image (T2I) generation benchmark tailored for visual design scenarios.
For DEsignBench benchmarking, we perform human evaluations on generated images against the criteria of image-text alignment, visual aesthetic, and design creativity.
In addition to human evaluations, we introduce the first automatic image generation evaluator powered by GPT-4V.
arXiv Detail & Related papers (2023-10-23T17:48:38Z)
- Experiments on Generative AI-Powered Parametric Modeling and BIM for Architectural Design [4.710049212041078]
This study explores the potential of ChatGPT and generative AI in 3D architectural design.
The framework provides architects with an intuitive and powerful method to convey design intent.
arXiv Detail & Related papers (2023-08-01T01:51:59Z)
- Evaluation of Sketch-Based and Semantic-Based Modalities for Mockup Generation [15.838427479984926]
Design mockups are essential instruments for visualizing and testing design ideas.
We present and evaluate two different modalities for generating mockups based on hand-drawn sketches.
Our results show that sketch-based generation was more intuitive and expressive, while semantic-based generative AI obtained better results in terms of quality and fidelity.
arXiv Detail & Related papers (2023-03-22T16:47:36Z)
- Design Space Exploration and Explanation via Conditional Variational Autoencoders in Meta-model-based Conceptual Design of Pedestrian Bridges [52.77024349608834]
This paper provides a performance-driven design exploration framework that augments the human designer through a Conditional Variational Autoencoder (CVAE).
The CVAE is trained on 18,000 synthetically generated instances of a pedestrian bridge in Switzerland.
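For readers unfamiliar with the architecture, the snippet below is a compact, assumed sketch of a conditional VAE in PyTorch, where design parameters are encoded and decoded conditioned on a performance/context vector; the dimensions and the bridge-parameter interpretation are illustrative, not taken from the paper.

```python
# Compact, assumed sketch of a Conditional VAE: design parameters x are encoded
# and decoded conditioned on a performance/context vector c. Dimensions and the
# bridge-parameter interpretation are illustrative, not the paper's model.
import torch
import torch.nn as nn

class CVAE(nn.Module):
    def __init__(self, x_dim=20, c_dim=4, z_dim=8, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(x_dim + c_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * z_dim),  # outputs mean and log-variance
        )
        self.decoder = nn.Sequential(
            nn.Linear(z_dim + c_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, x_dim),
        )
        self.z_dim = z_dim

    def forward(self, x, c):
        mu, logvar = self.encoder(torch.cat([x, c], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        x_hat = self.decoder(torch.cat([z, c], dim=-1))
        return x_hat, mu, logvar

def loss_fn(x, x_hat, mu, logvar):
    recon = ((x - x_hat) ** 2).sum(dim=-1).mean()
    kld = (-0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=-1)).mean()
    return recon + kld

# Design exploration: sample new designs for a chosen performance condition c.
model = CVAE()
c = torch.tensor([[0.8, 0.2, 0.5, 0.1]])  # hypothetical target condition
z = torch.randn(16, model.z_dim)
designs = model.decoder(torch.cat([z, c.expand(16, -1)], dim=-1))
print(designs.shape)  # torch.Size([16, 20])
```

Sampling the decoder under a fixed condition is what turns the trained model into an exploration tool: varying the latent code yields alternative designs that all target the same performance requirement.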
arXiv Detail & Related papers (2022-11-29T17:28:31Z)
- Investigating Positive and Negative Qualities of Human-in-the-Loop Optimization for Designing Interaction Techniques [55.492211642128446]
Designers reportedly struggle with design optimization tasks where they are asked to find a combination of design parameters that maximizes a given set of objectives.
Model-based computational design algorithms assist designers by generating design examples during the design process.
Black-box assistance methods, on the other hand, can work with any design problem.
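As a concrete illustration of black-box assistance, the sketch below runs a simple random-search optimizer over interaction-design parameters against a stand-in objective; the parameter ranges and the objective function are invented for illustration and are not taken from the paper.

```python
# Illustrative black-box optimization over interaction-design parameters using
# plain random search. The parameter ranges and the objective (a stand-in for a
# user-performance model or a human-in-the-loop rating) are invented.
import random

PARAM_RANGES = {  # hypothetical design parameters for a pointing technique
    "cursor_gain": (0.5, 3.0),
    "target_snap_radius_px": (0.0, 40.0),
    "dwell_time_ms": (100.0, 800.0),
}

def objective(params):
    """Stand-in score; in practice this would be a user model or a trial with a person."""
    return -(abs(params["cursor_gain"] - 1.5)
             + abs(params["target_snap_radius_px"] - 20.0) / 20.0
             + abs(params["dwell_time_ms"] - 300.0) / 300.0)

def random_search(n_iters=200, seed=0):
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_iters):
        candidate = {k: rng.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}
        score = objective(candidate)  # only function evaluations are required
        if score > best_score:
            best_params, best_score = candidate, score
    return best_params, best_score

print(random_search())
```

Because the optimizer only needs function evaluations, the objective can be swapped for any design problem, which is the generality the abstract refers to.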
arXiv Detail & Related papers (2022-04-15T20:40:43Z)