Evaluation of Sketch-Based and Semantic-Based Modalities for Mockup Generation
- URL: http://arxiv.org/abs/2303.12709v1
- Date: Wed, 22 Mar 2023 16:47:36 GMT
- Title: Evaluation of Sketch-Based and Semantic-Based Modalities for Mockup Generation
- Authors: Tommaso Calò and Luigi De Russis
- Abstract summary: Design mockups are essential instruments for visualizing and testing design ideas.
We present and evaluate two different modalities for generating mockups based on hand-drawn sketches.
Our results show that sketch-based generation was more intuitive and expressive, while semantic-based generative AI obtained better results in terms of quality and fidelity.
- Score: 15.838427479984926
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Design mockups are essential instruments for visualizing and testing design
ideas. However, the process of generating mockups can be time-consuming and
challenging for designers. In this article, we present and evaluate two
different modalities for generating mockup ideas to support designers in their
work: (1) a sketch-based approach to generate mockups based on hand-drawn
sketches, and (2) a semantic-based approach to generate interfaces based on a
set of predefined design elements. To evaluate the effectiveness of these two
approaches, we conducted a series of experiments with 13 participants in which
we asked them to generate mockups using each modality. Our results show that
sketch-based generation was more intuitive and expressive, while semantic-based
generative AI obtained better results in terms of quality and fidelity. Both
methods can be valuable tools for UI designers looking to increase their
creativity and efficiency.
Related papers
- Sketch2Code: Evaluating Vision-Language Models for Interactive Web Design Prototyping [55.98643055756135]
We introduce Sketch2Code, a benchmark that evaluates state-of-the-art Vision Language Models (VLMs) on automating the conversion of rudimentary sketches into webpage prototypes.
We analyze ten commercial and open-source models, showing that Sketch2Code is challenging for existing VLMs.
A user study with UI/UX experts reveals a significant preference for proactive question-asking over passive feedback reception.
arXiv Detail & Related papers (2024-10-21T17:39:49Z)
- MetaDesigner: Advancing Artistic Typography through AI-Driven, User-Centric, and Multilingual WordArt Synthesis [65.78359025027457]
MetaDesigner revolutionizes artistic typography by leveraging the strengths of Large Language Models (LLMs) to drive a design paradigm centered around user engagement.
A comprehensive feedback mechanism harnesses insights from multimodal models and user evaluations to refine and enhance the design process iteratively.
Empirical validations highlight MetaDesigner's capability to effectively serve diverse WordArt applications, consistently producing aesthetically appealing and context-sensitive results.
arXiv Detail & Related papers (2024-06-28T11:58:26Z)
- Human Machine Co-Creation. A Complementary Cognitive Approach to Creative Character Design Process Using GANs [0.0]
Two neural networks compete to generate new visual content indistinguishable from the original dataset.
The proposed approach aims to inform the process of perceiving, knowing, and making.
The machine generated concepts are used as a launching platform for character designers to conceptualize new characters.
arXiv Detail & Related papers (2023-11-23T12:18:39Z)
- DEsignBench: Exploring and Benchmarking DALL-E 3 for Imagining Visual Design [124.56730013968543]
We introduce DEsignBench, a text-to-image (T2I) generation benchmark tailored for visual design scenarios.
For DEsignBench benchmarking, we perform human evaluations on generated images against the criteria of image-text alignment, visual aesthetic, and design creativity.
In addition to human evaluations, we introduce the first automatic image generation evaluator powered by GPT-4V.
arXiv Detail & Related papers (2023-10-23T17:48:38Z)
- Creating User Interface Mock-ups from High-Level Text Descriptions with Deep-Learning Models [19.63933191791183]
We introduce three deep-learning techniques to create low-fidelity UI mock-ups from a natural language phrase.
We quantitatively and qualitatively compare and contrast each method's ability in suggesting coherent, diverse and relevant UI design mock-ups.
arXiv Detail & Related papers (2021-10-14T23:48:46Z)
- IMAGINE: Image Synthesis by Image-Guided Model Inversion [79.4691654458141]
We introduce an inversion based method, denoted as IMAge-Guided model INvErsion (IMAGINE), to generate high-quality and diverse images.
We leverage the knowledge of image semantics from a pre-trained classifier to achieve plausible generations.
IMAGINE enables the synthesis procedure to simultaneously 1) enforce semantic specificity constraints during the synthesis, 2) produce realistic images without generator training, and 3) give users intuitive control over the generation process.
arXiv Detail & Related papers (2021-04-13T02:00:24Z)
- Unadversarial Examples: Designing Objects for Robust Vision [100.4627585672469]
We develop a framework that exploits the sensitivity of modern machine learning algorithms to input perturbations in order to design "robust objects".
We demonstrate the efficacy of the framework on a wide variety of vision-based tasks ranging from standard benchmarks to (in-simulation) robotics.
arXiv Detail & Related papers (2020-06-17T11:34:36Z)
- Sketch-Guided Scenery Image Outpainting [83.6612152173028]
We propose an encoder-decoder based network to conduct sketch-guided outpainting.
We apply a holistic alignment module to make the synthesized part similar to the real one from a global view.
Second, we reversely produce sketches from the synthesized part and encourage them to be consistent with the ground-truth ones.
arXiv Detail & Related papers (2020-05-15T11:40:53Z)
- Evaluating Mixed-Initiative Procedural Level Design Tools using a Triple-Blind Mixed-Method User Study [0.0]
A tool which generates levels using interactive evolutionary optimisation was designed for this study.
The tool identifies level design patterns in an initial hand-designed map and uses that information to drive an interactive optimisation algorithm.
A rigorous user study was designed which compared the experiences of designers using the mixed-initiative tool to designers who were given a tool which provided completely random level suggestions.
arXiv Detail & Related papers (2020-05-15T11:40:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.