BlackBox Toolkit: Intelligent Assistance to UI Design
- URL: http://arxiv.org/abs/2004.01949v2
- Date: Tue, 7 Apr 2020 13:30:41 GMT
- Title: BlackBox Toolkit: Intelligent Assistance to UI Design
- Authors: Vinoth Pandian Sermuga Pandian, Sarah Suleri
- Abstract summary: We propose to modify the UI design process by assisting it with artificial intelligence (AI).
We propose to enable AI to perform repetitive tasks for the designer while allowing the designer to take command of the creative process.
- Score: 9.749560288448114
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: User Interface (UI) design is a creative process that involves considerable
reiteration and rework. Designers go through multiple iterations of different
prototyping fidelities to create a UI design. In this research, we propose to
modify the UI design process by assisting it with artificial intelligence (AI).
We propose to enable AI to perform repetitive tasks for the designer while
allowing the designer to take command of the creative process. This approach
makes the machine act as a black box that intelligently assists the designers
in creating UI designs. We believe this approach would greatly benefit designers
in co-creating design solutions with AI.
Related papers
- Empowering Clients: Transformation of Design Processes Due to Generative AI [1.4003044924094596]
The study reveals that AI can disrupt the ideation phase by enabling clients to engage in the design process through rapid visualization of their own ideas.
Our study shows that while AI can provide valuable feedback on designs, it might fail to generate such designs, allowing for interesting connections to foundations in computer science.
Our study also reveals that there is uncertainty among architects about the interpretative sovereignty of architecture and loss of meaning and identity when AI increasingly takes over authorship in the design process.
arXiv Detail & Related papers (2024-11-22T16:48:15Z)
- On AI-Inspired UI-Design [5.969881132928718]
We discuss three major complementary approaches on how to use Artificial Intelligence (AI) to support app designers create better, more diverse, and creative UI of mobile apps.
First, designers can prompt a Large Language Model (LLM) like GPT to directly generate and adjust one or multiple UIs.
Second, a Vision-Language Model (VLM) enables designers to effectively search a large screenshot dataset, e.g. from apps published in app stores.
Third, a Diffusion Model (DM), specifically designed to generate app UIs, can serve as a source of inspirational images.
arXiv Detail & Related papers (2024-06-19T15:28:21Z)
- Automatic Layout Planning for Visually-Rich Documents with Instruction-Following Models [81.6240188672294]
In graphic design, non-professional users often struggle to create visually appealing layouts due to limited skills and resources.
We introduce a novel multimodal instruction-following framework for layout planning, allowing users to easily arrange visual elements into tailored layouts.
Our method not only simplifies the design process for non-professionals but also surpasses the performance of few-shot GPT-4V models, with mIoU higher by 12% on Crello.
arXiv Detail & Related papers (2024-04-23T17:58:33Z)
- I-Design: Personalized LLM Interior Designer [57.00412237555167]
I-Design is a personalized interior designer that allows users to generate and visualize their design goals through natural language communication.
I-Design starts with a team of large language model agents that engage in dialogues and logical reasoning with one another.
The final design is then constructed in 3D by retrieving and integrating assets from an existing object database.
arXiv Detail & Related papers (2024-04-03T16:17:53Z)
- Compositional Generative Inverse Design [69.22782875567547]
Inverse design, where we seek to design input variables in order to optimize an underlying objective function, is an important problem.
We show that by instead optimizing over the learned energy function captured by the diffusion model, rather than directly over a learned forward model, we can avoid the adversarial examples such direct optimization tends to produce.
In an N-body interaction task and a challenging 2D multi-airfoil design task, we demonstrate that by composing the learned diffusion model at test time, our method allows us to design initial states and boundary shapes.
arXiv Detail & Related papers (2024-01-24T01:33:39Z)
- PromptInfuser: How Tightly Coupling AI and UI Design Impacts Designers' Workflows [23.386764579779538]
We investigate how coupling prompt and UI design affects designers' AI iteration.
Grounding this research, we developed PromptInfuser, a Figma plugin that enables users to create mockups.
In a study with 14 designers, we compare PromptInfuser to designers' current AI-prototyping workflow.
arXiv Detail & Related papers (2023-10-24T01:04:27Z)
- Exploring Challenges and Opportunities to Support Designers in Learning to Co-create with AI-based Manufacturing Design Tools [31.685493295306387]
AI-based design tools are proliferating in professional software to assist engineering and industrial designers in complex manufacturing and design tasks.
These tools take on more agentic roles than traditional computer-aided design tools and are often portrayed as "co-creators".
To date, we know little about how engineering designers learn to work with AI-based design tools.
arXiv Detail & Related papers (2023-03-01T02:57:05Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Investigating Positive and Negative Qualities of Human-in-the-Loop Optimization for Designing Interaction Techniques [55.492211642128446]
Designers reportedly struggle with design optimization tasks where they are asked to find a combination of design parameters that maximizes a given set of objectives.
Model-based computational design algorithms assist designers by generating design examples during the design process.
Black box methods for assistance, on the other hand, can work with any design problem.
arXiv Detail & Related papers (2022-04-15T20:40:43Z)
- Toward AI Assistants That Let Designers Design [0.0]
AI for supporting designers needs to be rethought. It should aim to cooperate, not automate, by supporting and leveraging the creativity and problem-solving of designers.
We present AI-assisted design: a framework for creating such AI, built around generative user models which enable reasoning about designers' goals, reasoning, and capabilities.
arXiv Detail & Related papers (2021-07-22T10:29:36Z)
- VINS: Visual Search for Mobile User Interface Design [66.28088601689069]
This paper introduces VINS, a visual search framework that takes a UI image as input and retrieves visually similar design examples (a generic sketch of such embedding-based retrieval follows this entry).
The framework achieves a mean Average Precision of 76.39% for UI detection and high performance in querying similar UI designs.
arXiv Detail & Related papers (2021-02-10T01:46:33Z)
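The snippet below is a minimal sketch of the general idea behind this kind of visual search: embed UI screenshots with an off-the-shelf image encoder and rank a gallery by cosine similarity. It is not the VINS pipeline itself (the paper detects UI elements and learns a UI-specific similarity model); the ResNet-50 backbone and the file paths (designs/*.png, query_ui.png) are illustrative assumptions.

```python
# Hedged sketch of embedding-based visual search over UI screenshots.
# NOT the VINS method from the paper; a generic stand-in to illustrate
# "query screenshot in -> visually similar designs out".
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Generic ImageNet-pretrained backbone used as a feature extractor (assumption).
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # keep the 2048-d pooled features
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(path: str) -> torch.Tensor:
    """Map a UI screenshot to a unit-length embedding vector."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    feat = backbone(img).squeeze(0)
    return feat / feat.norm()

# Hypothetical gallery of existing design screenshots to search over.
gallery_paths = ["designs/login.png", "designs/feed.png", "designs/profile.png"]
gallery = torch.stack([embed(p) for p in gallery_paths])

def search(query_path: str, k: int = 2):
    """Return the k gallery screenshots most similar to the query UI."""
    scores = gallery @ embed(query_path)  # cosine similarity of unit vectors
    top = torch.topk(scores, k=min(k, len(gallery_paths)))
    return [(gallery_paths[int(i)], float(scores[int(i)])) for i in top.indices]

print(search("query_ui.png"))  # hypothetical query screenshot
```

A production system along these lines would swap the generic backbone for a UI-aware model and use an approximate nearest-neighbour index so that queries over a large screenshot corpus stay fast.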
This list is automatically generated from the titles and abstracts of the papers on this site.