UI Layout Generation with LLMs Guided by UI Grammar
- URL: http://arxiv.org/abs/2310.15455v1
- Date: Tue, 24 Oct 2023 02:00:12 GMT
- Title: UI Layout Generation with LLMs Guided by UI Grammar
- Authors: Yuwen Lu, Ziang Tong, Qinyi Zhao, Chengzhi Zhang, Toby Jia-Jun Li
- Abstract summary: Recent advances in Large Language Models (LLMs) have stimulated interest among researchers and industry professionals, particularly for tasks involving mobile user interfaces (UIs).
This paper proposes the introduction of UI grammar -- a novel approach to represent the hierarchical structure inherent in UI screens.
The aim of this approach is to guide the generative capacities of LLMs more effectively and improve the explainability and controllability of the process.
- Score: 13.172638190095395
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The recent advances in Large Language Models (LLMs) have stimulated interest
among researchers and industry professionals, particularly in their application
to tasks concerning mobile user interfaces (UIs). This position paper
investigates the use of LLMs for UI layout generation. Central to our
exploration is the introduction of UI grammar -- a novel approach we proposed
to represent the hierarchical structure inherent in UI screens. The aim of this
approach is to guide the generative capacities of LLMs more effectively and
improve the explainability and controllability of the process. Initial
experiments conducted with GPT-4 showed the promising capability of LLMs to
produce high-quality user interfaces via in-context learning. Furthermore, our
preliminary comparative study suggested the potential of the grammar-based
approach in improving the quality of generative results in specific aspects.
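As a minimal sketch of what a hierarchical UI grammar might look like, the toy production rules below expand a screen into nested containers and terminal UI elements. The rule names and productions are invented for illustration; they are not the grammar proposed in the paper.

```python
import random

# Toy "UI grammar": each non-terminal maps to alternative lists of
# children, capturing the hierarchical containment structure of a
# mobile UI screen. Symbols absent from the table are terminals.
UI_GRAMMAR = {
    "SCREEN":    [["TOOLBAR", "CONTENT"], ["CONTENT", "NAV_BAR"]],
    "TOOLBAR":   [["IMAGE", "TEXT"], ["TEXT", "BUTTON"]],
    "CONTENT":   [["LIST"], ["CARD", "CARD"]],
    "LIST":      [["LIST_ITEM", "LIST_ITEM", "LIST_ITEM"]],
    "LIST_ITEM": [["IMAGE", "TEXT"]],
    "CARD":      [["IMAGE", "TEXT", "BUTTON"]],
    "NAV_BAR":   [["BUTTON", "BUTTON", "BUTTON"]],
}

def expand(symbol, rng):
    """Recursively expand a grammar symbol into a layout tree."""
    rules = UI_GRAMMAR.get(symbol)
    if rules is None:          # terminal UI element
        return symbol
    children = rng.choice(rules)
    return {symbol: [expand(child, rng) for child in children]}

layout = expand("SCREEN", random.Random(0))
print(layout)
```

In the paper's setting, rules like these (mined from real UI screens) constrain or guide the LLM's output rather than sampling layouts directly, which is where the claimed gains in explainability and controllability come from.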
Related papers
- Enhancing Reasoning to Adapt Large Language Models for Domain-Specific Applications [4.122613733775677]
SOLOMON is a novel Neuro-inspired Large Language Model (LLM) Reasoning Network architecture.
We show how SOLOMON enables swift adaptation of general-purpose LLMs to specialized tasks by leveraging Prompt Engineering and In-Context Learning techniques.
Results show that SOLOMON instances significantly outperform their baseline LLM counterparts and achieve performance comparable to a state-of-the-art reasoning model, o1-preview.
arXiv Detail & Related papers (2025-02-05T19:27:24Z)
- Leveraging Multimodal LLM for Inspirational User Interface Search [12.470067381902972]
Existing AI-based UI search methods often miss crucial semantics like target users or the mood of apps.
We used a multimodal large language model (MLLM) to extract and interpret semantics from mobile UI images.
Our approach significantly outperforms existing UI retrieval methods, offering UI designers a more enriched and contextually relevant search experience.
arXiv Detail & Related papers (2025-01-29T17:38:39Z)
- Vector-ICL: In-context Learning with Continuous Vector Representations [75.96920867382859]
Large language models (LLMs) have shown remarkable in-context learning capabilities on textual data.
We explore whether these capabilities can be extended to continuous vectors from diverse domains, obtained from black-box pretrained encoders.
In particular, we find that pretraining projectors with general language modeling objectives enables Vector-ICL.
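The core mechanism here can be sketched as a learned projector mapping an encoder's output vector into the LLM's token-embedding space, where it is spliced into the prompt as a "soft token". The dimensions and the linear form below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

ENC_DIM, LLM_DIM = 768, 4096   # example encoder / LLM embedding sizes

rng = np.random.default_rng(0)
# In Vector-ICL this projector would be pretrained (e.g. with a
# language-modeling objective); here it is just randomly initialized.
W = rng.standard_normal((ENC_DIM, LLM_DIM)) * 0.02

def project(encoder_vec: np.ndarray) -> np.ndarray:
    """Map a black-box encoder vector into the LLM embedding space."""
    return encoder_vec @ W

soft_token = project(rng.standard_normal(ENC_DIM))
print(soft_token.shape)
```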
arXiv Detail & Related papers (2024-10-08T02:25:38Z)
- Making Text Embedders Few-Shot Learners [33.50993377494602]
We introduce bge-en-icl, a novel model that employs few-shot examples to produce high-quality text embeddings.
Our approach integrates task-related examples directly into the query side, resulting in significant improvements across various tasks.
Experimental results on the MTEB and AIR-Bench benchmarks demonstrate that our approach sets new state-of-the-art (SOTA) performance.
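Integrating examples "into the query side" can be sketched as simple prompt construction before embedding: task description and demonstrations are prepended to the query text. The template and function name below are invented for illustration, not the model's actual format.

```python
def build_icl_query(task_desc, examples, query):
    """Fold a task description and (query, response) demonstrations
    into a single query string, which is then fed to the embedder."""
    shots = "\n\n".join(f"Query: {q}\nResponse: {r}" for q, r in examples)
    return f"Instruct: {task_desc}\n\n{shots}\n\nQuery: {query}\nResponse:"

prompt = build_icl_query(
    "Retrieve passages relevant to the question.",
    [("What is an LLM?", "A large language model trained on text.")],
    "How do few-shot examples help embeddings?",
)
print(prompt)
```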
arXiv Detail & Related papers (2024-09-24T03:30:19Z)
- Instruction Finetuning for Leaderboard Generation from Empirical AI Research [0.16114012813668935]
This study demonstrates the application of instruction finetuning of Large Language Models (LLMs) to automate the generation of AI research leaderboards.
It aims to streamline the dissemination of advancements in AI research by moving away from traditional, manual community curation toward automated leaderboard generation.
arXiv Detail & Related papers (2024-08-19T16:41:07Z)
- LEARN: Knowledge Adaptation from Large Language Model to Recommendation for Practical Industrial Application [54.984348122105516]
We propose an Llm-driven knowlEdge Adaptive RecommeNdation (LEARN) framework that synergizes open-world knowledge with collaborative knowledge.
arXiv Detail & Related papers (2024-05-07T04:00:30Z)
- Large Language Models for the Automated Analysis of Optimization Algorithms [0.9668407688201361]
We aim to demonstrate the potential of Large Language Models (LLMs) within the realm of optimization algorithms by integrating them into STNWeb.
This is a web-based tool for the generation of Search Trajectory Networks (STNs), which are visualizations of optimization algorithm behavior.
arXiv Detail & Related papers (2024-02-13T14:05:02Z)
- Large Language Models can Contrastively Refine their Generation for Better Sentence Representation Learning [57.74233319453229]
Large language models (LLMs) have emerged as a groundbreaking technology and their unparalleled text generation capabilities have sparked interest in their application to the fundamental sentence representation learning task.
We propose MultiCSR, a multi-level contrastive sentence representation learning framework that decomposes the process of prompting LLMs to generate a corpus.
Our experiments reveal that MultiCSR enables a less advanced LLM to surpass the performance of ChatGPT, while applying it to ChatGPT achieves better state-of-the-art results.
arXiv Detail & Related papers (2023-10-17T03:21:43Z)
- Exploring Large Language Model for Graph Data Understanding in Online Job Recommendations [63.19448893196642]
We present a novel framework that harnesses the rich contextual information and semantic representations provided by large language models to analyze behavior graphs.
By leveraging this capability, our framework enables personalized and accurate job recommendations for individual users.
arXiv Detail & Related papers (2023-07-10T11:29:41Z)
- Recommender Systems in the Era of Large Language Models (LLMs) [62.0129013439038]
Large Language Models (LLMs) have revolutionized the fields of Natural Language Processing (NLP) and Artificial Intelligence (AI).
We conduct a comprehensive review of LLM-empowered recommender systems from various aspects including Pre-training, Fine-tuning, and Prompting.
arXiv Detail & Related papers (2023-07-05T06:03:40Z)
- Unlocking the Potential of User Feedback: Leveraging Large Language Model as User Simulator to Enhance Dialogue System [65.93577256431125]
We propose an alternative approach, User-Guided Response Optimization (UGRO), which combines an LLM with a smaller task-oriented dialogue (TOD) model.
This approach uses the LLM as an annotation-free user simulator to assess dialogue responses, pairing it with smaller fine-tuned end-to-end TOD models.
Our approach outperforms previous state-of-the-art (SOTA) results.
arXiv Detail & Related papers (2023-06-16T13:04:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.