Unlocking Adaptive User Experience with Generative AI
- URL: http://arxiv.org/abs/2404.05442v1
- Date: Mon, 8 Apr 2024 12:22:39 GMT
- Title: Unlocking Adaptive User Experience with Generative AI
- Authors: Yutan Huang, Tanjila Kanij, Anuradha Madugalla, Shruti Mahajan, Chetan Arora, John Grundy
- Abstract summary: We develop user personas and adaptive interfaces using both ChatGPT and a traditional manual process.
To build the personas, we collected data from 37 survey participants and 4 interviews.
The comparison of ChatGPT generated content and manual content indicates promising results.
- Score: 8.578448990789965
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Developing user-centred applications that address diverse user needs requires rigorous user research, which is time-, effort- and cost-intensive. With the recent rise of generative AI techniques based on Large Language Models (LLMs), there is a possibility that these powerful tools can be used to develop adaptive interfaces. This paper presents a novel approach to developing user personas and adaptive interface candidates for a specific domain using ChatGPT. We develop user personas and adaptive interfaces using both ChatGPT and a traditional manual process and compare the outcomes. To obtain data for the personas, we conducted a survey with 37 participants and 4 interviews in collaboration with a not-for-profit organisation. The comparison of ChatGPT-generated content and manual content indicates promising results that encourage using LLMs in the adaptive interface design process.
Related papers
- Search-Based Interaction For Conversation Recommendation via Generative Reward Model Based Simulated User [117.82681846559909]
Conversational recommendation systems (CRSs) use multi-turn interaction to capture user preferences and provide personalized recommendations.
We propose a generative reward model based simulated user, named GRSU, for automatic interaction with CRSs.
arXiv Detail & Related papers (2025-04-29T06:37:30Z) - Know Me, Respond to Me: Benchmarking LLMs for Dynamic User Profiling and Personalized Responses at Scale [51.9706400130481]
Large Language Models (LLMs) have emerged as personalized assistants for users across a wide range of tasks.
PERSONAMEM features curated user profiles with over 180 simulated user-LLM interaction histories.
We evaluate LLM chatbots' ability to identify the most suitable response according to the current state of the user's profile.
arXiv Detail & Related papers (2025-04-19T08:16:10Z) - Adaptive and Accessible User Interfaces for Seniors Through Model-Driven Engineering [4.220379425971002]
AdaptForge is a novel model-driven engineering (MDE)-based approach to support sophisticated adaptations of Flutter app user interfaces and behaviour.
We explain how AdaptForge employs Domain-Specific Languages to capture seniors' context-of-use scenarios.
We report on evaluations conducted with real-world Flutter developers to demonstrate the promise and practical applicability of AdaptForge.
arXiv Detail & Related papers (2025-02-26T05:03:22Z) - Dynamic User Interface Generation for Enhanced Human-Computer Interaction Using Variational Autoencoders [4.1676654279172265]
This study presents a novel approach for intelligent user interaction interface generation and optimization, grounded in the variational autoencoder (VAE) model.
The VAE-based approach significantly enhances the quality and precision of interface generation compared to other methods, including autoencoders (AE), generative adversarial networks (GAN), conditional GANs (cGAN), deep belief networks (DBN), and VAE-GAN.
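The listing only summarizes this paper, but the two pieces of VAE machinery it relies on, the reparameterization trick and the KL regularizer, can be sketched in a few lines. This is a minimal, illustrative instantiation (not the paper's implementation); the function names and diagonal-Gaussian assumption are ours.

```python
import math
import random

def reparameterize(mu, log_var, rng=random.Random(0)):
    """VAE reparameterization trick: sample z = mu + sigma * eps with eps ~ N(0, 1),
    so gradients can flow through mu and log_var during training."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

def kl_divergence(mu, log_var):
    """KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior -- the
    regularizer that keeps the VAE's latent space of interface designs smooth."""
    return -0.5 * sum(1.0 + lv - m * m - math.exp(lv)
                      for m, lv in zip(mu, log_var))
```

A smooth latent space is what makes VAEs attractive for interface generation: nearby latent codes decode to similar layouts, so candidates can be explored by interpolation.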
arXiv Detail & Related papers (2024-12-19T04:37:47Z) - Survey of User Interface Design and Interaction Techniques in Generative AI Applications [79.55963742878684]
We aim to create a compendium of different user-interaction patterns that can be used as a reference for designers and developers alike.
We also strive to lower the entry barrier for those attempting to learn more about the design of generative AI applications.
arXiv Detail & Related papers (2024-10-28T23:10:06Z) - Constraining Participation: Affordances of Feedback Features in Interfaces to Large Language Models [49.74265453289855]
Large language models (LLMs) are now accessible to anyone with a computer, a web browser, and an internet connection via browser-based interfaces.
This paper examines the affordances of interactive feedback features in ChatGPT's interface, analysing how they shape user input and participation in iteration.
arXiv Detail & Related papers (2024-08-27T13:50:37Z) - Improving Ontology Requirements Engineering with OntoChat and Participatory Prompting [3.3241053483599563]
ORE has primarily relied on manual methods, such as interviews and collaborative forums, to gather user requirements from domain experts.
The current OntoChat offers a framework for ORE that utilises large language models (LLMs) to streamline the process.
This study produces pre-defined prompt templates based on user queries, focusing on creating and refining personas, goals, scenarios, sample data, and data resources for user stories.
arXiv Detail & Related papers (2024-08-09T19:21:14Z) - Interlinking User Stories and GUI Prototyping: A Semi-Automatic LLM-based Approach [55.762798168494726]
We present a novel Large Language Model (LLM)-based approach for validating the implementation of functional NL-based requirements in a graphical user interface (GUI) prototype.
Our approach aims to detect functional user stories that are not implemented in a GUI prototype and provides recommendations for suitable GUI components directly implementing the requirements.
arXiv Detail & Related papers (2024-06-12T11:59:26Z) - Reinforcement Learning-Based Framework for the Intelligent Adaptation of User Interfaces [0.0]
Adapting the user interface (UI) of software systems to meet the needs and preferences of users is a complex task.
Recent advances in Machine Learning (ML) techniques may provide effective means to support the adaptation process.
In this paper, we instantiate a reference framework for Intelligent User Interface Adaptation by using Reinforcement Learning (RL) as the ML component.
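The abstract above does not detail the RL instantiation, but the core loop can be sketched with tabular Q-learning over a toy adaptation problem. Everything here is a hypothetical example of ours: the states ("senior", "expert"), actions ("large_font", "dense_layout") and reward function stand in for contexts of use, candidate UI adaptations, and user-satisfaction feedback respectively.

```python
import random

def q_learning(reward, states, actions, episodes=500,
               alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Epsilon-greedy tabular Q-learning over single-step episodes:
    pick an adaptation for a context, observe the reward, update Q."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s = rng.choice(states)  # a context of use arrives
        if rng.random() < eps:
            a = rng.choice(actions)              # explore
        else:
            a = max(actions, key=lambda x: q[(s, x)])  # exploit
        # single-step episode: no successor state, so no gamma-discounted term
        q[(s, a)] += alpha * (reward(s, a) - q[(s, a)])
    return q

# Assumed preferences for the toy example only.
def reward(state, action):
    prefs = {"senior": "large_font", "expert": "dense_layout"}
    return 1.0 if prefs[state] == action else 0.0

states, actions = ["senior", "expert"], ["large_font", "dense_layout"]
q = q_learning(reward, states, actions)
best = {s: max(actions, key=lambda x: q[(s, x)]) for s in states}
```

After training, `best` maps each context to the adaptation that accumulated the most reward, which is the essence of RL-driven UI adaptation; a real system would replace the toy reward with measured or predicted user feedback.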
arXiv Detail & Related papers (2024-05-15T11:14:33Z) - Enhanced User Interaction in Operating Systems through Machine Learning Language Models [17.09116903102371]
This paper explores the potential applications of large language models, machine learning and interaction design for user interaction in recommendation systems and operating systems.
The combination of interaction design and machine learning can provide a more efficient and personalized user experience for products and services.
arXiv Detail & Related papers (2024-02-24T12:17:06Z) - Interpreting User Requests in the Context of Natural Language Standing Instructions [89.12540932734476]
We develop NLSI, a language-to-program dataset consisting of over 2.4K dialogues spanning 17 domains.
A key challenge in NLSI is to identify which subset of the standing instructions is applicable to a given dialogue.
arXiv Detail & Related papers (2023-11-16T11:19:26Z) - A Comparative Study on Reward Models for UI Adaptation with Reinforcement Learning [0.6899744489931015]
Reinforcement learning can be used to personalise interfaces for each context of use.
Determining the reward of each adaptation alternative is a challenge in RL for UI adaptation.
Recent research has explored the use of reward models to address this challenge, but there is currently no empirical evidence on this type of model.
arXiv Detail & Related papers (2023-08-26T18:31:16Z) - Interactive Text Generation [75.23894005664533]
We introduce a new Interactive Text Generation task that allows training generation models interactively without the costs of involving real users.
We train our interactive models using Imitation Learning, and our experiments against competitive non-interactive generation models show that models trained interactively are superior to their non-interactive counterparts.
arXiv Detail & Related papers (2023-03-02T01:57:17Z) - X2T: Training an X-to-Text Typing Interface with Online Learning from User Feedback [83.95599156217945]
We focus on assistive typing applications in which a user cannot operate a keyboard, but can supply other inputs.
Standard methods train a model on a fixed dataset of user inputs, then deploy a static interface that does not learn from its mistakes.
We investigate a simple idea that would enable such interfaces to improve over time, with minimal additional effort from the user.
arXiv Detail & Related papers (2022-03-04T00:07:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.