Integrating Human Feedback into a Reinforcement Learning-Based Framework for Adaptive User Interfaces
- URL: http://arxiv.org/abs/2504.20782v1
- Date: Tue, 29 Apr 2025 14:00:22 GMT
- Title: Integrating Human Feedback into a Reinforcement Learning-Based Framework for Adaptive User Interfaces
- Authors: Daniel Gaspar-Figueiredo, Marta Fernández-Diego, Silvia Abrahão, Emilio Insfran
- Abstract summary: Reinforcement Learning (RL) has emerged as a promising approach for addressing complex, sequential adaptation challenges. We enhance an RL-based Adaptive User Interface adaptation framework by incorporating personalized human feedback directly into the learning process. Our approach trains a unique RL agent for each user, allowing individuals to actively shape their personal RL agent's policy.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adaptive User Interfaces (AUIs) play a crucial role in modern software applications by dynamically adjusting interface elements to accommodate users' diverse and evolving needs. However, existing adaptation strategies often lack real-time responsiveness. Reinforcement Learning (RL) has emerged as a promising approach for addressing complex, sequential adaptation challenges, enabling adaptive systems to learn optimal policies based on previous adaptation experiences. Although RL has been applied to AUIs, integrating RL agents effectively within user interactions remains a challenge. In this paper, we enhance an RL-based Adaptive User Interface adaptation framework by incorporating personalized human feedback directly into the learning process. Unlike prior approaches that rely on a single pre-trained RL model, our approach trains a unique RL agent for each user, allowing individuals to actively shape their personal RL agent's policy, potentially leading to more personalized and responsive UI adaptations. To evaluate this approach, we conducted an empirical study to assess the impact of integrating human feedback into the RL-based Adaptive User Interface adaptation framework and its effect on User Experience (UX). The study involved 33 participants interacting with AUIs incorporating human feedback and with non-adaptive user interfaces in two domains: an e-learning platform and a trip-planning application. The results suggest that incorporating human feedback into RL-driven adaptations significantly enhances UX, offering promising directions for advancing adaptive capabilities and user-centered design in AUIs.
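The core idea described in the abstract, a separate RL agent per user whose reward signal blends environment feedback with explicit human feedback on each adaptation, can be sketched with a minimal tabular Q-learning agent. This is an illustrative sketch only: the class name, the `feedback_weight` blending scheme, and the state/action encoding are assumptions for exposition, not the paper's actual implementation.

```python
import random


class PerUserUIAgent:
    """Minimal tabular Q-learning sketch of a per-user adaptive-UI agent.

    States stand for UI contexts, actions for adaptation alternatives.
    Explicit human feedback on an adaptation (e.g. thumbs up/down mapped
    to +1/-1, 0 when absent) is folded into the reward signal, so each
    user's agent learns a policy shaped by that user's own judgments.
    """

    def __init__(self, actions, alpha=0.1, gamma=0.9,
                 epsilon=0.1, feedback_weight=0.5):
        self.q = {}                 # (state, action) -> estimated value
        self.actions = list(actions)
        self.alpha = alpha          # learning rate
        self.gamma = gamma          # discount factor
        self.epsilon = epsilon      # exploration rate
        self.feedback_weight = feedback_weight  # assumed blending weight

    def choose(self, state):
        # Epsilon-greedy selection over the adaptation alternatives.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, env_reward, human_feedback, next_state):
        # Blend the environment reward (e.g. a task-performance signal)
        # with the user's explicit feedback in {-1, 0, +1}.
        reward = env_reward + self.feedback_weight * human_feedback
        best_next = max(self.q.get((next_state, a), 0.0)
                        for a in self.actions)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)
```

In a deployment along these lines, one `PerUserUIAgent` instance would be created and persisted per user, rather than sharing a single pre-trained model across the whole population.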
Related papers
- Search-Based Interaction For Conversation Recommendation via Generative Reward Model Based Simulated User [117.82681846559909]
Conversational recommendation systems (CRSs) use multi-turn interaction to capture user preferences and provide personalized recommendations. We propose a generative reward model based simulated user, named GRSU, for automatic interaction with CRSs.
arXiv Detail & Related papers (2025-04-29T06:37:30Z) - Large Language Model driven Policy Exploration for Recommender Systems [50.70228564385797]
Offline RL policies trained on static user data are vulnerable to distribution shift when deployed in dynamic online environments. Online RL-based recommender systems also face challenges in production deployment due to the risks of exposing users to untrained or unstable policies. Large Language Models (LLMs) offer a promising solution to mimic user objectives and preferences for pre-training policies offline. We propose an Interaction-Augmented Learned Policy (iALP) that utilizes user preferences distilled from an LLM.
arXiv Detail & Related papers (2025-01-23T16:37:44Z) - Dynamic User Interface Generation for Enhanced Human-Computer Interaction Using Variational Autoencoders [4.1676654279172265]
This study presents a novel approach for intelligent user interaction interface generation and optimization, grounded in the variational autoencoder (VAE) model.
The VAE-based approach significantly enhances the quality and precision of interface generation compared to other methods, including autoencoders (AE), generative adversarial networks (GAN), conditional GANs (cGAN), deep belief networks (DBN), and VAE-GAN.
arXiv Detail & Related papers (2024-12-19T04:37:47Z) - Constraining Participation: Affordances of Feedback Features in Interfaces to Large Language Models [49.74265453289855]
Large language models (LLMs) are now accessible to anyone with a computer, a web browser, and an internet connection via browser-based interfaces.
This paper examines the affordances of interactive feedback features in ChatGPT's interface, analysing how they shape user input and participation in iteration.
arXiv Detail & Related papers (2024-08-27T13:50:37Z) - Reinforcement Learning-Based Framework for the Intelligent Adaptation of User Interfaces [0.0]
Adapting the user interface (UI) of software systems to meet the needs and preferences of users is a complex task.
Recent advances in Machine Learning (ML) techniques may provide effective means to support the adaptation process.
In this paper, we instantiate a reference framework for Intelligent User Interface Adaptation by using Reinforcement Learning (RL) as the ML component.
arXiv Detail & Related papers (2024-05-15T11:14:33Z) - Relative Preference Optimization: Enhancing LLM Alignment through Contrasting Responses across Identical and Diverse Prompts [95.09994361995389]
Relative Preference Optimization (RPO) is designed to discern between more and less preferred responses derived from both identical and related prompts.
RPO has demonstrated a superior ability to align large language models with user preferences and to improve their adaptability during the training process.
arXiv Detail & Related papers (2024-02-12T22:47:57Z) - Learning from Interaction: User Interface Adaptation using Reinforcement Learning [0.0]
This thesis proposes an RL-based UI adaptation framework that uses physiological data.
The framework aims to learn from user interactions and make informed adaptations to improve user experience (UX).
arXiv Detail & Related papers (2023-12-12T12:29:18Z) - AgentCF: Collaborative Learning with Autonomous Language Agents for Recommender Systems [112.76941157194544]
We propose AgentCF for simulating user-item interactions in recommender systems through agent-based collaborative filtering.
We creatively consider not only users but also items as agents, and develop a collaborative learning approach that optimizes both kinds of agents together.
Overall, the optimized agents exhibit diverse interaction behaviors within our framework, including user-item, user-user, item-item, and collective interactions.
arXiv Detail & Related papers (2023-10-13T16:37:14Z) - A Comparative Study on Reward Models for UI Adaptation with Reinforcement Learning [0.6899744489931015]
Reinforcement learning can be used to personalise interfaces for each context of use.
Determining the reward of each adaptation alternative is a challenge in RL for UI adaptation.
Recent research has explored the use of reward models to address this challenge, but there is currently no empirical evidence on this type of model.
arXiv Detail & Related papers (2023-08-26T18:31:16Z) - Adapting User Interfaces with Model-based Reinforcement Learning [47.469980921522115]
Adapting an interface requires taking into account both the positive and negative effects that changes may have on the user.
We propose a novel approach for adaptive user interfaces that yields a conservative adaptation policy.
arXiv Detail & Related papers (2021-03-11T17:24:34Z) - Optimizing Interactive Systems via Data-Driven Objectives [70.3578528542663]
We propose an approach that infers the objective directly from observed user interactions.
These inferences can be made regardless of prior knowledge and across different types of user behavior.
We introduce the Interactive System Optimizer (ISO), a novel algorithm that uses these inferred objectives for optimization.
arXiv Detail & Related papers (2020-06-19T20:49:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.