Learning from Interaction: User Interface Adaptation using Reinforcement
Learning
- URL: http://arxiv.org/abs/2312.07216v1
- Date: Tue, 12 Dec 2023 12:29:18 GMT
- Title: Learning from Interaction: User Interface Adaptation using Reinforcement
Learning
- Authors: Daniel Gaspar-Figueiredo
- Abstract summary: This thesis proposes an RL-based UI adaptation framework that uses physiological data.
The framework aims to learn from user interactions and make informed adaptations to improve user experience (UX).
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The continuous adaptation of software systems to meet the evolving needs of
users is essential for enhancing user experience (UX). User interface (UI)
adaptation, which involves adjusting the layout, navigation, and content
presentation based on user preferences and contextual conditions, plays a key
role in achieving this goal. However, suggesting the right adaptation at the
right time and in the right place remains a challenge if it is to be valuable
for the end user. To tackle this challenge, machine learning approaches can be
used. In particular, we use Reinforcement Learning (RL) because of its ability
to learn from interaction with users. In this approach, feedback is essential,
and physiological data can provide objective insights into how users react to
the different adaptations. Thus, in this PhD thesis, we propose an RL-based UI
adaptation framework that uses physiological data. The framework aims to learn
from user interactions and make informed adaptations to improve UX. To this
end, our research aims to answer the following questions: Does the use of an
RL-based approach improve UX? How effective is RL in guiding UI adaptation?
Can physiological data support UI adaptation to enhance UX? The evaluation
plan involves conducting user studies to answer these questions. This
empirical evaluation will provide a strong foundation for building,
evaluating, and improving the proposed adaptation framework. The expected
contributions of this research include a novel framework for intelligent
Adaptive UIs, insights into the effectiveness of RL algorithms in guiding UI
adaptation, the integration of physiological data as an objective measure of
UX, and empirical validation of the proposed framework's impact on UX.
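The core idea above, learning which adaptation to apply from interaction feedback, can be illustrated with a minimal sketch. The adaptation names and the physiological reward function below are hypothetical stand-ins (the thesis does not specify them), and a simple epsilon-greedy bandit stands in for a full RL formulation:

```python
import random
from collections import defaultdict

# Hypothetical UI adaptation actions (not taken from the thesis).
ACTIONS = ["compact_layout", "large_fonts", "simplified_nav"]

def physiological_reward(action, preferred="simplified_nav"):
    """Stand-in for a reward derived from physiological data (e.g. a
    normalised arousal signal): higher when the adaptation suits the
    simulated user, plus some sensor noise."""
    base = 1.0 if action == preferred else 0.2
    return base + random.gauss(0.0, 0.05)

def epsilon_greedy_bandit(episodes=2000, epsilon=0.1, alpha=0.1):
    """Minimal RL loop: estimate each adaptation's value from feedback."""
    q = defaultdict(float)
    for _ in range(episodes):
        if random.random() < epsilon:
            action = random.choice(ACTIONS)            # explore
        else:
            action = max(ACTIONS, key=q.__getitem__)   # exploit
        reward = physiological_reward(action)
        q[action] += alpha * (reward - q[action])      # incremental update
    return dict(q)

random.seed(0)
values = epsilon_greedy_bandit()
best = max(values, key=values.get)  # converges to the preferred adaptation
```

In the proposed framework the reward would come from measured physiological signals rather than a simulator, but the learning loop has this shape: act, observe the user's reaction, update the value estimates.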
Related papers
- Reinforcement Learning-Based Framework for the Intelligent Adaptation of User Interfaces [0.0]
Adapting the user interface (UI) of software systems to meet the needs and preferences of users is a complex task.
Recent advances in Machine Learning (ML) techniques may provide effective means to support the adaptation process.
In this paper, we instantiate a reference framework for Intelligent User Interface Adaptation by using Reinforcement Learning (RL) as the ML component.
arXiv Detail & Related papers (2024-05-15T11:14:33Z)
- Generating User Experience Based on Personas with AI Assistants [0.0]
My research introduces a novel approach of combining Large Language Models and personas.
The research is structured around three areas: (1) a critical review of existing adaptive UX practices and the potential for their automation; (2) an investigation into the role and effectiveness of personas in enhancing UX adaptability; and (3) the proposal of a theoretical framework that leverages LLM capabilities to create more dynamic and responsive UX designs and guidelines.
arXiv Detail & Related papers (2024-05-02T07:03:16Z)
- Improving the Validity of Automatically Generated Feedback via Reinforcement Learning [50.067342343957876]
We propose a framework for feedback generation that optimizes both correctness and alignment using reinforcement learning (RL).
Specifically, we use GPT-4's annotations to create preferences over feedback pairs in an augmented dataset for training via direct preference optimization (DPO).
arXiv Detail & Related papers (2024-03-02T20:25:50Z)
- Introducing User Feedback-based Counterfactual Explanations (UFCE) [49.1574468325115]
Counterfactual explanations (CEs) have emerged as a viable solution for generating comprehensible explanations in XAI.
UFCE allows for the inclusion of user constraints to determine the smallest modifications in the subset of actionable features.
UFCE outperforms two well-known CE methods in terms of proximity, sparsity, and feasibility.
arXiv Detail & Related papers (2024-02-26T20:09:44Z)
- Persona-DB: Efficient Large Language Model Personalization for Response Prediction with Collaborative Data Refinement [79.2400720115588]
We introduce Persona-DB, a simple yet effective framework consisting of a hierarchical construction process to improve generalization across task contexts.
In the evaluation of response prediction, Persona-DB demonstrates superior context efficiency in maintaining accuracy with a significantly reduced retrieval size.
Our experiments also indicate a marked improvement of over 10% under cold-start scenarios, when users have extremely sparse data.
arXiv Detail & Related papers (2024-02-16T20:20:43Z)
- A Comparative Study on Reward Models for UI Adaptation with Reinforcement Learning [0.6899744489931015]
Reinforcement learning can be used to personalise interfaces for each context of use.
Determining the reward of each adaptation alternative is a challenge in RL for UI adaptation.
Recent research has explored the use of reward models to address this challenge, but there is currently no empirical evidence on this type of model.
arXiv Detail & Related papers (2023-08-26T18:31:16Z)
- Computational Adaptation of XR Interfaces Through Interaction Simulation [4.6193503399184275]
We discuss a computational approach to adapt XR interfaces with the goal of improving user experience and performance.
Our novel model, applied to menu selection tasks, simulates user interactions by considering both cognitive and motor costs.
arXiv Detail & Related papers (2022-04-19T23:37:07Z)
- Adapting User Interfaces with Model-based Reinforcement Learning [47.469980921522115]
Adapting an interface requires taking into account both the positive and negative effects that changes may have on the user.
We propose a novel approach for adaptive user interfaces that yields a conservative adaptation policy.
arXiv Detail & Related papers (2021-03-11T17:24:34Z)
- Optimizing Interactive Systems via Data-Driven Objectives [70.3578528542663]
We propose an approach that infers the objective directly from observed user interactions.
These inferences can be made regardless of prior knowledge and across different types of user behavior.
We introduce the Interactive System Optimizer (ISO), a novel algorithm that uses these inferred objectives for optimization.
arXiv Detail & Related papers (2020-06-19T20:49:14Z)
- Empowering Active Learning to Jointly Optimize System and User Demands [70.66168547821019]
We propose a new active learning approach that jointly optimizes the active learning system (training efficiently) and the user (receiving useful instances).
We study our approach in an educational application, which particularly benefits from this technique as the system needs to rapidly learn to predict the appropriateness of an exercise to a particular user.
We evaluate multiple learning strategies and user types with data from real users and find that our joint approach better satisfies both objectives when alternative methods lead to many unsuitable exercises for end users.
arXiv Detail & Related papers (2020-05-09T16:02:52Z)
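Several of the entries above, notably the comparative study on reward models, turn on how the RL reward for an adaptation is obtained when users cannot be queried at every step. A purely illustrative sketch of the simplest possible reward model, a per-adaptation average of logged UX ratings, with hypothetical adaptation names and made-up ratings:

```python
from collections import defaultdict

def train_reward_model(log):
    """Average logged UX ratings per adaptation: the simplest reward model.
    An RL agent can then score candidate adaptations offline instead of
    asking the user (or reading a sensor) at every step."""
    sums, counts = defaultdict(float), defaultdict(int)
    for adaptation, rating in log:
        sums[adaptation] += rating
        counts[adaptation] += 1
    return {a: sums[a] / counts[a] for a in sums}

# Logged (adaptation, UX rating) pairs -- illustrative numbers only.
log = [("compact_layout", 3.0), ("compact_layout", 4.0),
       ("large_fonts", 2.0),
       ("simplified_nav", 5.0), ("simplified_nav", 4.0)]

reward_model = train_reward_model(log)
best = max(reward_model, key=reward_model.get)  # "simplified_nav" (mean 4.5)
```

The reward models studied in the cited work are learned predictors rather than lookup tables, but the interface is the same: adaptation in, scalar reward out.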
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.