SHARPIE: A Modular Framework for Reinforcement Learning and Human-AI Interaction Experiments
- URL: http://arxiv.org/abs/2501.19245v2
- Date: Mon, 03 Feb 2025 08:41:43 GMT
- Title: SHARPIE: A Modular Framework for Reinforcement Learning and Human-AI Interaction Experiments
- Authors: Hüseyin Aydın, Kevin Godin-Dubois, Libio Goncalvez Braz, Floris den Hengst, Kim Baraka, Mustafa Mert Çelikok, Andreas Sauter, Shihan Wang, Frans A. Oliehoek
- Abstract summary: Reinforcement learning (RL) offers a general approach for modeling and training AI agents, including human-AI interaction scenarios.
We propose SHARPIE to address the need for a generic framework to support experiments with RL agents and humans.
Its modular design consists of a versatile wrapper for RL environments and algorithm libraries, a participant-facing web interface, logging utilities, and deployment on popular cloud and participant-recruitment platforms.
- Score: 12.116766194212524
- Abstract: Reinforcement learning (RL) offers a general approach for modeling and training AI agents, including human-AI interaction scenarios. In this paper, we propose SHARPIE (Shared Human-AI Reinforcement Learning Platform for Interactive Experiments) to address the need for a generic framework to support experiments with RL agents and humans. Its modular design consists of a versatile wrapper for RL environments and algorithm libraries, a participant-facing web interface, logging utilities, and deployment on popular cloud and participant recruitment platforms. It empowers researchers to study a wide variety of research questions related to the interaction between humans and RL agents, including those related to interactive reward specification and learning, learning from human feedback, action delegation, preference elicitation, user-modeling, and human-AI teaming. The platform is based on a generic interface for human-RL interactions that aims to standardize the field of study on RL in human contexts.
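To make the environment-wrapper idea in the abstract concrete, here is a minimal sketch of how an RL environment could be wrapped so that a human participant's feedback (e.g., relayed from a web client) overrides the environment reward and every step is logged. It assumes the Gymnasium API; the names `HumanFeedbackWrapper`, `feedback_fn`, and `log_fn` are hypothetical illustrations, not SHARPIE's actual interface.

```python
# Hypothetical sketch of a human-in-the-loop environment wrapper.
# Class and callback names are illustrative, not SHARPIE's API.
import gymnasium as gym


class HumanFeedbackWrapper(gym.Wrapper):
    """Wraps an RL environment so a human-supplied signal can replace the
    environment reward, and every step is logged for later analysis."""

    def __init__(self, env, feedback_fn, log_fn=print):
        super().__init__(env)
        self.feedback_fn = feedback_fn  # e.g., polls a web client for feedback
        self.log_fn = log_fn            # e.g., writes JSON lines to a log store

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        human_reward = self.feedback_fn(obs, action)  # None if no input yet
        if human_reward is not None:
            reward = human_reward
        self.log_fn({"action": int(action), "reward": float(reward)})
        return obs, reward, terminated, truncated, info


if __name__ == "__main__":
    # Stand-in for feedback arriving from a participant-facing web interface.
    env = HumanFeedbackWrapper(gym.make("CartPole-v1"),
                               feedback_fn=lambda obs, a: None)
    obs, _ = env.reset(seed=0)
    for _ in range(10):
        obs, r, term, trunc, _ = env.step(env.action_space.sample())
        if term or trunc:
            obs, _ = env.reset()
```

In a real deployment, `feedback_fn` would poll the participant-facing web interface and `log_fn` would feed the logging utilities the abstract describes; here both are trivial stand-ins.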
Related papers
- Survey of User Interface Design and Interaction Techniques in Generative AI Applications [79.55963742878684]
We aim to create a compendium of different user-interaction patterns that can be used as a reference for designers and developers alike.
We also strive to lower the entry barrier for those attempting to learn more about the design of generative AI applications.
arXiv Detail & Related papers (2024-10-28T23:10:06Z)
- PersLLM: A Personified Training Approach for Large Language Models [66.16513246245401]
We propose PersLLM, integrating psychology-grounded principles of personality: social practice, consistency, and dynamic development.
We incorporate personality traits directly into the model parameters, enhancing the model's resistance to induction, promoting consistency, and supporting the dynamic evolution of personality.
arXiv Detail & Related papers (2024-07-17T08:13:22Z)
- Incorporating Human Flexibility through Reward Preferences in Human-AI Teaming [14.250120245287109]
We develop a Human-AI PbRL Cooperation Game, in which the RL agent queries the human-in-the-loop to elicit the task objective and the human's preferences over the joint team behavior; a minimal sketch of such a preference query loop appears after this list.
Under this game formulation, we first introduce the notion of Human Flexibility to evaluate team performance based on whether humans prefer to follow a fixed policy or to adapt to the RL agent on the fly.
We highlight a special case along these two dimensions, which we call Specified Orchestration, where the human is least flexible and the agent has complete access to the human's policy.
arXiv Detail & Related papers (2023-12-21T20:48:15Z)
- Human-AI Coevolution [48.74579595505374]
Coevolution AI is a process in which humans and AI algorithms continuously influence each other.
This paper introduces Coevolution AI as the cornerstone for a new field of study at the intersection between AI and complexity science.
arXiv Detail & Related papers (2023-06-23T18:10:54Z)
- Adaptive action supervision in reinforcement learning from real-world multi-agent demonstrations [10.174009792409928]
We propose a method for adaptive action supervision in RL from real-world demonstrations in multi-agent scenarios.
In experiments using chase-and-escape and football tasks with different dynamics between the unknown source and target environments, we show that our approach achieves a balance between reproducing the demonstrated behaviors and generalization ability, compared with the baselines.
arXiv Detail & Related papers (2023-05-22T13:33:37Z)
- DIAMBRA Arena: a New Reinforcement Learning Platform for Research and Experimentation [91.3755431537592]
This work presents DIAMBRA Arena, a new platform for reinforcement learning research and experimentation.
It features a collection of high-quality environments exposing a Python API fully compliant with the OpenAI Gym standard.
They are episodic tasks with discrete actions and observations composed of raw pixels plus additional numerical values.
arXiv Detail & Related papers (2022-10-19T14:39:10Z)
- CHAI: A CHatbot AI for Task-Oriented Dialogue with Offline Reinforcement Learning [85.3987745097806]
Offline reinforcement learning can be used to train dialogue agents entirely using static datasets collected from human speakers.
Experiments show that recently developed offline RL methods can be combined with language models to yield realistic dialogue agents.
arXiv Detail & Related papers (2022-04-18T17:43:21Z)
- DIME: Fine-grained Interpretations of Multimodal Models via Disentangled Local Explanations [119.1953397679783]
We focus on advancing the state-of-the-art in interpreting multimodal models.
Our proposed approach, DIME, enables accurate and fine-grained analysis of multimodal models.
arXiv Detail & Related papers (2022-03-03T20:52:47Z)
- Using Cognitive Models to Train Warm Start Reinforcement Learning Agents for Human-Computer Interactions [6.623676799228969]
We propose a novel approach of using cognitive models to pre-train RL agents before they are applied to real users.
We present our general methodological approach, followed by two case studies from our previous and ongoing projects.
arXiv Detail & Related papers (2021-03-10T16:20:02Z)
- The AI Arena: A Framework for Distributed Multi-Agent Reinforcement Learning [0.3437656066916039]
We introduce the AI Arena: a scalable framework with flexible abstractions for distributed multi-agent reinforcement learning.
We show performance gains due to a distributed multi-agent learning approach over commonly-used RL techniques in several different learning environments.
arXiv Detail & Related papers (2021-03-09T22:16:19Z)
- Improving Reinforcement Learning with Human Assistance: An Argument for Human Subject Studies with HIPPO Gym [21.4215863934377]
Reinforcement learning (RL) is a popular machine learning paradigm for game playing, robotics control, and other sequential decision tasks.
This article introduces our new open-source RL framework, the Human Input Parsing Platform for OpenAI Gym (HIPPO Gym).
arXiv Detail & Related papers (2021-02-02T12:56:02Z)
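Several entries above (the Human-AI PbRL Cooperation Game, HIPPO Gym) and SHARPIE itself center on eliciting human preferences over agent behavior. The sketch below shows a generic pairwise preference-query loop under stated assumptions: trajectories are toy (state, action) lists and `ask_human` is a stand-in for a participant answering through a web interface; the function names are illustrative and do not come from these papers.

```python
# Generic pairwise preference-elicitation loop (PbRL-style sketch).
# All names (rollout, ask_human, elicit_preferences) are illustrative.
import random


def rollout(policy, horizon=20):
    """Roll out a policy in a toy environment; returns (state, action) pairs."""
    state, traj = 0, []
    for _ in range(horizon):
        action = policy(state)
        traj.append((state, action))
        state += action  # toy dynamics
    return traj


def ask_human(traj_a, traj_b):
    """Stand-in for a web-interface query: a real study would display both
    trajectories to a participant and record which one they prefer."""
    return traj_a if sum(a for _, a in traj_a) >= sum(a for _, a in traj_b) else traj_b


def elicit_preferences(policies, num_queries=5):
    """Collect pairwise preference labels for later reward-model learning."""
    dataset = []
    for _ in range(num_queries):
        pol_a, pol_b = random.sample(policies, 2)
        traj_a, traj_b = rollout(pol_a), rollout(pol_b)
        preferred = ask_human(traj_a, traj_b)
        dataset.append((traj_a, traj_b, preferred is traj_a))
    return dataset


if __name__ == "__main__":
    policies = [lambda s: 0, lambda s: 1]  # two trivial policies to compare
    labels = elicit_preferences(policies)
    print(f"collected {len(labels)} preference labels")
```

The collected labels would typically feed a learned reward model or direct policy selection; that downstream step is omitted here.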