The Chef's Hat Simulation Environment for Reinforcement-Learning-Based Agents
- URL: http://arxiv.org/abs/2003.05861v1
- Date: Thu, 12 Mar 2020 15:52:49 GMT
- Title: The Chef's Hat Simulation Environment for Reinforcement-Learning-Based Agents
- Authors: Pablo Barros, Anne C. Bloem, Inge M. Hootsmans, Lena M. Opheij, Romain H.A. Toebosch, Emilia Barakova and Alessandra Sciutti
- Abstract summary: We propose a virtual simulation environment that implements the Chef's Hat card game, designed to be used in Human-Robot Interaction scenarios.
The environment provides a controllable and reproducible scenario for reinforcement-learning algorithms.
- Score: 54.63186041942257
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Achieving social interactions within Human-Robot Interaction (HRI)
environments is a very challenging task. Most of the current research focuses
on Wizard-of-Oz approaches, which neglect the recent development of intelligent
robots. Real-world scenarios, on the other hand, usually do not provide the
control and reproducibility that learning algorithms need.
In this paper, we propose a virtual simulation environment that implements the
Chef's Hat card game, designed to be used in HRI scenarios, to provide a
controllable and reproducible scenario for reinforcement-learning algorithms.
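
The environment is meant to be driven like a standard reinforcement-learning
simulation: an agent repeatedly observes the game state, plays a card action,
and receives a reward until the match ends. As a rough, non-authoritative
sketch of such an interaction loop, the snippet below uses a generic Gym-style
interface; the environment id "ChefsHat-v0" and the classic four-value step
API are assumptions for illustration, not the authors' published package.

    import gym

    # Hypothetical environment id; the actual Chef's Hat package may register
    # the game under a different name and expose a different observation layout.
    env = gym.make("ChefsHat-v0")

    obs = env.reset()
    done = False
    total_reward = 0.0

    while not done:
        # Play a random move; a learning agent would query its policy here.
        action = env.action_space.sample()
        # Classic Gym step API: (observation, reward, done, info).
        obs, reward, done, info = env.step(action)
        total_reward += reward

    print(f"Random-play episode return: {total_reward:.2f}")

A learning agent would replace the random sampling with its policy and use the
per-step rewards to update that policy across repeated, reproducible matches.
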
Related papers
- Evaluating Real-World Robot Manipulation Policies in Simulation [91.55267186958892]
Control and visual disparities between real and simulated environments are key challenges for reliable simulated evaluation.
We propose approaches for mitigating these gaps without needing to craft full-fidelity digital twins of real-world environments.
We create SIMPLER, a collection of simulated environments for manipulation policy evaluation on common real robot setups.
arXiv Detail & Related papers (2024-05-09T17:30:16Z)
- RoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation [68.70755196744533]
RoboGen is a generative robotic agent that automatically learns diverse robotic skills at scale via generative simulation.
Our work attempts to extract the extensive and versatile knowledge embedded in large-scale models and transfer it to the field of robotics.
arXiv Detail & Related papers (2023-11-02T17:59:21Z)
- Hybrid ASR for Resource-Constrained Robots: HMM - Deep Learning Fusion [0.0]
This paper presents a novel hybrid Automatic Speech Recognition (ASR) system designed specifically for resource-constrained robots.
The proposed approach combines Hidden Markov Models (HMMs) with deep learning models and leverages socket programming to distribute processing tasks effectively.
In this architecture, the HMM-based processing takes place within the robot, while a separate PC handles the deep learning model.
arXiv Detail & Related papers (2023-09-11T15:28:19Z)
- Learning Human-to-Robot Handovers from Point Clouds [63.18127198174958]
We propose the first framework to learn control policies for vision-based human-to-robot handovers.
We show significant performance gains over baselines on a simulation benchmark, sim-to-sim transfer and sim-to-real transfer.
arXiv Detail & Related papers (2023-03-30T17:58:36Z)
- Active Predicting Coding: Brain-Inspired Reinforcement Learning for Sparse Reward Robotic Control Problems [79.07468367923619]
We propose a backpropagation-free approach to robotic control through the neuro-cognitive computational framework of neural generative coding (NGC).
We design an agent built completely from powerful predictive coding/processing circuits that facilitate dynamic, online learning from sparse rewards.
We show that our proposed ActPC agent performs well in the face of sparse (extrinsic) reward signals and is competitive with or outperforms several powerful backprop-based RL approaches.
arXiv Detail & Related papers (2022-09-19T16:49:32Z)
- Low Dimensional State Representation Learning with Robotics Priors in Continuous Action Spaces [8.692025477306212]
Reinforcement learning algorithms have proven to be capable of solving complicated robotics tasks in an end-to-end fashion.
We propose a framework that combines learning a low-dimensional state representation from the robot's high-dimensional raw sensory readings with learning the optimal policy.
arXiv Detail & Related papers (2021-07-04T15:42:01Z)
- Zero-Shot Reinforcement Learning on Graphs for Autonomous Exploration Under Uncertainty [6.42522897323111]
We present a framework for self-learning a high-performance exploration policy in a single simulation environment.
We propose a novel approach that uses graph neural networks in conjunction with deep reinforcement learning.
arXiv Detail & Related papers (2021-05-11T02:42:17Z)
- Modular Procedural Generation for Voxel Maps [2.6811189633660613]
In this paper, we present mcg, an open-source library to facilitate implementing PCG algorithms for voxel-based environments such as Minecraft.
The library is designed with human-machine teaming research in mind, and thus takes a 'top-down' approach to generation.
The benefits of this approach include rapid, scalable, and efficient development of virtual environments, the ability to control the statistics of the environment at a semantic level, and the ability to generate novel environments in response to player actions in real time.
arXiv Detail & Related papers (2021-04-18T16:21:35Z)
- SAPIEN: A SimulAted Part-based Interactive ENvironment [77.4739790629284]
SAPIEN is a realistic and physics-rich simulated environment that hosts a large-scale set of articulated objects.
We evaluate state-of-the-art vision algorithms for part detection and motion attribute recognition as well as demonstrate robotic interaction tasks.
arXiv Detail & Related papers (2020-03-19T00:11:34Z)