Alexa Arena: A User-Centric Interactive Platform for Embodied AI
- URL: http://arxiv.org/abs/2303.01586v2
- Date: Wed, 7 Jun 2023 08:54:46 GMT
- Title: Alexa Arena: A User-Centric Interactive Platform for Embodied AI
- Authors: Qiaozi Gao, Govind Thattai, Suhaila Shakiah, Xiaofeng Gao, Shreyas
Pansare, Vasu Sharma, Gaurav Sukhatme, Hangjie Shi, Bofei Yang, Desheng
Zheng, Lucy Hu, Karthika Arumugam, Shui Hu, Matthew Wen, Dinakar Guthy,
Cadence Chung, Rohan Khanna, Osman Ipek, Leslie Ball, Kate Bland, Heather
Rocker, Yadunandana Rao, Michael Johnston, Reza Ghanadan, Arindam Mandal,
Dilek Hakkani Tur, Prem Natarajan
- Abstract summary: Alexa Arena is a user-centric simulation platform for Embodied AI (EAI) research.
With user-friendly graphics and control mechanisms, Alexa Arena supports the development of gamified robotic tasks.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We introduce Alexa Arena, a user-centric simulation platform for Embodied AI
(EAI) research. Alexa Arena provides a variety of multi-room layouts and
interactable objects, for the creation of human-robot interaction (HRI)
missions. With user-friendly graphics and control mechanisms, Alexa Arena
supports the development of gamified robotic tasks readily accessible to
general human users, thus opening a new venue for high-efficiency HRI data
collection and EAI system evaluation. Along with the platform, we introduce a
dialog-enabled instruction-following benchmark and provide baseline results for
it. We make Alexa Arena publicly available to facilitate research in building
generalizable and assistive embodied agents.
Related papers
- GRUtopia: Dream General Robots in a City at Scale [65.08318324604116]
This paper introduces project GRUtopia, the first simulated interactive 3D society designed for various robots.
GRScenes includes 100k interactive, finely annotated scenes, which can be freely combined into city-scale environments.
GRResidents is a Large Language Model (LLM) driven Non-Player Character (NPC) system that is responsible for social interaction.
arXiv Detail & Related papers (2024-07-15T17:40:46Z)
- Dialogue with Robots: Proposals for Broadening Participation and Research in the SLIVAR Community [57.56212633174706]
The ability to interact with machines using natural human language is becoming not only commonplace but expected.
In this paper, we chronicle the recent history of this growing field of spoken dialogue with robots.
We offer the community three proposals, the first focused on education, the second on benchmarks, and the third on the modeling of language when it comes to spoken interaction with robots.
arXiv Detail & Related papers (2024-04-01T15:03:27Z) - Interrogating AI: Characterizing Emergent Playful Interactions with ChatGPT [10.907980864371213]
Playful interactions with AI systems have naturally emerged as an important way for users to make sense of the technology, yet they remain understudied.
We target this gap by investigating playful interactions exhibited by users of an emerging AI technology, ChatGPT.
Through a thematic analysis of 372 user-generated posts on the ChatGPT subreddit, we found that more than half of user discourse revolves around playful interactions.
arXiv Detail & Related papers (2024-01-16T14:44:13Z) - Human-Machine Teaming for UAVs: An Experimentation Platform [6.809734620480709]
We present the Cogment human-machine teaming experimentation platform.
It implements human-machine teaming (HMT) use cases that can involve learning AI agents, static AI agents, and humans.
We hope to facilitate further research on human-machine teaming in critical systems and defense environments.
arXiv Detail & Related papers (2023-12-18T21:35:02Z)
- HoloAssist: an Egocentric Human Interaction Dataset for Interactive AI Assistants in the Real World [48.90399899928823]
This work is part of a broader research effort to develop intelligent agents that can interactively guide humans through performing tasks in the physical world.
We introduce HoloAssist, a large-scale egocentric human interaction dataset.
We present key insights into how human assistants correct mistakes, intervene in the task completion procedure, and ground their instructions to the environment.
arXiv Detail & Related papers (2023-09-29T07:17:43Z)
- WALL-E: Embodied Robotic WAiter Load Lifting with Large Language Model [92.90127398282209]
This paper investigates the potential of integrating recent Large Language Models (LLMs) with an existing visual grounding and robotic grasping system.
We introduce WALL-E (Embodied Robotic WAiter load lifting with Large Language model) as an example of this integration.
We deploy this LLM-empowered system on the physical robot to provide a more user-friendly interface for the instruction-guided grasping task.
arXiv Detail & Related papers (2023-08-30T11:35:21Z)
- AI-in-the-Loop -- The impact of HMI in AI-based Application [0.0]
We introduce the AI-in-the-loop concept, which combines the strengths of AI and humans by enabling HMI while an AI model performs inference.
arXiv Detail & Related papers (2023-03-21T00:04:33Z)
- Polycraft World AI Lab (PAL): An Extensible Platform for Evaluating Artificial Intelligence Agents [0.0]
We present the Polycraft World AI Lab (PAL), a task simulator with an API based on the Minecraft mod Polycraft World.
PAL enables flexible task creation and can manipulate any aspect of a task during an evaluation.
In summary, we report a versatile AI evaluation platform with a low barrier to entry for AI researchers.
arXiv Detail & Related papers (2023-01-27T18:08:04Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Learning Affordance Landscapes for Interaction Exploration in 3D Environments [101.90004767771897]
Embodied agents must be able to master how their environment works.
We introduce a reinforcement learning approach to exploration for interaction.
We demonstrate our idea with AI2-iTHOR.
arXiv Detail & Related papers (2020-08-21T00:29:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.