Alexa, Let's Work Together: Introducing the First Alexa Prize TaskBot
Challenge on Conversational Task Assistance
- URL: http://arxiv.org/abs/2209.06321v1
- Date: Tue, 13 Sep 2022 22:01:42 GMT
- Authors: Anna Gottardi, Osman Ipek, Giuseppe Castellucci, Shui Hu, Lavina Vaz,
Yao Lu, Anju Khatri, Anjali Chadha, Desheng Zhang, Sattvik Sahai, Prerna
Dwivedi, Hangjie Shi, Lucy Hu, Andy Huang, Luke Dai, Bofei Yang, Varun
Somani, Pankaj Rajan, Ron Rezac, Michael Johnston, Savanna Stiff, Leslie
Ball, David Carmel, Yang Liu, Dilek Hakkani-Tur, Oleg Rokhlenko, Kate Bland,
Eugene Agichtein, Reza Ghanadan, Yoelle Maarek
- Abstract summary: The Alexa Prize TaskBot challenge builds on the success of the SocialBot challenge by introducing the requirements of interactively assisting humans with real-world tasks.
This paper provides an overview of the TaskBot challenge, describes the infrastructure support provided to the teams with the CoBot Toolkit, and summarizes the approaches the participating teams took to overcome the research challenges.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Since its inception in 2016, the Alexa Prize program has enabled hundreds of
university students to explore and compete to develop conversational agents
through the SocialBot Grand Challenge. The goal of the challenge is to build
agents capable of conversing coherently and engagingly with humans on popular
topics for 20 minutes, while achieving an average rating of at least 4.0/5.0.
However, as conversational agents attempt to assist users with increasingly
complex tasks, new conversational AI techniques and evaluation platforms are
needed. The Alexa Prize TaskBot challenge, established in 2021, builds on the
success of the SocialBot challenge by introducing the requirements of
interactively assisting humans with real-world Cooking and Do-It-Yourself
tasks, while making use of both voice and visual modalities. This challenge
requires the TaskBots to identify and understand the user's need, identify and
integrate task and domain knowledge into the interaction, and develop new ways
of engaging the user without distracting them from the task at hand, among
other challenges. This paper provides an overview of the TaskBot challenge,
describes the infrastructure support provided to the teams with the CoBot
Toolkit, and summarizes the approaches the participating teams took to overcome
the research challenges. Finally, it analyzes the performance of the competing
TaskBots during the first year of the competition.
Related papers
- The VoxCeleb Speaker Recognition Challenge: A Retrospective
The VoxCeleb Speaker Recognition Challenges (VoxSRC) were a series of challenges and workshops that ran annually from 2019 to 2023.
The challenges primarily evaluated the tasks of speaker recognition and diarisation under various settings.
We provide a review of these challenges that covers: what they explored; the methods developed by the challenge participants and how these evolved.
arXiv Detail & Related papers (2024-08-27T08:57:31Z)
- Alexa, play with robot: Introducing the First Alexa Prize SimBot Challenge on Embodied AI
This paper describes the SimBot Challenge, a new challenge in which university teams compete to build robot assistants.
We describe the infrastructure and support provided to the teams including Alexa Arena, the simulated environment, and the ML toolkit.
We provide analysis of the performance of the competing SimBots during the competition.
arXiv Detail & Related papers (2023-08-09T20:56:56Z)
- Roll Up Your Sleeves: Working with a Collaborative and Engaging Task-Oriented Dialogue System
TacoBot is a user-centered task-oriented digital assistant.
We aim to deliver a collaborative and engaging dialogue experience.
To enhance the dialogue experience, we explore a series of data augmentation strategies.
arXiv Detail & Related papers (2023-07-29T21:37:24Z)
- IGLU 2022: Interactive Grounded Language Understanding in a Collaborative Environment at NeurIPS 2022
We propose IGLU: Interactive Grounded Language Understanding in a Collaborative Environment.
The primary goal of the competition is to approach the problem of how to develop interactive embodied agents.
This research challenge is naturally related, but not limited, to two fields of study that are highly relevant to the NeurIPS community.
arXiv Detail & Related papers (2022-05-27T06:12:48Z)
- Miutsu: NTU's TaskBot for the Alexa Prize
This paper introduces Miutsu, National Taiwan University's Alexa Prize TaskBot.
It is designed to assist users in completing tasks requiring multiple steps and decisions in two different domains -- home improvement and cooking.
arXiv Detail & Related papers (2022-05-16T04:56:55Z)
- Interactive Grounded Language Understanding in a Collaborative Environment: IGLU 2021
We propose IGLU: Interactive Grounded Language Understanding in a Collaborative Environment.
The primary goal of the competition is to approach the problem of how to build interactive agents that learn to solve a task while provided with grounded natural language instructions in a collaborative environment.
arXiv Detail & Related papers (2022-05-05T01:20:09Z)
- NeurIPS 2021 Competition IGLU: Interactive Grounded Language Understanding in a Collaborative Environment
We propose IGLU: Interactive Grounded Language Understanding in a Collaborative Environment.
The primary goal of the competition is to approach the problem of how to build interactive agents that learn to solve a task while provided with grounded natural language instructions in a collaborative environment.
This research challenge is naturally related, but not limited, to two fields of study that are highly relevant to the NeurIPS community: Natural Language Understanding and Generation (NLU/G) and Reinforcement Learning (RL).
arXiv Detail & Related papers (2021-10-13T07:13:44Z)
- TEACh: Task-driven Embodied Agents that Chat
We introduce TEACh, a dataset of over 3,000 human--human, interactive dialogues to complete household tasks in simulation.
A Commander with access to oracle information about a task communicates in natural language with a Follower.
We propose three benchmarks using TEACh to study embodied intelligence challenges.
arXiv Detail & Related papers (2021-10-01T17:00:14Z)
- Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration
We introduce Watch-And-Help (WAH), a challenge for testing social intelligence in AI agents.
In WAH, an AI agent needs to help a human-like agent perform a complex household task efficiently.
We build VirtualHome-Social, a multi-agent household environment, and provide a benchmark including both planning and learning based baselines.
arXiv Detail & Related papers (2020-10-19T21:48:31Z)
- ConvAI3: Generating Clarifying Questions for Open-Domain Dialogue Systems (ClariQ)
This document presents a detailed description of the challenge on clarifying questions for dialogue systems (ClariQ).
The challenge is organized as part of the Conversational AI challenge series (ConvAI3) at Search Oriented Conversational AI (SCAI) EMNLP workshop in 2020.
arXiv Detail & Related papers (2020-09-23T19:48:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.