VECA : A Toolkit for Building Virtual Environments to Train and Test
Human-like Agents
- URL: http://arxiv.org/abs/2105.00762v1
- Date: Mon, 3 May 2021 11:42:27 GMT
- Title: VECA : A Toolkit for Building Virtual Environments to Train and Test
Human-like Agents
- Authors: Kwanyoung Park, Hyunseok Oh, Youngki Lee
- Abstract summary: We propose a novel VR-based toolkit, VECA, which enables building rich virtual environments to train and test human-like agents.
VECA provides a humanoid agent and an environment manager, enabling the agent to receive rich human-like perception and perform comprehensive interactions.
To motivate VECA, we also provide 24 interactive tasks, which represent (but are not limited to) four essential aspects of early human development.
- Score: 5.366273200529158
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Building human-like agents, which aim to learn and think like human
intelligence, has long been an important research topic in AI. To train and
test human-like agents, we need an environment that exposes the agent to rich
multimodal perception and allows it to interact comprehensively, while also
being easily extensible for developing custom tasks. However, existing
approaches do not support comprehensive interaction with the environment or
lack variety in modalities. Moreover, most approaches make it difficult or even
impossible to implement custom tasks. In this paper, we propose a novel
VR-based toolkit, VECA, which enables building rich virtual environments to
train and test human-like agents. In particular, VECA provides a humanoid agent
and an environment manager, enabling the agent to receive rich human-like
perception and perform comprehensive interactions. To motivate VECA, we also
provide 24 interactive tasks, which represent (but are not limited to) four
essential aspects of early human development: joint-level locomotion and
control, understanding contexts of objects, multimodal learning, and
multi-agent learning. To demonstrate the usefulness of VECA for training and
testing human-like learning agents, we conduct experiments on VECA and show
that users can build challenging tasks that engage human-like algorithms, and
that the features supported by VECA are critical for training human-like agents.
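The abstract describes an agent/environment-manager split in which a humanoid agent receives multimodal (vision, audio, tactile) observations and issues joint-level control actions. The sketch below illustrates that interaction pattern only; the class and method names (`EnvironmentManager`, `reset`, `step`) are illustrative assumptions in the style of common RL toolkits, not VECA's actual API.

```python
import random

class EnvironmentManager:
    """Hypothetical environment manager serving multimodal
    observations to a humanoid agent (illustrative, not VECA's API)."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.t = 0

    def reset(self):
        """Start a new episode and return the first observation."""
        self.t = 0
        return self._observe()

    def step(self, action):
        """Apply joint-level control signals, then return the next
        multimodal observation, a reward, and a done flag."""
        self.t += 1
        obs = self._observe()
        reward = -sum(a * a for a in action)  # placeholder control cost
        done = self.t >= 10
        return obs, reward, done

    def _observe(self):
        # Rich human-like perception: vision, audio, and tactile channels.
        return {
            "vision": [self.rng.random() for _ in range(4)],
            "audio": [self.rng.random() for _ in range(2)],
            "tactile": [self.rng.random() for _ in range(3)],
        }

# Interaction loop for a trivial agent.
env = EnvironmentManager()
obs = env.reset()
total_reward = 0.0
done = False
while not done:
    action = [0.0, 0.0]  # a no-op joint command
    obs, reward, done = env.step(action)
    total_reward += reward
```

A learning agent would replace the no-op action with a policy conditioned on the multimodal observation dictionary; custom tasks would correspond to different reward and termination logic inside the manager.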
Related papers
- Designing AI Personalities: Enhancing Human-Agent Interaction Through Thoughtful Persona Design [7.610735476681428]
This workshop aims to establish a research community focused on AI agent persona design for various contexts.
We will explore critical aspects of persona design, such as voice, embodiment, and demographics, and their impact on user satisfaction and engagement.
Topics include the design of conversational interfaces, the influence of agent personas on user experience, and approaches for creating contextually appropriate AI agents.
arXiv Detail & Related papers (2024-10-30T06:58:59Z)
- A Survey on Complex Tasks for Goal-Directed Interactive Agents [60.53915548970061]
This survey compiles relevant tasks and environments for evaluating goal-directed interactive agents.
An up-to-date compilation of relevant resources can be found on our project website.
arXiv Detail & Related papers (2024-09-27T08:17:53Z)
- Agent AI: Surveying the Horizons of Multimodal Interaction [83.18367129924997]
"Agent AI" is a class of interactive systems that can perceive visual stimuli, language inputs, and other environmentally-grounded data.
We envision a future where people can easily create any virtual reality or simulated scene and interact with agents embodied within the virtual environment.
arXiv Detail & Related papers (2024-01-07T19:11:18Z)
- The Rise and Potential of Large Language Model Based Agents: A Survey [91.71061158000953]
Large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI).
We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents.
We explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation.
arXiv Detail & Related papers (2023-09-14T17:12:03Z)
- WebArena: A Realistic Web Environment for Building Autonomous Agents [92.3291458543633]
We build an environment for language-guided agents that is highly realistic and reproducible.
We focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains.
We release a set of benchmark tasks focusing on evaluating the functional correctness of task completions.
arXiv Detail & Related papers (2023-07-25T22:59:32Z)
- Multi-Agent Interplay in a Competitive Survival Environment [0.0]
This work is part of the author's thesis "Multi-Agent Interplay in a Competitive Survival Environment" for the Master's Degree in Artificial Intelligence and Robotics at Sapienza University of Rome, 2022.
arXiv Detail & Related papers (2023-01-19T12:04:03Z)
- Creating Multimodal Interactive Agents with Imitation and Self-Supervised Learning [20.02604302565522]
A common vision from science fiction is that robots will one day inhabit our physical spaces, sense the world as we do, assist our physical labours, and communicate with us through natural language.
Here we study how to design artificial agents that can interact naturally with humans using the simplification of a virtual environment.
We show that imitation learning of human-human interactions in a simulated world, in conjunction with self-supervised learning, is sufficient to produce a multimodal interactive agent, which we call MIA, that successfully interacts with non-adversarial humans 75% of the time.
arXiv Detail & Related papers (2021-12-07T15:17:27Z)
- Imitating Interactive Intelligence [24.95842455898523]
We study how to design artificial agents that can interact naturally with humans using the simplification of a virtual environment.
To build agents that can robustly interact with humans, we would ideally train them while they interact with humans.
We use ideas from inverse reinforcement learning to reduce the disparities between human-human and agent-agent interactive behaviour.
arXiv Detail & Related papers (2020-12-10T13:55:47Z)
- Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration [116.28433607265573]
We introduce Watch-And-Help (WAH), a challenge for testing social intelligence in AI agents.
In WAH, an AI agent needs to help a human-like agent perform a complex household task efficiently.
We build VirtualHome-Social, a multi-agent household environment, and provide a benchmark including both planning and learning based baselines.
arXiv Detail & Related papers (2020-10-19T21:48:31Z)
- LEMMA: A Multi-view Dataset for Learning Multi-agent Multi-task Activities [119.88381048477854]
We introduce the LEMMA dataset to provide a single home to address missing dimensions with meticulously designed settings.
We densely annotate the atomic-actions with human-object interactions to provide ground-truths of the compositionality, scheduling, and assignment of daily activities.
We hope this effort would drive the machine vision community to examine goal-directed human activities and further study the task scheduling and assignment in the real world.
arXiv Detail & Related papers (2020-07-31T00:13:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.