BEHAVIOR in Habitat 2.0: Simulator-Independent Logical Task Description for Benchmarking Embodied AI Agents
- URL: http://arxiv.org/abs/2206.06489v1
- Date: Mon, 13 Jun 2022 21:37:31 GMT
- Title: BEHAVIOR in Habitat 2.0: Simulator-Independent Logical Task Description for Benchmarking Embodied AI Agents
- Authors: Ziang Liu, Roberto Martín-Martín, Fei Xia, Jiajun Wu, Li Fei-Fei
- Abstract summary: We bring a subset of BEHAVIOR activities into Habitat 2.0 to benefit from its fast simulation speed.
Inspired by the catalyzing effect that benchmarks have had in other AI fields, the community is looking for new benchmarks for embodied AI.
- Score: 31.499374840833124
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robots excel at performing repetitive and precision-sensitive tasks in
controlled environments such as warehouses and factories, but they have not yet
been extended to embodied AI agents that provide assistance in household tasks.
Inspired by the catalyzing effect that benchmarks have had in AI fields
such as computer vision and natural language processing, the community is
looking for new benchmarks for embodied AI. Prior work on embodied AI benchmarks
defines tasks using different formalisms, often specific to one environment,
simulator, or domain, making it hard to develop general and comparable
solutions. In this work, we bring a subset of BEHAVIOR activities into Habitat
2.0 to benefit from its fast simulation speed, as a first step towards
demonstrating the ease of adapting activities defined in the logic space into
different simulators.
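
The key idea, that an activity is defined once in logic (BEHAVIOR uses the BDDL language) and then grounded in whichever simulator implements the required state queries, can be illustrated with a short sketch. The code below is an assumption-laden illustration, not the BEHAVIOR or Habitat 2.0 implementation: the `SimulatorState` interface, method names, and object names are hypothetical.

```python
# A minimal sketch (not the authors' code) of a simulator-independent
# logical task definition: the activity is a list of first-order
# predicates over named objects, and each simulator backend
# (e.g. Habitat 2.0) only has to implement a small state-query interface.

from typing import Callable, Dict, List, Protocol, Tuple


class SimulatorState(Protocol):
    """Hypothetical simulator-agnostic state queries (assumed interface)."""

    def on_top(self, obj_a: str, obj_b: str) -> bool: ...
    def inside(self, obj_a: str, obj_b: str) -> bool: ...


# Goal condition in the spirit of a BDDL activity definition, e.g.
# (:goal (and (ontop plate_1 table_1) (inside fork_1 drawer_1)))
Goal = List[Tuple[str, str, str]]
GOAL: Goal = [
    ("ontop", "plate_1", "table_1"),
    ("inside", "fork_1", "drawer_1"),
]


def goal_satisfied(state: SimulatorState, goal: Goal) -> bool:
    """Evaluate every predicate of the logical goal against a backend."""
    checks: Dict[str, Callable[[str, str], bool]] = {
        "ontop": state.on_top,
        "inside": state.inside,
    }
    return all(checks[pred](a, b) for pred, a, b in goal)
```

Under this separation, porting an activity to a new simulator reduces to implementing the state queries, which is the kind of adaptation this work demonstrates for Habitat 2.0.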
Related papers
- BEHAVIOR-1K: A Human-Centered, Embodied AI Benchmark with 1,000 Everyday Activities and Realistic Simulation [63.42591251500825]
We present BEHAVIOR-1K, a comprehensive simulation benchmark for human-centered robotics.
It comprises two components. The first is the definition of 1,000 everyday activities grounded in 50 scenes with more than 9,000 objects annotated with rich physical and semantic properties.
The second is OMNIGIBSON, a novel simulation environment that supports these activities via realistic physics simulation and rendering of rigid bodies, deformable bodies, and liquids.
arXiv Detail & Related papers (2024-03-14T09:48:36Z)
- Learning to navigate efficiently and precisely in real environments [14.52507964172957]
Embodied AI literature focuses on end-to-end agents trained in simulators like Habitat or AI2-THOR.
In this work we explore end-to-end training of agents in simulation in settings that minimize the sim2real gap.
arXiv Detail & Related papers (2024-01-25T17:50:05Z)
- Interactive Planning Using Large Language Models for Partially Observable Robotics Tasks [54.60571399091711]
Large Language Models (LLMs) have achieved impressive results in creating robotic agents for performing open vocabulary tasks.
We present an interactive planning technique for partially observable tasks using LLMs.
arXiv Detail & Related papers (2023-12-11T22:54:44Z)
- CorNav: Autonomous Agent with Self-Corrected Planning for Zero-Shot Vision-and-Language Navigation [73.78984332354636]
CorNav is a novel zero-shot framework for vision-and-language navigation.
It incorporates environmental feedback for refining future plans and adjusting its actions.
It consistently outperforms all baselines in a zero-shot multi-task setting.
arXiv Detail & Related papers (2023-06-17T11:44:04Z)
- Language-Conditioned Imitation Learning with Base Skill Priors under Unstructured Data [26.004807291215258]
Language-conditioned robot manipulation aims to develop robots capable of understanding and executing complex tasks.
We propose a general-purpose, language-conditioned approach that combines base skill priors and imitation learning under unstructured data.
We assess our model's performance in both simulated and real-world environments in a zero-shot setting.
arXiv Detail & Related papers (2023-05-30T14:40:38Z)
- ProcTHOR: Large-Scale Embodied AI Using Procedural Generation [55.485985317538194]
ProcTHOR is a framework for procedural generation of Embodied AI environments.
We demonstrate state-of-the-art results across 6 embodied AI benchmarks for navigation, rearrangement, and arm manipulation.
arXiv Detail & Related papers (2022-06-14T17:09:35Z)
- An in-depth experimental study of sensor usage and visual reasoning of robots navigating in real environments [20.105395754497202]
We study the performance and reasoning capacities of real physical agents, trained in simulation and deployed to two different physical environments.
We show that, for the PointGoal task, an agent pre-trained on a wide variety of tasks and fine-tuned on a simulated version of the target environment can reach competitive performance without modelling any sim2real transfer.
arXiv Detail & Related papers (2021-11-29T16:27:29Z)
- BEHAVIOR: Benchmark for Everyday Household Activities in Virtual, Interactive, and Ecological Environments [70.18430114842094]
We introduce BEHAVIOR, a benchmark for embodied AI with 100 activities in simulation.
These activities are designed to be realistic, diverse, and complex.
We include 500 human demonstrations in virtual reality (VR) to serve as the human ground truth.
arXiv Detail & Related papers (2021-08-06T23:36:23Z)
- The Chef's Hat Simulation Environment for Reinforcement-Learning-Based Agents [54.63186041942257]
We propose a virtual simulation environment that implements the Chef's Hat card game, designed to be used in Human-Robot Interaction scenarios.
The environment provides a controllable and reproducible scenario for evaluating reinforcement-learning algorithms.
arXiv Detail & Related papers (2020-03-12T15:52:49Z)