Unified Distributed Environment
- URL: http://arxiv.org/abs/2205.06946v1
- Date: Sat, 14 May 2022 02:27:35 GMT
- Title: Unified Distributed Environment
- Authors: Woong Gyu La, Sunil Muralidhara, Lingjie Kong, Pratik Nichat
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose Unified Distributed Environment (UDE), an environment
virtualization toolkit for reinforcement learning research. UDE is designed to
integrate environments built on any simulation platform such as Gazebo, Unity,
Unreal, and OpenAI Gym. Through environment virtualization, UDE enables
offloading the environment for execution on a remote machine while still
maintaining a unified interface. The UDE interface supports multi-agent
environments by default. With environment virtualization and this interface
design, agent policies for a multi-agent environment can be trained across
multiple machines. Furthermore, UDE integrates with existing major RL toolkits
so that researchers can leverage their benefits. This paper
discusses the components of UDE and its design decisions.
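As a rough illustration of the idea in the abstract, the sketch below shows what a unified, multi-agent-by-default environment interface with a remote-execution proxy could look like. All class and method names here are hypothetical assumptions for illustration, not the actual UDE API; a real deployment would back the proxy with a network transport (e.g. gRPC) to a simulator such as Gazebo, Unity, Unreal, or OpenAI Gym.

```python
# Hypothetical sketch of a UDE-style unified environment interface.
# Names (UnifiedEnv, RemoteEnvProxy, etc.) are illustrative, NOT the real UDE API.
from abc import ABC, abstractmethod
from typing import Any, Dict, Tuple


class UnifiedEnv(ABC):
    """A gym-like interface that is multi-agent by default:
    observations, rewards, and dones are dicts keyed by agent id."""

    @abstractmethod
    def reset(self) -> Dict[str, Any]: ...

    @abstractmethod
    def step(self, actions: Dict[str, Any]) -> Tuple[
        Dict[str, Any], Dict[str, float], Dict[str, bool], Dict[str, Any]
    ]: ...


class LocalGridEnv(UnifiedEnv):
    """Trivial in-process backend standing in for a simulator.
    Each agent walks along a line until it reaches `goal`."""

    def __init__(self, agent_ids=("agent_0", "agent_1"), goal=3):
        self.agent_ids = agent_ids
        self.goal = goal
        self.pos: Dict[str, int] = {}

    def reset(self):
        self.pos = {a: 0 for a in self.agent_ids}
        return dict(self.pos)

    def step(self, actions):
        obs, rewards, dones, infos = {}, {}, {}, {}
        for a, move in actions.items():
            self.pos[a] += move
            obs[a] = self.pos[a]
            reached = self.pos[a] >= self.goal
            rewards[a] = 1.0 if reached else 0.0
            dones[a] = reached
            infos[a] = {}
        return obs, rewards, dones, infos


class RemoteEnvProxy(UnifiedEnv):
    """Stands in for a virtualized environment: in a real deployment the
    backend would be a stub talking to an env on a remote machine, but the
    training loop sees the same UnifiedEnv interface either way."""

    def __init__(self, backend: UnifiedEnv):
        self._backend = backend

    def reset(self):
        return self._backend.reset()

    def step(self, actions):
        return self._backend.step(actions)


# Training-loop view: identical whether the env is local or offloaded.
env = RemoteEnvProxy(LocalGridEnv())
obs = env.reset()
total = {a: 0.0 for a in obs}
for _ in range(3):
    obs, rew, done, _ = env.step({a: 1 for a in obs})
    for a, r in rew.items():
        total[a] += r
print(total)
```

The point of the proxy is that offloading is transparent: because both the local backend and the remote stub implement the same interface, agent policies can be trained on machines other than the one running the simulator without changing the training loop.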
Related papers
- OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments [87.41051677852231]
We introduce OSWorld, the first-of-its-kind scalable, real computer environment for multimodal agents.
OSWorld can serve as a unified, integrated computer environment for assessing open-ended computer tasks.
We create a benchmark of 369 computer tasks involving real web and desktop apps in open domains, OS file I/O, and spanning multiple applications.
arXiv Detail & Related papers (2024-04-11T17:56:05Z)
- A Hybrid Execution Environment for Computer-Interpretable Guidelines in PROforma [42.15267357325546]
We develop a hybrid execution environment for computer-interpretable guidelines (CIGs) in PROforma.
The proposed environment is part of the CAPABLE system which provides coaching for cancer patients and decision support for physicians.
arXiv Detail & Related papers (2024-04-07T16:32:37Z)
- AgentStudio: A Toolkit for Building General Virtual Agents [57.02375267926862]
General virtual agents need to handle multimodal observations, master complex action spaces, and self-improve in dynamic, open-domain environments.
AgentStudio provides a lightweight, interactive environment with highly generic observation and action spaces.
It integrates tools for creating online benchmark tasks, annotating GUI elements, and labeling actions in videos.
Based on our environment and tools, we curate an online task suite that benchmarks both GUI interactions and function calling with efficient auto-evaluation.
arXiv Detail & Related papers (2024-03-26T17:54:15Z)
- Enhancing Graph Representation of the Environment through Local and Cloud Computation [2.9465623430708905]
We propose a graph-based representation that provides a semantic representation of robot environments from multiple sources.
To acquire information from the environment, the framework combines classical computer vision tools with modern computer vision cloud services.
The proposed approach also allows us to handle small objects and integrate them into the semantic representation of the environment.
arXiv Detail & Related papers (2023-09-22T08:05:32Z)
- EnvEdit: Environment Editing for Vision-and-Language Navigation [98.30038910061894]
In Vision-and-Language Navigation (VLN), an agent needs to navigate through the environment based on natural language instructions.
We propose EnvEdit, a data augmentation method that creates new environments by editing existing environments.
We show that our proposed EnvEdit method gets significant improvements in all metrics on both pre-trained and non-pre-trained VLN agents.
arXiv Detail & Related papers (2022-03-29T15:44:32Z)
- Composing Complex and Hybrid AI Solutions [52.00820391621739]
We describe an extension of the Acumos system towards enabling the above features for general AI applications.
Our extensions include support for more generic components with gRPC/Protobuf interfaces.
We provide examples of deployable solutions and their interfaces.
arXiv Detail & Related papers (2022-02-25T08:57:06Z)
- SILG: The Multi-environment Symbolic Interactive Language Grounding Benchmark [62.34200575624785]
We propose SILG, a multi-environment Symbolic Interactive Language Grounding benchmark.
SILG consists of grid-world environments that require generalization to new dynamics, entities, and partially observed worlds (RTFM, Messenger, NetHack).
We use SILG to evaluate recent advances such as egocentric local convolution, recurrent state-tracking, entity-centric attention, and pretrained language models.
arXiv Detail & Related papers (2021-10-20T17:02:06Z)
- SMASH: a Semantic-enabled Multi-agent Approach for Self-adaptation of Human-centered IoT [0.8602553195689512]
This paper presents SMASH: a multi-agent approach for self-adaptation of IoT applications in human-centered environments.
SMASH agents are provided with a 4-layer architecture based on the BDI agent model that integrates human values with goal-reasoning, planning, and acting.
arXiv Detail & Related papers (2021-05-31T12:33:27Z)
- Emergent Complexity and Zero-shot Transfer via Unsupervised Environment Design [121.73425076217471]
We propose Unsupervised Environment Design (UED), where developers provide environments with unknown parameters, and these parameters are used to automatically produce a distribution over valid, solvable environments.
We call our technique Protagonist Antagonist Induced Regret Environment Design (PAIRED).
Our experiments demonstrate that PAIRED produces a natural curriculum of increasingly complex environments, and PAIRED agents achieve higher zero-shot transfer performance when tested in highly novel environments.
arXiv Detail & Related papers (2020-12-03T17:37:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.