NovelGym: A Flexible Ecosystem for Hybrid Planning and Learning Agents
Designed for Open Worlds
- URL: http://arxiv.org/abs/2401.03546v1
- Date: Sun, 7 Jan 2024 17:13:28 GMT
- Title: NovelGym: A Flexible Ecosystem for Hybrid Planning and Learning Agents
Designed for Open Worlds
- Authors: Shivam Goel, Yichen Wei, Panagiotis Lymperopoulos, Matthias Scheutz,
Jivko Sinapov
- Abstract summary: NovelGym is a flexible ecosystem designed to simulate gridworld environments.
It serves as a robust platform for benchmarking reinforcement learning (RL) and hybrid planning and learning agents in open-world contexts.
- Score: 18.53489803464924
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As AI agents leave the lab and venture into the real world as autonomous
vehicles, delivery robots, and cooking robots, it is increasingly necessary to
design and comprehensively evaluate algorithms that tackle the ``open-world''.
To this end, we introduce NovelGym, a flexible and adaptable ecosystem designed
to simulate gridworld environments, serving as a robust platform for
benchmarking reinforcement learning (RL) and hybrid planning and learning
agents in open-world contexts. The modular architecture of NovelGym facilitates
rapid creation and modification of task environments, including multi-agent
scenarios, with multiple environment transformations, thus providing a dynamic
testbed for researchers to develop open-world AI agents.
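To make the kind of interface such a gridworld platform exposes concrete, the sketch below implements a minimal Gym-style environment with a novelty-injection hook that mimics mid-training environment transformations. All class, method, and reward names here are illustrative assumptions for exposition, not NovelGym's actual API.

```python
class MiniGridWorld:
    """Toy 5x5 gridworld illustrating a Gym-style reset/step interface
    with open-world novelty injection. Names and rewards are invented
    for illustration; this is not NovelGym's real API."""

    # action id -> (row delta, col delta): right, left, down, up
    ACTIONS = {0: (0, 1), 1: (0, -1), 2: (1, 0), 3: (-1, 0)}

    def __init__(self, size=5):
        self.size = size
        self.goal = (size - 1, size - 1)
        self.blocked = set()  # cells made impassable by a "novelty"
        self.agent = (0, 0)

    def reset(self):
        """Place the agent at the origin and return the observation."""
        self.agent = (0, 0)
        return self.agent

    def step(self, action):
        """Apply an action; blocked or out-of-bounds moves are no-ops."""
        dr, dc = self.ACTIONS[action]
        r, c = self.agent
        nxt = (r + dr, c + dc)
        in_bounds = 0 <= nxt[0] < self.size and 0 <= nxt[1] < self.size
        if in_bounds and nxt not in self.blocked:
            self.agent = nxt
        done = self.agent == self.goal
        reward = 1.0 if done else -0.01  # small step cost, goal bonus
        return self.agent, reward, done, {}

    def inject_novelty(self, cells):
        """Environment transformation: block cells mid-episode,
        mimicking the novelty injections an open-world testbed applies."""
        self.blocked.update(cells)
```

An agent trained against `step`/`reset` alone must re-plan or re-learn once `inject_novelty` changes the transition dynamics, which is exactly the evaluation regime open-world benchmarks target.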
Related papers
- Agent World Model: Infinity Synthetic Environments for Agentic Reinforcement Learning [62.499592503950026]
Large language models (LLMs) have empowered autonomous agents to perform complex tasks that require multi-turn interactions with tools and environments.
We propose Agent World Model (AWM), a fully synthetic environment generation pipeline.
We scale to 1,000 environments covering everyday scenarios, in which agents can interact with rich toolsets.
arXiv Detail & Related papers (2026-02-10T18:55:41Z) - Breaking Task Impasses Quickly: Adaptive Neuro-Symbolic Learning for Open-World Robotics [0.7614628596146601]
We present a neuro-symbolic framework integrating hierarchical abstractions, task and motion planning (TAMP), and reinforcement learning to enable rapid adaptation in robotics.
Our architecture combines symbolic goal-oriented learning and world-model-based exploration to facilitate rapid adaptation to environmental changes.
arXiv Detail & Related papers (2026-01-01T17:58:05Z) - Imagined Autocurricula [37.48175026521408]
Training agents to act in embodied environments typically requires vast training data or access to accurate simulation.
World models are emerging as an alternative, leveraging offline, passively collected data.
We propose IMAC, which induces an automatic curriculum over generated worlds.
arXiv Detail & Related papers (2025-09-11T23:55:39Z) - Genie Envisioner: A Unified World Foundation Platform for Robotic Manipulation [65.30763239365928]
We introduce Genie Envisioner (GE), a unified world foundation platform for robotic manipulation.
GE integrates policy learning, evaluation, and simulation within a single video-generative framework.
arXiv Detail & Related papers (2025-08-07T17:59:44Z) - CREW-WILDFIRE: Benchmarking Agentic Multi-Agent Collaborations at Scale [4.464959191643012]
We introduce CREW-Wildfire, an open-source benchmark designed to evaluate next-generation multi-agent Agentic AI frameworks.
CREW-Wildfire offers procedurally generated wildfire response scenarios featuring large maps, heterogeneous agents, partial observability, dynamics, and long-horizon planning objectives.
We implement and evaluate several state-of-the-art LLM-based multi-agent Agentic AI frameworks, uncovering significant performance gaps.
arXiv Detail & Related papers (2025-07-07T16:33:42Z) - SynWorld: Virtual Scenario Synthesis for Agentic Action Knowledge Refinement [81.30121762971473]
SynWorld is a framework that allows agents to autonomously explore environments, optimize, and enhance their understanding of actions.
Our experiments demonstrate that SynWorld is an effective and general approach to learning action knowledge in new environments.
arXiv Detail & Related papers (2025-04-04T16:10:57Z) - Exploration-Driven Generative Interactive Environments [53.05314852577144]
We focus on using many virtual environments for inexpensive, automatically collected interaction data.
We propose a training framework merely using a random agent in virtual environments.
Our agent is fully independent of environment-specific rewards and thus adapts easily to new environments.
arXiv Detail & Related papers (2025-04-03T12:01:41Z) - LLM-mediated Dynamic Plan Generation with a Multi-Agent Approach [1.0124625066746595]
We propose a method for generating networks capable of adapting to dynamic environments.
The proposed method collects environmental "status," representing conditions and goals, and uses them to generate agents.
These agents are interconnected on the basis of specific conditions, resulting in networks that combine flexibility and generality.
arXiv Detail & Related papers (2025-04-02T11:42:49Z) - Inter-environmental world modeling for continuous and compositional dynamics [7.01176359680407]
We introduce Lie Action, an unsupervised framework that learns continuous latent action representations to simulate across environments.
We demonstrate that WLA can be trained using only video frames and, with minimal or no action labels, can quickly adapt to new environments with novel action sets.
arXiv Detail & Related papers (2025-03-13T00:02:54Z) - Curiosity-Driven Imagination: Discovering Plan Operators and Learning Associated Policies for Open-World Adaptation [7.406934849952094]
Adapting quickly to dynamic, uncertain environments is a major challenge in robotics.
Traditional Task and Motion Planning approaches struggle to cope with unforeseen changes, are data-inefficient when adapting, and do not leverage world models during learning.
We address this issue with a hybrid planning and learning system that integrates two models, one of which is a low-level neural-network-based model that learns transitions and drives exploration via an Intrinsic Curiosity Module (ICM).
Our evaluation in a robotic manipulation domain with sequential novelty injections demonstrates that our approach converges faster and outperforms state-of-the-art hybrid methods.
arXiv Detail & Related papers (2025-03-06T20:02:26Z) - Trajectory World Models for Heterogeneous Environments [67.27233466954814]
Heterogeneity in sensors and actuators across environments poses a significant challenge to building large-scale pre-trained world models.
We introduce UniTraj, a unified dataset comprising over one million trajectories from 80 environments, designed to scale data while preserving critical diversity.
We propose TrajWorld, a novel architecture capable of flexibly handling varying sensor and actuator information and capturing environment dynamics in-context.
arXiv Detail & Related papers (2025-02-03T13:59:08Z) - Robotic World Model: A Neural Network Simulator for Robust Policy Optimization in Robotics [50.191655141020505]
This work advances model-based reinforcement learning by addressing the challenges of long-horizon prediction, error accumulation, and sim-to-real transfer.
By providing a scalable and robust framework, the introduced methods pave the way for adaptive and efficient robotic systems in real-world applications.
arXiv Detail & Related papers (2025-01-17T10:39:09Z) - Transforming the Hybrid Cloud for Emerging AI Workloads [81.15269563290326]
This white paper envisions transforming hybrid cloud systems to meet the growing complexity of AI workloads.
The proposed framework addresses critical challenges in energy efficiency, performance, and cost-effectiveness.
This joint initiative aims to establish hybrid clouds as secure, efficient, and sustainable platforms.
arXiv Detail & Related papers (2024-11-20T11:57:43Z) - OpenWebVoyager: Building Multimodal Web Agents via Iterative Real-World Exploration, Feedback and Optimization [66.22117723598872]
We introduce an open-source framework designed to facilitate the development of multimodal web agents.
We first train the base model with imitation learning to gain the basic abilities.
We then let the agent explore the open web and collect feedback on its trajectories.
arXiv Detail & Related papers (2024-10-25T15:01:27Z) - Agent AI: Surveying the Horizons of Multimodal Interaction [83.18367129924997]
"Agent AI" is a class of interactive systems that can perceive visual stimuli, language inputs, and other environmentally-grounded data.
We envision a future where people can easily create any virtual reality or simulated scene and interact with agents embodied within the virtual environment.
arXiv Detail & Related papers (2024-01-07T19:11:18Z) - Learning Curricula in Open-Ended Worlds [17.138779075998084]
This thesis develops a class of methods called Unsupervised Environment Design (UED)
Given an environment design space, UED automatically generates an infinite sequence or curriculum of training environments.
The findings in this thesis show that UED autocurricula can produce RL agents exhibiting significantly improved robustness.
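As a rough illustration of the UED loop summarized above, the toy sketch below co-adapts a one-dimensional environment design parameter (obstacle count) with a scalar stand-in for agent skill, proposing harder levels as the agent improves. The function names, scoring rule, and update constants are all invented for illustration and are not the thesis's actual algorithm.

```python
import random


def make_level(difficulty, rng):
    """Sample a level from a toy 1-D design space: obstacle count."""
    return {"obstacles": rng.randint(0, difficulty)}


def agent_score(level, skill):
    """Stand-in for evaluating the agent: success falls with obstacles."""
    return max(0.0, skill - 0.1 * level["obstacles"])


def ued_curriculum(steps=50, seed=0):
    """Caricature of Unsupervised Environment Design: keep the
    curriculum at the frontier of the agent's ability by raising
    difficulty on easy levels and backing off on unsolvable ones."""
    rng = random.Random(seed)
    skill, difficulty = 0.2, 1
    for _ in range(steps):
        level = make_level(difficulty, rng)
        score = agent_score(level, skill)
        skill = min(1.0, skill + 0.01 * score)  # "training" on the level
        if score > 0.5:                 # level too easy: widen design space
            difficulty += 1
        elif score == 0.0:              # level unsolvable: back off
            difficulty = max(1, difficulty - 1)
    return skill, difficulty
```

Real UED methods replace `agent_score` with regret estimates from actual RL rollouts, but the control flow (generate, evaluate, adapt the generator) follows this shape.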
arXiv Detail & Related papers (2023-12-03T16:44:00Z) - Arbitrarily Scalable Environment Generators via Neural Cellular Automata [55.150593161240444]
We show that NCA environment generators maintain consistent, regularized patterns regardless of environment size.
Our method scales a single-agent reinforcement learning policy to arbitrarily large environments with similar patterns.
arXiv Detail & Related papers (2023-10-28T07:30:09Z) - Octopus: Embodied Vision-Language Programmer from Environmental Feedback [58.04529328728999]
Embodied vision-language models (VLMs) have achieved substantial progress in multimodal perception and reasoning.
To bridge this gap, we introduce Octopus, an embodied vision-language programmer that uses executable code generation as a medium to connect planning and manipulation.
Octopus is designed to 1) proficiently comprehend an agent's visual and textual task objectives, 2) formulate intricate action sequences, and 3) generate executable code.
arXiv Detail & Related papers (2023-10-12T17:59:58Z) - GriddlyJS: A Web IDE for Reinforcement Learning [7.704064306361941]
We introduce GriddlyJS, a web-based Integrated Development Environment (IDE) based on the Griddly engine.
GriddlyJS allows researchers to visually design and debug arbitrary, complex PCG grid-world environments.
By connecting the RL workflow to the advanced functionality enabled by modern web standards, GriddlyJS allows publishing interactive agent-environment demos.
arXiv Detail & Related papers (2022-07-13T10:26:38Z) - SAGCI-System: Towards Sample-Efficient, Generalizable, Compositional,
and Incremental Robot Learning [41.19148076789516]
We introduce a systematic learning framework called SAGCI-system towards achieving the above four requirements.
Our system first takes the raw point clouds gathered by the camera mounted on the robot's wrist as the inputs and produces initial modeling of the surrounding environment represented as a URDF.
The robot then utilizes the interactive perception to interact with the environments to online verify and modify the URDF.
arXiv Detail & Related papers (2021-11-29T16:53:49Z) - Modular Procedural Generation for Voxel Maps [2.6811189633660613]
In this paper, we present mcg, an open-source library to facilitate implementing PCG algorithms for voxel-based environments such as Minecraft.
The library is designed with human-machine teaming research in mind, and thus takes a 'top-down' approach to generation.
The benefits of this approach include rapid, scalable, and efficient development of virtual environments, the ability to control the statistics of the environment at a semantic level, and the ability to generate novel environments in response to player actions in real time.
arXiv Detail & Related papers (2021-04-18T16:21:35Z) - A Framework for Learning Predator-prey Agents from Simulation to Real
World [0.0]
We propose an evolutionary predator-prey robot system that can be implemented both in simulation and in the real world.
Both the predators and prey are co-evolved by NeuroEvolution of Augmenting Topologies (NEAT) to learn the expected behaviours.
For the convenience of users, the source code and videos of both the simulated and real-world experiments are published on GitHub.
arXiv Detail & Related papers (2020-10-29T17:33:38Z) - The Chef's Hat Simulation Environment for Reinforcement-Learning-Based
Agents [54.63186041942257]
We propose a virtual simulation environment that implements the Chef's Hat card game, designed to be used in Human-Robot Interaction scenarios.
This paper provides a controllable and reproducible scenario for reinforcement-learning algorithms.
arXiv Detail & Related papers (2020-03-12T15:52:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.