Novelty Accommodating Multi-Agent Planning in High Fidelity Simulated
Open World
- URL: http://arxiv.org/abs/2306.12654v1
- Date: Thu, 22 Jun 2023 03:44:04 GMT
- Title: Novelty Accommodating Multi-Agent Planning in High Fidelity Simulated
Open World
- Authors: James Chao, Wiktor Piotrowski, Mitch Manzanares, Douglas S. Lange
- Abstract summary: Novelty is an unexpected phenomenon that can alter the core characteristics, composition, and dynamics of the environment.
Previous studies show that novelty has a catastrophic impact on agent performance.
In this work, we demonstrate that a domain-independent AI agent can be adapted to successfully perform and reason with novelty in a realistic high-fidelity simulator of the military domain.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Autonomous agents acting in real-world environments often need to reason with
unknown novelties interfering with their plan execution. Novelty is an
unexpected phenomenon that can alter the core characteristics, composition, and
dynamics of the environment. Novelty can occur at any time in any sufficiently
complex environment without any prior notice or explanation. Previous studies show that novelty has a catastrophic impact on agent performance. Intelligent
agents reason with an internal model of the world to understand the intricacies
of their environment and to successfully execute their plans. The introduction
of novelty into the environment usually renders their internal model inaccurate
and the generated plans no longer applicable. Novelty is particularly prevalent in the real world, where domain-specific and even predicted-novelty-specific approaches are used to mitigate its impact. In this work, we
demonstrate that a domain-independent AI agent designed to detect,
characterize, and accommodate novelty in smaller-scope physics-based games such
as Angry Birds and Cartpole can be adapted to successfully perform and reason with novelty in a realistic high-fidelity simulator of the military domain.
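As a rough, hypothetical illustration of the detect/characterize/accommodate loop the abstract describes (the class and method names below, such as model.predict, model.diagnose, and planner.make_plan, are placeholders and not the paper's actual implementation):

```python
from collections import deque

def run_episode(env, model, planner, surprise_threshold=0.9):
    """Plan with an internal model, monitor execution, and repair/replan on novelty."""
    queue = deque(planner.make_plan(model, env.current_state(), env.goal()))
    while queue:
        action = queue.popleft()
        predicted = model.predict(env.current_state(), action)
        observed = env.execute(action)
        # Detect: a large prediction/observation mismatch suggests an unknown novelty.
        if model.divergence(predicted, observed) > surprise_threshold:
            # Characterize: estimate which part of the internal model is now inaccurate.
            repair = model.diagnose(predicted, observed)
            # Accommodate: patch the model and replan from the newly observed state.
            model.apply_repair(repair)
            queue = deque(planner.make_plan(model, observed, env.goal()))
```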
Related papers
- SynWorld: Virtual Scenario Synthesis for Agentic Action Knowledge Refinement [81.30121762971473]
SynWorld is a framework that allows agents to autonomously explore environments and to optimize and enhance their understanding of actions.
Our experiments demonstrate that SynWorld is an effective and general approach to learning action knowledge in new environments.
arXiv Detail & Related papers (2025-04-04T16:10:57Z)
- A Meta-Engine Framework for Interleaved Task and Motion Planning using Topological Refinements [51.54559117314768]
Task and Motion Planning (TAMP) is the problem of finding a solution to an automated planning problem in which high-level symbolic task decisions must be realized by feasible low-level continuous motions.
We propose a general and open-source framework for modeling and benchmarking TAMP problems.
We introduce an innovative meta-technique to solve TAMP problems involving moving agents and multiple task-state-dependent obstacles.
arXiv Detail & Related papers (2024-08-11T14:57:57Z)
- AgentGen: Enhancing Planning Abilities for Large Language Model based Agent via Environment and Task Generation [81.32722475387364]
Large Language Model-based agents have garnered significant attention and are becoming increasingly popular.
Planning ability is a crucial component of an LLM-based agent, which generally entails achieving a desired goal from an initial state.
Recent studies have demonstrated that utilizing expert-level trajectories for instruction-tuning LLMs effectively enhances their planning capabilities.
arXiv Detail & Related papers (2024-08-01T17:59:46Z)
- Synergising Human-like Responses and Machine Intelligence for Planning in Disaster Response [10.294618771570985]
We propose an attention-based cognitive architecture inspired by Dual Process Theory (DPT).
This framework integrates, in an online fashion, rapid (human-like) responses with the slow but optimized planning capabilities of machine intelligence.
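A minimal sketch of the dual-process idea, assuming a simple time-budget rule for arbitrating between the fast response and the slow planner (the callables and the deadline parameter are illustrative assumptions, not the paper's architecture):

```python
import time

def dual_process_act(state, fast_policy, slow_planner, time_budget_s=0.2):
    """Return a rapid heuristic action, refined by a slower optimizing planner
    only when the remaining time budget allows it."""
    start = time.monotonic()
    proposal = fast_policy(state)                      # rapid, human-like response
    remaining = time_budget_s - (time.monotonic() - start)
    if remaining <= 0:
        return proposal
    refined = slow_planner(state, deadline=remaining)  # slow but optimized planning
    return refined if refined is not None else proposal
```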
arXiv Detail & Related papers (2024-04-15T15:47:08Z)
- HAZARD Challenge: Embodied Decision Making in Dynamically Changing Environments [93.94020724735199]
HAZARD consists of three unexpected disaster scenarios, including fire, flood, and wind.
This benchmark enables us to evaluate autonomous agents' decision-making capabilities across various pipelines.
arXiv Detail & Related papers (2024-01-23T18:59:43Z)
- Agent AI: Surveying the Horizons of Multimodal Interaction [83.18367129924997]
"Agent AI" is a class of interactive systems that can perceive visual stimuli, language inputs, and other environmentally-grounded data.
We envision a future where people can easily create any virtual reality or simulated scene and interact with agents embodied within the virtual environment.
arXiv Detail & Related papers (2024-01-07T19:11:18Z)
- AI planning in the imagination: High-level planning on learned abstract search spaces [68.75684174531962]
We propose a new method, called PiZero, that gives an agent the ability to plan in an abstract search space that the agent learns during training.
We evaluate our method on multiple domains, including the traveling salesman problem, Sokoban, 2048, the facility location problem, and Pacman.
arXiv Detail & Related papers (2023-08-16T22:47:16Z)
- A Domain-Independent Agent Architecture for Adaptive Operation in Evolving Open Worlds [18.805929922009806]
HYDRA is a framework for designing model-based agents operating in mixed discrete-continuous worlds.
It implements a novel meta-reasoning process that enables the agent to monitor its own behavior from a variety of aspects.
The framework has been used to implement novelty-aware agents for three diverse domains.
arXiv Detail & Related papers (2023-06-09T21:54:13Z)
- Human in the Loop Novelty Generation [2.320417845168326]
We introduce a new approach to novelty generation that uses abstract models of environments that do not require domain-dependent human guidance to generate novelties.
We describe our Human-in-the-Loop novelty generation process using our open-source novelty generation library to test baseline agents in two domains: Monopoly and VizDoom.
Our results show that the Human-in-the-Loop method enables users to develop, implement, test, and revise novelties within 4 hours for both the Monopoly and VizDoom domains.
arXiv Detail & Related papers (2023-06-07T22:30:27Z)
- Egocentric Planning for Scalable Embodied Task Achievement [6.870094263016224]
Egocentric Planning is an innovative approach that combines symbolic planning and Object-oriented POMDPs to solve tasks in complex environments.
We evaluated our approach in ALFRED, a simulated environment designed for domestic tasks, and demonstrated its high scalability.
Our method requires reliable perception and the specification or learning of a symbolic description of the preconditions and effects of the agent's actions.
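For intuition, a symbolic description of an action's preconditions and effects might look like the following STRIPS-style sketch; the domain facts and the Action class are hypothetical and only meant to illustrate the kind of specification the method relies on:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    """Hypothetical STRIPS-style action schema over sets of ground facts."""
    name: str
    preconditions: frozenset
    add_effects: frozenset
    del_effects: frozenset

    def applicable(self, state: frozenset) -> bool:
        return self.preconditions <= state

    def apply(self, state: frozenset) -> frozenset:
        return (state - self.del_effects) | self.add_effects

# Illustrative household-style action, in the spirit of ALFRED tasks.
pickup_mug = Action(
    name="pickup(mug)",
    preconditions=frozenset({"at(robot, counter)", "on(mug, counter)", "hand_empty"}),
    add_effects=frozenset({"holding(mug)"}),
    del_effects=frozenset({"on(mug, counter)", "hand_empty"}),
)

state = frozenset({"at(robot, counter)", "on(mug, counter)", "hand_empty"})
if pickup_mug.applicable(state):
    state = pickup_mug.apply(state)  # state now contains "holding(mug)"
```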
arXiv Detail & Related papers (2023-06-02T06:41:24Z)
- NovPhy: A Testbed for Physical Reasoning in Open-world Environments [5.736794130342911]
In the real world, we constantly face novel situations we have not encountered before.
An agent needs to have the ability to function under the impact of novelties in order to properly operate in an open-world physical environment.
We propose a new testbed, NovPhy, that requires an agent to reason about physical scenarios in the presence of novelties.
arXiv Detail & Related papers (2023-03-03T04:59:03Z)
- Characterizing Novelty in the Military Domain [0.0]
In operation, a rich environment is likely to present challenges not seen in training sets or accounted for in engineered models.
A program at the Defense Advanced Research Project Agency (DARPA) seeks to develop agents that are robust to novelty.
This capability will be required before AI can take on the role envisioned for it within mission-critical environments.
arXiv Detail & Related papers (2023-02-23T20:21:24Z)
- Neuro-Symbolic World Models for Adapting to Open World Novelty [9.707805250772129]
We introduce WorldCloner, an end-to-end trainable neuro-symbolic world model for rapid novelty adaptation.
WorldCloner learns an efficient symbolic representation of the pre-novelty environment transitions.
WorldCloner augments the policy learning process using imagination-based adaptation.
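A loose sketch of that recipe, learning symbolic transition rules before novelty, detecting the rules that post-novelty observations contradict, and then updating the policy on imagined rollouts of the repaired model; the rule representation and the train_step hook are assumptions for illustration, not WorldCloner's actual design:

```python
def learn_rules(transitions):
    """Summarize pre-novelty transitions as (state, action) -> next_state.
    A real system would learn compact relational rules; a dict keeps the sketch simple."""
    return {(s, a): s_next for s, a, s_next in transitions}

def violated_rules(rules, observed_transitions):
    """Return the learned rules that post-novelty observations contradict."""
    return {(s, a): s_next
            for s, a, s_next in observed_transitions
            if (s, a) in rules and rules[(s, a)] != s_next}

def adapt_in_imagination(policy, rules, rule_updates, train_step, n_rollouts=100):
    """Repair the symbolic model, then update the policy on imagined rollouts
    so that few real post-novelty interactions are needed."""
    repaired = {**rules, **rule_updates}
    for _ in range(n_rollouts):
        train_step(policy, simulated_model=repaired)  # hypothetical training hook
    return policy, repaired
```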
arXiv Detail & Related papers (2023-01-16T07:49:12Z)
- Evolving Hierarchical Memory-Prediction Machines in Multi-Task Reinforcement Learning [4.030910640265943]
Behavioural agents must generalize across a variety of environments and objectives over time.
We use genetic programming to evolve highly generalized agents capable of operating in six unique environments from the control literature.
We show that emergent hierarchical structure in the evolving programs leads to multi-task agents that succeed by performing a temporal decomposition and encoding of the problem environments in memory.
arXiv Detail & Related papers (2021-06-23T21:34:32Z)
- A Consciousness-Inspired Planning Agent for Model-Based Reinforcement Learning [104.3643447579578]
We present an end-to-end, model-based deep reinforcement learning agent which dynamically attends to relevant parts of its state.
The design allows agents to learn to plan effectively, by attending to the relevant objects, leading to better out-of-distribution generalization.
arXiv Detail & Related papers (2021-06-03T19:35:19Z)
- Emergent Complexity and Zero-shot Transfer via Unsupervised Environment Design [121.73425076217471]
We propose Unsupervised Environment Design (UED), where developers provide environments with unknown parameters, and these parameters are used to automatically produce a distribution over valid, solvable environments.
We call our technique Protagonist Antagonist Induced Regret Environment Design (PAIRED).
Our experiments demonstrate that PAIRED produces a natural curriculum of increasingly complex environments, and PAIRED agents achieve higher zero-shot transfer performance when tested in highly novel environments.
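The regret signal at the core of PAIRED can be stated compactly: the environment designer is rewarded by the gap between an antagonist's return and the protagonist's return on the same generated environment, which favors environments that are solvable yet still challenging. A minimal sketch, with the evaluate function assumed rather than taken from the paper's code:

```python
def paired_regret(env_params, protagonist, antagonist, evaluate):
    """Approximate regret used as the environment designer's reward.

    evaluate(agent, env_params) is assumed to return the agent's episodic
    return on the environment instantiated from env_params.
    """
    protagonist_return = evaluate(protagonist, env_params)
    antagonist_return = evaluate(antagonist, env_params)
    # The designer maximizes this gap: environments the antagonist can solve
    # but the protagonist cannot yet solve yield the highest reward.
    return antagonist_return - protagonist_return
```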
arXiv Detail & Related papers (2020-12-03T17:37:01Z)
- Integrating Egocentric Localization for More Realistic Point-Goal Navigation Agents [90.65480527538723]
We develop point-goal navigation agents that rely on visual estimates of egomotion under noisy action dynamics.
Our agent was the runner-up in the PointNav track of CVPR 2020 Habitat Challenge.
arXiv Detail & Related papers (2020-09-07T16:52:47Z)
- Environment Shaping in Reinforcement Learning using State Abstraction [63.444831173608605]
We propose a novel framework of environment shaping using state abstraction.
Our key idea is to compress the environment's large state space with noisy signals to an abstracted space.
We show that the agent's policy learnt in the shaped environment preserves near-optimal behavior in the original environment.
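A small sketch of the shaping idea, compressing a large, noisy observation into a coarser abstract state before the agent ever sees it; the wrapper interface and the binning abstraction below are placeholder assumptions, not the paper's construction:

```python
class AbstractedEnv:
    """Hypothetical wrapper exposing an abstracted (shaped) state space."""

    def __init__(self, env, abstract):
        self.env = env            # original environment with a large, noisy state space
        self.abstract = abstract  # mapping from raw states to abstract states

    def reset(self):
        return self.abstract(self.env.reset())

    def step(self, action):
        state, reward, done = self.env.step(action)
        return self.abstract(state), reward, done

def coarse_bins(state, resolution=10):
    """Placeholder abstraction: discretize each raw state dimension into coarse bins."""
    return tuple(int(x * resolution) for x in state)
```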
arXiv Detail & Related papers (2020-06-23T17:00:22Z)