Self-Initiated Open World Learning for Autonomous AI Agents
- URL: http://arxiv.org/abs/2110.11385v3
- Date: Thu, 29 Feb 2024 04:50:25 GMT
- Title: Self-Initiated Open World Learning for Autonomous AI Agents
- Authors: Bing Liu, Eric Robertson, Scott Grigsby, Sahisnu Mazumder
- Abstract summary: As more and more AI agents are used in practice, it is time to think about how to make these agents fully autonomous.
This paper proposes a theoretical framework for this learning paradigm to promote research on building Self-initiated Open world Learning (SOL) agents.
- Score: 16.41396764793912
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As more and more AI agents are used in practice, it is time to think about
how to make these agents fully autonomous so that they can learn by themselves
in a self-motivated and self-supervised manner rather than being retrained
periodically on the initiation of human engineers using expanded training data.
As the real world is an open environment with unknowns or novelties, detecting
novelties or unknowns, characterizing them, accommodating or adapting to them,
gathering ground-truth training data, and incrementally learning the
unknowns/novelties are critical to making the agent more and more knowledgeable
and powerful over time. The key challenge is how to automate the process so
that it is carried out on the agent's own initiative and through its own
interactions with humans and the environment. Since an AI agent usually has a
performance task, characterizing each novelty becomes critical and necessary so
that the agent can formulate an appropriate response to adapt its behavior to
accommodate the novelty and to learn from it to improve the agent's adaptation
capability and task performance. This process continues without
termination. This paper proposes a theoretical framework for this learning
paradigm to promote the research of building Self-initiated Open world Learning
(SOL) agents. An example SOL agent is also described.
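The detect, characterize, adapt, and learn cycle described in the abstract can be sketched as a minimal loop. Everything below (the class, method names, and the set-membership novelty test) is a hypothetical illustration of the paradigm, not code from the paper:

```python
# Minimal illustrative sketch of a Self-initiated Open world Learning (SOL)
# loop. All names here are hypothetical; the paper defines the framework
# conceptually, not as code.

class SOLAgent:
    def __init__(self):
        self.known = set()  # concepts the agent has already learned

    def is_novel(self, item):
        # Novelty/unknown detection (toy version: set membership).
        return item not in self.known

    def characterize(self, item):
        # In a real agent: describe the novelty relative to the task.
        return {"instance": item}

    def gather_ground_truth(self, profile):
        # In a real agent: interact with humans/environment to get labels.
        return profile["instance"]

    def learn_incrementally(self, label):
        # Incremental learning step; the loop never terminates.
        self.known.add(label)

def sol_step(agent, observation):
    """One cycle: detect -> characterize -> gather labels -> learn."""
    if agent.is_novel(observation):
        profile = agent.characterize(observation)
        label = agent.gather_ground_truth(profile)
        agent.learn_incrementally(label)
        return "learned"
    return "known"

agent = SOLAgent()
first = sol_step(agent, "red traffic cone")   # novel on first sight: "learned"
second = sol_step(agent, "red traffic cone")  # already accommodated: "known"
```

Repeating `sol_step` over a stream of observations makes the agent more knowledgeable over time, which is the intent of the SOL paradigm.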
Related papers
- Memento No More: Coaching AI Agents to Master Multiple Tasks via Hints Internalization [56.674356045200696]
We propose a novel method to train AI agents to incorporate knowledge and skills for multiple tasks without the need for cumbersome note systems or prior high-quality demonstration data.
Our approach employs an iterative process where the agent collects new experiences, receives corrective feedback from humans in the form of hints, and integrates this feedback into its weights.
We demonstrate the efficacy of our approach by implementing it in a Llama-3-based agent which, after only a few rounds of feedback, outperforms advanced models GPT-4o and DeepSeek-V3 on a set of tasks.
arXiv Detail & Related papers (2025-02-03T17:45:46Z) - Agents Are Not Enough [16.142735071162765]
Autonomous programs that act on behalf of humans are neither new nor exclusive to the mainstream AI movement.
To make the current wave of agents effective and sustainable, we envision an ecosystem that includes Sims, which represent user preferences and behaviors, as well as Assistants, which directly interact with the user and coordinate the execution of user tasks with the help of the agents.
arXiv Detail & Related papers (2024-12-19T16:54:17Z) - Proposer-Agent-Evaluator(PAE): Autonomous Skill Discovery For Foundation Model Internet Agents [64.75036903373712]
Proposer-Agent-Evaluator is a learning system that enables foundation model agents to autonomously discover and practice skills in the wild.
At the heart of PAE is a context-aware task proposer that, conditioned on context information, autonomously proposes tasks for the agent to practice.
The success evaluation serves as the reward signal for the agent to refine its policies through RL.
arXiv Detail & Related papers (2024-12-17T18:59:50Z) - Gödel Agent: A Self-Referential Agent Framework for Recursive Self-Improvement [117.94654815220404]
Gödel Agent is a self-evolving framework inspired by the Gödel machine.
Gödel Agent can achieve continuous self-improvement, surpassing manually crafted agents in performance, efficiency, and generalizability.
arXiv Detail & Related papers (2024-10-06T10:49:40Z) - Symbolic Learning Enables Self-Evolving Agents [55.625275970720374]
We introduce agent symbolic learning, a systematic framework that enables language agents to optimize themselves on their own.
Agent symbolic learning is designed to optimize the symbolic network within language agents by mimicking two fundamental algorithms in connectionist learning.
We conduct proof-of-concept experiments on both standard benchmarks and complex real-world tasks.
arXiv Detail & Related papers (2024-06-26T17:59:18Z) - Pangu-Agent: A Fine-Tunable Generalist Agent with Structured Reasoning [50.47568731994238]
A key method for creating Artificial Intelligence (AI) agents is Reinforcement Learning (RL).
This paper presents a general framework model for integrating and learning structured reasoning into AI agents' policies.
arXiv Detail & Related papers (2023-12-22T17:57:57Z) - Efficient Open-world Reinforcement Learning via Knowledge Distillation and Autonomous Rule Discovery [5.680463564655267]
We present the rule-driven deep Q-learning agent (RDQ) as one possible implementation of the framework.
We show that RDQ successfully extracts task-specific rules as it interacts with the world.
In experiments, we show that the RDQ agent is significantly more resilient to novelties than the baseline agents.
arXiv Detail & Related papers (2023-11-24T04:12:50Z) - The Rise and Potential of Large Language Model Based Agents: A Survey [91.71061158000953]
Large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI).
We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents.
We explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation.
arXiv Detail & Related papers (2023-09-14T17:12:03Z) - AI Autonomy: Self-Initiated Open-World Continual Learning and Adaptation [16.96197233523911]
This paper proposes a framework for the research of building autonomous and continual learning enabled AI agents.
The key challenge is how to automate the process so that it is carried out continually on the agent's own initiative.
arXiv Detail & Related papers (2022-03-17T00:07:02Z)
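The Proposer-Agent-Evaluator (PAE) entry above describes a loop in which a context-aware proposer generates practice tasks and a success evaluation becomes the RL reward. A toy sketch of that loop is shown below; the function names, the skill scalar, and the update rule are all illustrative assumptions, not the paper's actual method:

```python
# Toy sketch of a propose -> attempt -> evaluate loop in the spirit of PAE.
# The scalar "skill" stands in for the agent's policy; real PAE uses RL on a
# foundation model agent.
import random

def propose_task(context):
    """Context-aware proposer: choose a practice task from the context."""
    return random.choice(context["available_tasks"])

def evaluate(task, skill):
    """Evaluator: task success becomes the scalar reward signal."""
    return 1.0 if skill >= task["difficulty"] else 0.0

def pae_round(context, skill, lr=0.2):
    """One round: propose a task, attempt it, and update on the reward."""
    task = propose_task(context)
    reward = evaluate(task, skill)
    # Stand-in for a policy update: failed attempts push skill upward.
    skill += lr * (1.0 - reward)
    return skill, reward

random.seed(0)
ctx = {"available_tasks": [{"difficulty": 0.5}, {"difficulty": 0.9}]}
skill = 0.0
for _ in range(20):
    skill, reward = pae_round(ctx, skill)
```

Because the update only fires on failures, the agent keeps practicing until it can solve the proposed tasks, mirroring autonomous skill discovery at a very small scale.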
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.