Self-Initiated Open World Learning for Autonomous AI Agents
- URL: http://arxiv.org/abs/2110.11385v3
- Date: Thu, 29 Feb 2024 04:50:25 GMT
- Title: Self-Initiated Open World Learning for Autonomous AI Agents
- Authors: Bing Liu, Eric Robertson, Scott Grigsby, Sahisnu Mazumder
- Abstract summary: As more and more AI agents are used in practice, it is time to think about how to make these agents fully autonomous.
This paper proposes a theoretical framework for this learning paradigm to promote research on building Self-initiated Open world Learning agents.
- Score: 16.41396764793912
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As more and more AI agents are used in practice, it is time to think about
how to make these agents fully autonomous, so that they can learn by themselves
in a self-motivated and self-supervised manner rather than being retrained
periodically at the initiative of human engineers using expanded training data.
As the real world is an open environment with unknowns or novelties, detecting
novelties or unknowns, characterizing them, accommodating or adapting to them,
gathering ground-truth training data, and incrementally learning the
unknowns/novelties are critical to making the agent increasingly knowledgeable
and powerful over time. The key challenge is how to automate the process so
that it is carried out on the agent's own initiative and through its own
interactions with humans and the environment. Since an AI agent usually has a
performance task, characterizing each novelty becomes critical and necessary so
that the agent can formulate an appropriate response to adapt its behavior to
accommodate the novelty and to learn from it to improve the agent's adaptation
capability and task performance. This process continues without termination.
This paper proposes a theoretical framework for this learning paradigm to
promote research on building Self-initiated Open world Learning (SOL) agents.
An example SOL agent is also described.
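The detect–characterize–accommodate–learn cycle described in the abstract can be sketched as a minimal continual loop. All class and method names below are illustrative assumptions, not the authors' implementation; a simple set-membership check stands in for a real learned novelty detector.

```python
class SOLAgent:
    """Minimal sketch of a Self-initiated Open world Learning (SOL) loop.

    Names and logic are hypothetical placeholders for the stages named in
    the abstract: novelty detection, characterization, accommodation, and
    incremental learning from gathered ground truth.
    """

    def __init__(self, known_classes):
        self.known = set(known_classes)
        self.training_data = []  # ground-truth examples gathered over time

    def detect_novelty(self, observation):
        # A real agent would use a learned out-of-distribution score;
        # here, any label outside the known set counts as novel.
        return observation["label"] not in self.known

    def characterize(self, observation):
        # Characterize the novelty so an appropriate response can be
        # formulated, e.g. by querying a human or probing the environment.
        return {"label": observation["label"], "source": "environment"}

    def accommodate(self, novelty):
        # Adapt behavior to the novelty (placeholder policy update).
        return f"adapted-to-{novelty['label']}"

    def learn(self, novelty, observation):
        # Gather ground-truth data and incrementally learn the new class.
        self.training_data.append(observation)
        self.known.add(novelty["label"])

    def step(self, observation):
        """One iteration of the never-terminating SOL cycle."""
        if self.detect_novelty(observation):
            novelty = self.characterize(observation)
            response = self.accommodate(novelty)
            self.learn(novelty, observation)
            return response
        return "routine"


agent = SOLAgent(known_classes={"cat", "dog"})
print(agent.step({"label": "dog"}))    # known class: routine behavior
print(agent.step({"label": "drone"}))  # novel: detect, adapt, learn
print(agent.step({"label": "drone"}))  # now known: routine again
```

Because `learn` folds each characterized novelty back into the agent's known set, the loop never terminates and the agent grows more capable with every interaction, which is the core of the proposed paradigm.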
Related papers
- Symbolic Learning Enables Self-Evolving Agents [55.625275970720374]
We introduce agent symbolic learning, a systematic framework that enables language agents to optimize themselves on their own.
Agent symbolic learning is designed to optimize the symbolic network within language agents by mimicking two fundamental algorithms in connectionist learning.
We conduct proof-of-concept experiments on both standard benchmarks and complex real-world tasks.
arXiv Detail & Related papers (2024-06-26T17:59:18Z) - Building Artificial Intelligence with Creative Agency and Self-hood [0.0]
This paper is an invited layperson summary, written for The Academic, of the paper referenced on its last page.
This paper is an invited layperson summary for The Academic of the paper referenced on the last page.
We summarize how the formal framework of autocatalytic networks offers a means of modeling the origins of self-organizing, self-sustaining structures.
arXiv Detail & Related papers (2024-06-09T22:28:11Z) - Pangu-Agent: A Fine-Tunable Generalist Agent with Structured Reasoning [50.47568731994238]
A key method for creating Artificial Intelligence (AI) agents is Reinforcement Learning (RL).
This paper presents a general framework for integrating and learning structured reasoning into AI agents' policies.
arXiv Detail & Related papers (2023-12-22T17:57:57Z) - Efficient Open-world Reinforcement Learning via Knowledge Distillation and Autonomous Rule Discovery [5.680463564655267]
We propose the rule-driven deep Q-learning agent (RDQ) as one possible implementation of the framework.
We show that RDQ successfully extracts task-specific rules as it interacts with the world.
In experiments, we show that the RDQ agent is significantly more resilient to novelties than the baseline agents.
arXiv Detail & Related papers (2023-11-24T04:12:50Z) - The Rise and Potential of Large Language Model Based Agents: A Survey [91.71061158000953]
Large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI).
We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents.
We explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation.
arXiv Detail & Related papers (2023-09-14T17:12:03Z) - Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z) - AI Autonomy: Self-Initiated Open-World Continual Learning and Adaptation [16.96197233523911]
This paper proposes a framework for research on building autonomous, continual-learning-enabled AI agents.
The key challenge is how to automate the process so that it is carried out continually on the agent's own initiative.
arXiv Detail & Related papers (2022-03-17T00:07:02Z) - Learning Latent Representations to Influence Multi-Agent Interaction [65.44092264843538]
We propose a reinforcement learning-based framework for learning latent representations of an agent's policy.
We show that our approach outperforms the alternatives and learns to influence the other agent.
arXiv Detail & Related papers (2020-11-12T19:04:26Z) - Emergent Social Learning via Multi-agent Reinforcement Learning [91.57176641192771]
Social learning is a key component of human and animal intelligence.
This paper investigates whether independent reinforcement learning agents can learn to use social learning to improve their performance.
arXiv Detail & Related papers (2020-10-01T17:54:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.