Developmental Curiosity and Social Interaction in Virtual Agents
- URL: http://arxiv.org/abs/2305.13396v1
- Date: Mon, 22 May 2023 18:17:07 GMT
- Title: Developmental Curiosity and Social Interaction in Virtual Agents
- Authors: Chris Doyle, Sarah Shader, Michelle Lau, Megumi Sano, Daniel L. K.
Yamins and Nick Haber
- Abstract summary: We create a virtual infant agent and place it in a developmentally-inspired 3D environment with no external rewards.
We test intrinsic reward functions that are similar to motivations that have been proposed to drive exploration in humans.
We find that learning a world model in the presence of an attentive caregiver helps the infant agent learn how to predict scenarios.
- Score: 2.8894038270224858
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Infants explore their complex physical and social environment in an organized
way. To gain insight into what intrinsic motivations may help structure this
exploration, we create a virtual infant agent and place it in a
developmentally-inspired 3D environment with no external rewards. The
environment has a virtual caregiver agent with the capability to interact
contingently with the infant agent in ways that resemble play. We test
intrinsic reward functions that are similar to motivations that have been
proposed to drive exploration in humans: surprise, uncertainty, novelty, and
learning progress. These generic reward functions lead the infant agent to
explore its environment and discover the contingencies that are embedded into
the caregiver agent. The reward functions that are proxies for novelty and
uncertainty are the most successful in generating diverse experiences and
activating the environment contingencies. We also find that learning a world
model in the presence of an attentive caregiver helps the infant agent learn
how to predict scenarios with challenging social and physical dynamics. Taken
together, our findings provide insight into how curiosity-like intrinsic
rewards and contingent social interaction lead to dynamic social behavior and
the creation of a robust predictive world model.
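The four intrinsic rewards named in the abstract (surprise, uncertainty, novelty, and learning progress) all have standard formulations in the curiosity literature. As an illustration only, not the authors' implementation (every function and model interface below is hypothetical), a minimal PyTorch sketch of common proxies:

```python
import torch
import torch.nn.functional as F

def surprise(world_model, obs, action, next_obs):
    # Surprise: the world model's prediction error on the observed transition.
    pred = world_model(obs, action)
    return F.mse_loss(pred, next_obs)

def uncertainty(ensemble, obs, action):
    # Uncertainty: disagreement among an ensemble of learned forward models.
    preds = torch.stack([m(obs, action) for m in ensemble])  # (K, D)
    return preds.var(dim=0).mean()

def novelty(predictor, frozen_random_net, next_obs):
    # Novelty (an RND-style proxy): error of a trained predictor against a
    # fixed, randomly initialized target network; rarely visited states
    # yield larger errors.
    with torch.no_grad():
        target = frozen_random_net(next_obs)
    return F.mse_loss(predictor(next_obs), target)

def learning_progress(loss_before, loss_after):
    # Learning progress: the drop in world-model loss after training on
    # recent experience; positive values mean the model is still improving.
    return loss_before - loss_after
```

In a reward-free setup like the one described, one of these scalars would stand in for the external reward at each timestep.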
Related papers
- Agent AI: Surveying the Horizons of Multimodal Interaction [83.18367129924997]
"Agent AI" is a class of interactive systems that can perceive visual stimuli, language inputs, and other environmentally-grounded data.
We envision a future where people can easily create any virtual reality or simulated scene and interact with agents embodied within the virtual environment.
arXiv Detail & Related papers (2024-01-07T19:11:18Z)
- Learning Goal-based Movement via Motivational-based Models in Cognitive Mobile Robots [58.720142291102135]
Humans have needs that motivate their behavior according to their intensity and context.
We also form preferences associated with each action's perceived pleasure, and these preferences can change over time.
This makes decision-making more complex, requiring learning to balance needs and preferences according to the context.
arXiv Detail & Related papers (2023-02-20T04:52:24Z)
- From Modelling to Understanding Children's Behaviour in the Context of Robotics and Social Artificial Intelligence [3.6017760602154576]
This workshop aims to promote a common ground among different disciplines such as developmental sciences, artificial intelligence and social robotics.
We will discuss cutting-edge research in the area of user modelling and adaptive systems for children.
arXiv Detail & Related papers (2022-10-20T10:58:42Z)
- Towards the Neuroevolution of Low-level Artificial General Intelligence [5.2611228017034435]
We argue that the search for Artificial General Intelligence (AGI) should start from a much lower level than human-level intelligence.
Our hypothesis is that learning occurs through sensory feedback when an agent acts in an environment.
We evaluate a method to evolve a biologically-inspired artificial neural network that learns from environment reactions.
arXiv Detail & Related papers (2022-07-27T15:30:50Z)
- Information is Power: Intrinsic Control via Information Capture [110.3143711650806]
We argue that a compact and general learning objective is to minimize the entropy of the agent's state visitation estimated using a latent state-space model.
This objective induces an agent to both gather information about its environment, corresponding to reducing uncertainty, and to gain control over its environment, corresponding to reducing the unpredictability of future world states.
arXiv Detail & Related papers (2021-12-07T18:50:42Z)
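The entropy-minimization objective stated in the entry above has a compact form. As a hedged reading in our own notation (the symbols below are not taken from the paper): if d^π is the policy's state visitation distribution and q_θ is the learned latent state-space model, then

```latex
% Since KL(d^pi || q_theta) >= 0, the cross-entropy under the learned model
% upper-bounds the visitation entropy:
\mathcal{H}\big(d^{\pi}(s)\big)
  \;\le\; -\,\mathbb{E}_{s \sim d^{\pi}}\big[\log q_{\theta}(s)\big]
```

Minimizing the tractable right-hand side drives down an upper bound on the state-visitation entropy, matching the entry's description of gathering information while reducing the unpredictability of future states.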
- Development of collective behavior in newborn artificial agents [0.0]
We use deep reinforcement learning and curiosity-driven learning to build newborn artificial agents that develop collective behavior.
Our agents learn collective behavior without external rewards, using only intrinsic motivation (curiosity) to drive learning.
This work bridges the divide between high-dimensional sensory inputs and collective action, resulting in a pixels-to-actions model of collective animal behavior.
arXiv Detail & Related papers (2021-11-06T03:46:31Z)
- Self-Supervised Exploration via Latent Bayesian Surprise [4.088019409160893]
In this work, we propose a curiosity-based bonus as an intrinsic reward for Reinforcement Learning.
We extensively evaluate our model by measuring the agent's performance in terms of environment exploration.
Our model is cheap and empirically shows state-of-the-art performance on several problems.
arXiv Detail & Related papers (2021-04-15T14:40:16Z)
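Bayesian surprise is conventionally formalized as the KL divergence between the belief over a latent variable after observing an outcome (the posterior) and before it (the prior). A minimal PyTorch sketch of such a bonus, assuming diagonal-Gaussian beliefs (the distributions and parameters below are illustrative placeholders, not the paper's model):

```python
import torch
import torch.distributions as D

# Placeholder beliefs over an 8-dimensional latent state z.
prior = D.Normal(torch.zeros(8), torch.ones(8))                  # p(z | s, a)
posterior = D.Normal(0.3 * torch.ones(8), 0.8 * torch.ones(8))   # p(z | s, a, s')

# Bayesian surprise: KL(posterior || prior), summed over latent dimensions.
# In a curiosity-driven agent, this scalar would serve as the intrinsic
# reward for the transition that produced the posterior update.
bonus = D.kl_divergence(posterior, prior).sum()
```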
- PHASE: PHysically-grounded Abstract Social Events for Machine Social Perception [50.551003004553806]
We create a dataset of physically-grounded abstract social events, PHASE, that resemble a wide range of real-life social interactions.
PHASE is validated with human experiments demonstrating that humans perceive rich interactions in the social events.
As a baseline model, we introduce a Bayesian inverse planning approach, SIMPLE, which outperforms state-of-the-art feed-forward neural networks.
arXiv Detail & Related papers (2021-03-02T18:44:57Z)
- AGENT: A Benchmark for Core Psychological Reasoning [60.35621718321559]
Intuitive psychology is the ability to reason about hidden mental variables that drive observable actions.
Despite recent interest in machine agents that reason about other agents, it is not clear if such agents learn or hold the core psychology principles that drive human reasoning.
We present a benchmark consisting of procedurally generated 3D animations, AGENT, structured around four scenarios.
arXiv Detail & Related papers (2021-02-24T14:58:23Z)
- Imitating Interactive Intelligence [24.95842455898523]
We study how to design artificial agents that can interact naturally with humans, using a virtual environment as a simplified setting.
To build agents that can robustly interact with humans, we would ideally train them while they interact with humans.
We use ideas from inverse reinforcement learning to reduce the disparities between human-human and agent-agent interactive behaviour.
arXiv Detail & Related papers (2020-12-10T13:55:47Z)
- Learning Affordance Landscapes for Interaction Exploration in 3D Environments [101.90004767771897]
Embodied agents must be able to master how their environment works.
We introduce a reinforcement learning approach for exploration for interaction.
We demonstrate our idea with AI2-iTHOR.
arXiv Detail & Related papers (2020-08-21T00:29:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.