Modular Object-Oriented Games: A Task Framework for Reinforcement
Learning, Psychology, and Neuroscience
- URL: http://arxiv.org/abs/2102.12616v1
- Date: Thu, 25 Feb 2021 01:17:03 GMT
- Title: Modular Object-Oriented Games: A Task Framework for Reinforcement
Learning, Psychology, and Neuroscience
- Authors: Nicholas Watters and Joshua Tenenbaum and Mehrdad Jazayeri
- Abstract summary: In recent years, trends towards studying simulated games have gained momentum in the fields of artificial intelligence, cognitive science, psychology, and neuroscience.
Here we introduce Modular Object-Oriented Games, a Python task framework that is lightweight, flexible, customizable, and designed for use by machine learning, psychology, and neurophysiology researchers.
- Score: 0.8594140167290096
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, trends towards studying simulated games have gained momentum
in the fields of artificial intelligence, cognitive science, psychology, and
neuroscience. The intersections of these fields have also grown recently, as
researchers increasingly study such games using both artificial agents and human
or animal subjects. However, implementing games can be a time-consuming
endeavor and may require a researcher to grapple with complex codebases that
are not easily customized. Furthermore, interdisciplinary researchers studying
some combination of artificial intelligence, human psychology, and animal
neurophysiology face additional challenges, because existing platforms are
designed for only one of these domains. Here we introduce Modular
Object-Oriented Games, a Python task framework that is lightweight, flexible,
customizable, and designed for use by machine learning, psychology, and
neurophysiology researchers.
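The abstract characterizes the framework only at a high level (lightweight, flexible, customizable, object-oriented), so the sketch below is a rough, hypothetical illustration of how such a task might be composed from swappable components: sprites, a physics rule, and a reward-defining task, wired together by an environment loop that could be driven either by an artificial agent or by a human or animal subject. All class and method names here are assumptions made for illustration; they are not the actual MOOG API.

# A minimal, hypothetical sketch of a modular object-oriented game task.
# Component names (Sprite, Physics, ReachTargetTask, Environment) are
# illustrative assumptions, not the actual MOOG interface.
from dataclasses import dataclass

@dataclass
class Sprite:
    """A single game object with position and velocity."""
    x: float
    y: float
    vx: float = 0.0
    vy: float = 0.0

class Physics:
    """Swappable component that advances sprite state by one time step."""
    def step(self, sprites):
        for s in sprites:
            s.x += s.vx
            s.y += s.vy

class ReachTargetTask:
    """Swappable component defining the reward: +1 when the agent reaches the target."""
    def reward(self, agent, target, radius=0.05):
        dist = ((agent.x - target.x) ** 2 + (agent.y - target.y) ** 2) ** 0.5
        return 1.0 if dist < radius else 0.0

class Environment:
    """Glue object: the same step loop can be driven by an RL policy,
    a human joystick, or an animal's behavioral interface."""
    def __init__(self, sprites, physics, task):
        self.sprites, self.physics, self.task = sprites, physics, task

    def step(self, action):
        agent, target = self.sprites
        agent.vx, agent.vy = action   # velocity command from agent or subject
        self.physics.step(self.sprites)
        return self.task.reward(agent, target)

env = Environment([Sprite(0.1, 0.1), Sprite(0.9, 0.9)], Physics(), ReachTargetTask())
print(env.step((0.05, 0.05)))   # 0.0 until the agent sprite reaches the target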
Related papers
- Deep Dive into Model-free Reinforcement Learning for Biological and Robotic Systems: Theory and Practice [17.598549532513122]
We present a concise exposition of the mathematical and algorithmic aspects of model-free reinforcement learning.
We use actor-critic methods as a tool for investigating the feedback control underlying animal and robotic behavior (a minimal actor-critic update is sketched after this list).
arXiv Detail & Related papers (2024-05-19T05:58:44Z) - A Review of Neuroscience-Inspired Machine Learning [58.72729525961739]
Bio-plausible credit assignment is compatible with practically any learning condition and is energy-efficient.
In this paper, we survey several vital algorithms that model bio-plausible rules of credit assignment in artificial neural networks.
We conclude by discussing the future challenges that will need to be addressed in order to make such algorithms more useful in practical applications.
arXiv Detail & Related papers (2024-02-16T18:05:09Z) - A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian
Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, an architecture that casts the Common Model of Cognition within Hebbian learning and free energy minimization.
arXiv Detail & Related papers (2023-10-14T23:28:48Z) - Games for Artificial Intelligence Research: A Review and Perspectives [4.44336371847479]
This paper reviews the games and game-based platforms for artificial intelligence research.
It provides guidance on matching particular types of artificial intelligence with suitable games for testing, and on matching particular needs in games with suitable artificial intelligence techniques.
arXiv Detail & Related papers (2023-04-26T03:42:31Z) - Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z) - Human-Level Reinforcement Learning through Theory-Based Modeling,
Exploration, and Planning [27.593497502386143]
Theory-Based Reinforcement Learning uses human-like intuitive theories to explore and model an environment.
We instantiate the approach in a video game playing agent called EMPA.
EMPA matches human learning efficiency on a suite of 90 Atari-style video games.
arXiv Detail & Related papers (2021-07-27T01:38:13Z) - Ten Quick Tips for Deep Learning in Biology [116.78436313026478]
Machine learning is concerned with the development and applications of algorithms that can recognize patterns in data and use them for predictive modeling.
Deep learning has become its own subfield of machine learning.
In the context of biological research, deep learning has been increasingly used to derive novel insights from high-dimensional biological data.
arXiv Detail & Related papers (2021-05-29T21:02:44Z) - From Motor Control to Team Play in Simulated Humanoid Football [56.86144022071756]
We train teams of physically simulated humanoid avatars to play football in a realistic virtual environment.
In a sequence of stages, players first learn to control a fully articulated body to perform realistic, human-like movements.
They then acquire mid-level football skills such as dribbling and shooting.
Finally, they develop awareness of others and play as a team, bridging the gap between low-level motor control at a timescale of milliseconds and coordinated team play over much longer timescales.
arXiv Detail & Related papers (2021-05-25T20:17:10Z) - Probing artificial neural networks: insights from neuroscience [6.7832320606111125]
Neuroscience has paved the way in using such models through numerous studies conducted in recent decades.
We argue that specific research goals play a paramount role when designing a probe and encourage future probing studies to be explicit in stating these goals.
arXiv Detail & Related papers (2021-04-16T16:13:23Z) - Teach me to play, gamer! Imitative learning in computer games via
linguistic description of complex phenomena and decision tree [55.41644538483948]
We present a new machine learning model for imitation learning based on the linguistic description of complex phenomena.
The method can be a good alternative to design and implement the behaviour of intelligent agents in video game development.
arXiv Detail & Related papers (2021-01-06T21:14:10Z) - Reinforcement Learning and its Connections with Neuroscience and
Psychology [0.0]
We review findings in both neuroscience and psychology that evidence reinforcement learning as a promising candidate for modeling learning and decision making in the brain.
We then discuss the implications of this observed relationship between RL, neuroscience and psychology and its role in advancing research in both AI and brain science.
arXiv Detail & Related papers (2020-06-25T04:29:15Z)
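The actor-critic methods mentioned in the model-free reinforcement learning entry above pair a critic (a learned value estimate) with an actor (a parameterized policy), both updated from the same temporal-difference error. Below is a minimal, self-contained NumPy sketch of a one-step actor-critic update on a toy chain world; the environment, hyperparameters, and variable names are illustrative assumptions and are not taken from any of the papers listed here.

import numpy as np

# Toy chain MDP: start at state 0, action 1 moves right, action 0 moves left,
# reward +1 on reaching the rightmost state.
n_states, n_actions = 5, 2
theta = np.zeros((n_states, n_actions))   # actor: softmax policy logits
v = np.zeros(n_states)                    # critic: state-value estimates
alpha_actor, alpha_critic, gamma = 0.1, 0.2, 0.99
rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def env_step(s, a):
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, r, s_next == n_states - 1

for episode in range(500):
    s, done = 0, False
    while not done:
        pi = softmax(theta[s])
        a = rng.choice(n_actions, p=pi)
        s_next, r, done = env_step(s, a)
        # TD error: the shared "feedback" signal for both critic and actor.
        td_error = r + (0.0 if done else gamma * v[s_next]) - v[s]
        v[s] += alpha_critic * td_error          # critic update
        grad_log_pi = -pi
        grad_log_pi[a] += 1.0                    # gradient of log softmax policy
        theta[s] += alpha_actor * td_error * grad_log_pi   # actor update
        s = s_next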