Animal-AI 3: What's New & Why You Should Care
- URL: http://arxiv.org/abs/2312.11414v1
- Date: Mon, 18 Dec 2023 18:18:10 GMT
- Title: Animal-AI 3: What's New & Why You Should Care
- Authors: Konstantinos Voudouris, Ibrahim Alhas, Wout Schellaert, Matthew
Crosby, Joel Holmes, John Burden, Niharika Chaubey, Niall Donnelly,
Matishalin Patel, Marta Halina, José Hernández-Orallo, Lucy G. Cheke
- Abstract summary: We present Animal-AI 3, the latest version of the environment.
New features include interactive buttons, reward dispensers, and player notifications.
This paper serves as a stand-alone document that motivates, describes, and demonstrates Animal-AI 3 for the end user.
- Score: 9.351866730254848
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The Animal-AI Environment is a unique game-based research platform designed
to serve both the artificial intelligence and cognitive science research
communities. In this paper, we present Animal-AI 3, the latest version of the
environment, outlining several major new features that make the game more
engaging for humans and more complex for AI systems. New features include
interactive buttons, reward dispensers, and player notifications, as well as an
overhaul of the environment's graphics and processing for significant increases
in agent training time and quality of the human player experience. We provide
detailed guidance on how to build computational and behavioural experiments
with Animal-AI 3. We present results from a series of agents, including the
state-of-the-art Deep Reinforcement Learning agent (dreamer-v3), on newly
designed tests and the Animal-AI Testbed of 900 tasks inspired by research in
comparative psychology. Animal-AI 3 is designed to facilitate collaboration
between the cognitive sciences and artificial intelligence. This paper serves
as a stand-alone document that motivates, describes, and demonstrates Animal-AI
3 for the end user.
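The evaluation setup the abstract describes (episodic tasks with a Gym-style agent-environment loop, scored per episode across a testbed) can be sketched as follows. This is an illustrative stand-in, not the actual `animalai` package API: the stub environment, its observation format, and all names here are assumptions made so the experiment loop is runnable on its own.

```python
import random


class StubArena:
    """Minimal stand-in for an Animal-AI-style episodic task.

    The real Animal-AI 3 environment is Unity-based and exposes a
    Gym-like Python interface; this stub only mimics a reset/step
    contract so the experiment loop below runs self-contained.
    """

    def __init__(self, max_steps=50, seed=0):
        self.max_steps = max_steps
        self.rng = random.Random(seed)
        self.t = 0

    def reset(self):
        self.t = 0
        return {"camera": [[0.0] * 4] * 4}  # placeholder pixel observation

    def step(self, action):
        self.t += 1
        # Sparse reward: occasionally pays off for a (hypothetical) "forward" action.
        reward = 1.0 if action == 2 and self.rng.random() < 0.1 else 0.0
        done = self.t >= self.max_steps
        return {"camera": [[0.0] * 4] * 4}, reward, done


def run_episodes(env, n_episodes=3, n_actions=3, seed=1):
    """Run a random-policy baseline and collect per-episode returns."""
    rng = random.Random(seed)
    returns = []
    for _ in range(n_episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            obs, reward, done = env.step(rng.randrange(n_actions))
            total += reward
        returns.append(total)
    return returns


returns = run_episodes(StubArena())
print(len(returns))  # one return per episode
```

A behavioural experiment over a testbed of tasks would repeat this loop per arena configuration and compare the resulting per-episode returns across agents.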
Related papers
- EgoPet: Egomotion and Interaction Data from an Animal's Perspective [82.7192364237065]
We introduce a dataset of pet egomotion imagery with diverse examples of simultaneous egomotion and multi-agent interaction.
EgoPet offers a radically distinct perspective from existing egocentric datasets of humans or vehicles.
We define two in-domain benchmark tasks that capture animal behavior, and a third benchmark to assess the utility of EgoPet as a pretraining resource for robotic quadruped locomotion.
arXiv Detail & Related papers (2024-04-15T17:59:47Z) - Scaling Instructable Agents Across Many Simulated Worlds [71.1284502230496]
Our goal is to develop an agent that can accomplish anything a human can do in any simulated 3D environment.
Our approach focuses on language-driven generality while imposing minimal assumptions.
Our agents interact with environments in real-time using a generic, human-like interface.
arXiv Detail & Related papers (2024-03-13T17:50:32Z) - The Ink Splotch Effect: A Case Study on ChatGPT as a Co-Creative Game
Designer [2.778721019132512]
This paper studies how large language models (LLMs) can act as effective, high-level creative collaborators and "muses" for game design.
Our goal is to determine whether AI-assistance can improve, hinder, or provide an alternative quality to games when compared to the creative intents implemented by human designers.
arXiv Detail & Related papers (2024-03-04T20:14:38Z) - DIAMBRA Arena: a New Reinforcement Learning Platform for Research and
Experimentation [91.3755431537592]
This work presents DIAMBRA Arena, a new platform for reinforcement learning research and experimentation.
It features a collection of high-quality environments exposing a Python API fully compliant with OpenAI Gym standard.
They are episodic tasks with discrete actions and observations composed of raw pixels plus additional numerical values.
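The observation layout described above (raw pixels plus additional numerical values, with discrete actions) follows the classic OpenAI Gym step contract. A minimal self-contained sketch, in which the environment class, action-space size, and field names are all assumed for illustration rather than taken from DIAMBRA Arena's real API:

```python
import random


class FightingEnvStub:
    """Illustrative Gym-style environment with a composite observation:
    a raw pixel frame plus scalar game values (e.g. health bars), as
    DIAMBRA Arena's environments are described. Not the real API.
    """

    N_ACTIONS = 6  # size of the discrete action space (assumed)

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.steps_left = 0

    def reset(self):
        self.steps_left = 10
        return self._obs()

    def step(self, action):
        assert 0 <= action < self.N_ACTIONS
        self.steps_left -= 1
        reward = self.rng.uniform(-1.0, 1.0)
        done = self.steps_left == 0
        return self._obs(), reward, done, {}  # classic Gym 4-tuple

    def _obs(self):
        frame = [[0] * 8 for _ in range(8)]  # placeholder pixel grid
        values = {"own_health": 1.0, "opp_health": 1.0}
        return {"frame": frame, "values": values}


env = FightingEnvStub()
obs = env.reset()
print(sorted(obs))  # observation keys: frame and values
```

Keeping pixels and auxiliary scalars in one dict-valued observation is a common pattern for Gym-compliant environments whose state is not purely visual.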
arXiv Detail & Related papers (2022-10-19T14:39:10Z) - CCPT: Automatic Gameplay Testing and Validation with
Curiosity-Conditioned Proximal Trajectories [65.35714948506032]
The Curiosity-Conditioned Proximal Trajectories (CCPT) method combines curiosity and imitation learning to train agents to explore.
We show how CCPT can explore complex environments, discover gameplay issues and design oversights in the process, and recognize and highlight them directly to game designers.
arXiv Detail & Related papers (2022-02-21T09:08:33Z) - Toward Human-Level Artificial Intelligence [2.312671485058239]
The term AI is used with a broad meaning, and HLAI is not clearly defined.
I claim that the essence of Human-Level Intelligence is the capability to learn from others' experiences via language.
I propose a cognitive architecture of HLAI called Modulated Heterarchical Prediction Memory (mHPM).
arXiv Detail & Related papers (2021-08-09T03:39:39Z) - Agents that Listen: High-Throughput Reinforcement Learning with Multiple
Sensory Systems [6.952659395337689]
We introduce a new version of the VizDoom simulator to create a highly efficient learning environment that provides raw audio observations.
We train our agent to play the full game of Doom and find that it can consistently defeat a traditional vision-based adversary.
arXiv Detail & Related papers (2021-07-05T18:00:50Z) - The MineRL BASALT Competition on Learning from Human Feedback [58.17897225617566]
The MineRL BASALT competition aims to spur forward research on this important class of techniques.
We design a suite of four tasks in Minecraft for which we expect it will be hard to write down hardcoded reward functions.
We provide a dataset of human demonstrations on each of the four tasks, as well as an imitation learning baseline.
arXiv Detail & Related papers (2021-07-05T12:18:17Z) - Player-AI Interaction: What Neural Network Games Reveal About AI as Play [14.63311356668699]
This paper argues that games are an ideal domain for studying and experimenting with how humans interact with AI.
Through a systematic survey of neural network games, we identified the dominant interaction metaphors and AI interaction patterns.
Our work suggests that game and UX designers should consider flow to structure the learning curve of human-AI interaction.
arXiv Detail & Related papers (2021-01-15T17:07:03Z) - Learning Affordance Landscapes for Interaction Exploration in 3D
Environments [101.90004767771897]
Embodied agents must be able to master how their environment works.
We introduce a reinforcement learning approach for exploration for interaction.
We demonstrate our idea with AI2-iTHOR.
arXiv Detail & Related papers (2020-08-21T00:29:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.