Articulated Animal AI: An Environment for Animal-like Cognition in a Limbed Agent
- URL: http://arxiv.org/abs/2410.09275v1
- Date: Fri, 11 Oct 2024 21:55:23 GMT
- Title: Articulated Animal AI: An Environment for Animal-like Cognition in a Limbed Agent
- Authors: Jeremy Lucas, Isabeau Prémont-Schwarz
- Abstract summary: Key improvements include the addition of agent limbs, enabling more complex behaviors and interactions with the environment that closely resemble real animal movements.
The testbench features an integrated curriculum training sequence and evaluation tools, eliminating the need for users to develop their own training programs.
- Score: 2.1976444142070393
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents the Articulated Animal AI Environment for Animal Cognition, an enhanced version of the previous AnimalAI Environment. Key improvements include the addition of agent limbs, enabling more complex behaviors and interactions with the environment that closely resemble real animal movements. The testbench features an integrated curriculum training sequence and evaluation tools, eliminating the need for users to develop their own training programs. Additionally, the tests and training procedures are randomized, which will improve the agent's generalization capabilities. These advancements significantly expand upon the original AnimalAI framework and will be used to evaluate agents on various aspects of animal cognition.
Related papers
- EgoPet: Egomotion and Interaction Data from an Animal's Perspective [82.7192364237065]
We introduce a dataset of pet egomotion imagery with diverse examples of simultaneous egomotion and multi-agent interaction.
EgoPet offers a radically distinct perspective from existing egocentric datasets of humans or vehicles.
We define two in-domain benchmark tasks that capture animal behavior, and a third benchmark to assess the utility of EgoPet as a pretraining resource to robotic quadruped locomotion.
arXiv Detail & Related papers (2024-04-15T17:59:47Z) - The Case for Animal-Friendly AI [0.0]
We develop a proof-of-concept Evaluation System for evaluating animal consideration in large language models (LLMs).
Preliminary results suggest that the outcomes of the tested models can be benchmarked regarding the consideration they give to animals.
This study serves as a step towards more useful and responsible AI systems that better recognize and respect the vital interests and perspectives of all sentient beings.
arXiv Detail & Related papers (2024-03-02T12:41:11Z) - Computer Vision for Primate Behavior Analysis in the Wild [61.08941894580172]
Video-based behavioral monitoring has great potential for transforming how we study animal cognition and behavior.
There is still a fairly large gap between the exciting prospects and what can actually be achieved in practice today.
arXiv Detail & Related papers (2024-01-29T18:59:56Z) - Aquarium: A Comprehensive Framework for Exploring Predator-Prey Dynamics through Multi-Agent Reinforcement Learning Algorithms [9.225703308176435]
Aquarium is a comprehensive Multi-Agent Reinforcement Learning environment for predator-prey interaction.
It features physics-based agent movement on a two-dimensional, edge-wrapping plane.
The agent-environment interaction (observations, actions, rewards) and the environment settings (agent speed, prey reproduction, predator starvation, and others) are fully customizable.
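The edge-wrapping (toroidal) plane described above reduces to simple modular arithmetic; the sketch below is illustrative only, and the names (`wrap_position`, `WORLD_SIZE`, `step`) are assumptions, not Aquarium's actual API:

```python
# Minimal sketch of agent movement on a 2D edge-wrapping (toroidal) plane.
# Names and the world size are illustrative, not taken from Aquarium itself.

WORLD_SIZE = 100.0  # side length of the square plane (assumed units)

def wrap_position(x: float, y: float, size: float = WORLD_SIZE) -> tuple[float, float]:
    """Wrap a coordinate pair so an agent leaving one edge re-enters on the opposite edge."""
    # Python's % always returns a non-negative result for a positive modulus,
    # so negative coordinates wrap correctly as well.
    return (x % size, y % size)

def step(pos: tuple[float, float], velocity: tuple[float, float], dt: float = 1.0):
    """Advance an agent by velocity * dt, then wrap onto the torus."""
    return wrap_position(pos[0] + velocity[0] * dt, pos[1] + velocity[1] * dt)
```

For example, `step((99.0, 0.0), (3.0, -2.0))` yields `(2.0, 98.0)`: the agent crosses the right edge and re-enters on the left while moving off the bottom edge onto the top.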
arXiv Detail & Related papers (2024-01-13T12:09:49Z) - The Animal-AI Environment: A Virtual Laboratory For Comparative Cognition and Artificial Intelligence Research [13.322270147627151]
The Animal-AI Environment is a game-based research platform designed to facilitate collaboration between the artificial intelligence and comparative cognition research communities.
New features include interactive buttons, reward dispensers, and player notifications.
We present results from a series of agents on newly designed tests and the Animal-AI Testbed of 900 tasks inspired by research in the field of comparative cognition.
arXiv Detail & Related papers (2023-12-18T18:18:10Z) - CNN-Based Action Recognition and Pose Estimation for Classifying Animal Behavior from Videos: A Survey [0.0]
Action recognition, classifying activities performed by one or more subjects in a trimmed video, forms the basis of many techniques.
Deep learning models for human action recognition have progressed over the last decade.
Interest in research that incorporates deep learning-based action recognition for animal behavior classification has increased in recent years.
arXiv Detail & Related papers (2023-01-15T20:54:44Z) - DIAMBRA Arena: a New Reinforcement Learning Platform for Research and Experimentation [91.3755431537592]
This work presents DIAMBRA Arena, a new platform for reinforcement learning research and experimentation.
It features a collection of high-quality environments exposing a Python API fully compliant with OpenAI Gym standard.
They are episodic tasks with discrete actions and observations composed of raw pixels plus additional numerical values.
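The episodic interaction loop of a Gym-compliant environment like those above follows the standard reset/step pattern; `ToyEnv` below is a self-contained stand-in stub, not DIAMBRA's actual API, used only to illustrate the interface shape (pixel observation plus extra numeric values, discrete actions, episode termination):

```python
# Sketch of the episodic loop exposed by any OpenAI-Gym-compliant environment.
# ToyEnv is an illustrative stub, not DIAMBRA's real environment class.
import random

class ToyEnv:
    """Minimal Gym-style environment: discrete actions, fixed-length episodes."""
    def __init__(self, episode_len: int = 5):
        self.episode_len = episode_len
        self.t = 0

    def reset(self):
        """Start a new episode and return the initial observation."""
        self.t = 0
        return {"pixels": [0] * 4, "extra": 0.0}

    def step(self, action: int):
        """Apply one discrete action; return the classic Gym 4-tuple."""
        self.t += 1
        obs = {"pixels": [random.randint(0, 255) for _ in range(4)],
               "extra": float(self.t)}
        reward = 1.0 if action == 1 else 0.0
        done = self.t >= self.episode_len
        return obs, reward, done, {}

env = ToyEnv()
obs = env.reset()
total = 0.0
done = False
while not done:
    obs, reward, done, info = env.step(action=1)  # trivial fixed policy
    total += reward
```

The same `reset`/`step` loop works unchanged against any environment that honors the Gym standard, which is what makes such platforms drop-in targets for existing RL training code.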
arXiv Detail & Related papers (2022-10-19T14:39:10Z) - The Introspective Agent: Interdependence of Strategy, Physiology, and Sensing for Embodied Agents [51.94554095091305]
We argue for an introspective agent, which considers its own abilities in the context of its environment.
Just as in nature, we hope to reframe strategy as one tool, among many, to succeed in an environment.
arXiv Detail & Related papers (2022-01-02T20:14:01Z) - AP-10K: A Benchmark for Animal Pose Estimation in the Wild [83.17759850662826]
We propose AP-10K, the first large-scale benchmark for general animal pose estimation.
AP-10K consists of 10,015 images collected and filtered from 23 animal families and 60 species.
Results provide sound empirical evidence on the superiority of learning from diverse animal species in terms of both accuracy and generalization ability.
arXiv Detail & Related papers (2021-08-28T10:23:34Z) - SyDog: A Synthetic Dog Dataset for Improved 2D Pose Estimation [3.411873646414169]
SyDog is a synthetic dataset of dogs containing ground truth pose and bounding box coordinates.
We demonstrate that pose estimation models trained on SyDog achieve better performance than models trained purely on real data.
arXiv Detail & Related papers (2021-07-31T14:34:40Z) - Cetacean Translation Initiative: a roadmap to deciphering the communication of sperm whales [97.41394631426678]
Recent research showed the promise of machine learning tools for analyzing acoustic communication in nonhuman species.
We outline the key elements required for the collection and processing of massive bioacoustic data of sperm whales.
The technological capabilities developed are likely to yield cross-applications and advancements in broader communities investigating non-human communication and animal behavioral research.
arXiv Detail & Related papers (2021-04-17T18:39:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.