The Algonauts Project 2021 Challenge: How the Human Brain Makes Sense of
a World in Motion
- URL: http://arxiv.org/abs/2104.13714v1
- Date: Wed, 28 Apr 2021 11:38:31 GMT
- Authors: R.M. Cichy, K. Dwivedi, B. Lahner, A. Lascelles, P. Iamshchinina, M.
Graumann, A. Andonian, N.A.R. Murty, K. Kay, G. Roig, A. Oliva
- Abstract summary: We release the 2021 edition of the Algonauts Project Challenge: How the Human Brain Makes Sense of a World in Motion.
We provide whole-brain fMRI responses recorded while 10 human participants viewed a rich set of over 1,000 short video clips depicting everyday events.
The goal of the challenge is to accurately predict brain responses to these video clips.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The sciences of natural and artificial intelligence are fundamentally
connected. Brain-inspired, human-engineered AI models are now the standard for
predicting human brain responses during vision, and conversely, the brain
continues to inspire invention in AI. To promote even deeper connections
between these fields, we here release the 2021 edition of the Algonauts Project
Challenge: How the Human Brain Makes Sense of a World in Motion
(http://algonauts.csail.mit.edu/). We provide whole-brain fMRI responses
recorded while 10 human participants viewed a rich set of over 1,000 short
video clips depicting everyday events. The goal of the challenge is to
accurately predict brain responses to these video clips. The format of our
challenge ensures rapid development, makes results directly comparable and
transparent, and is open to all. In this way it facilitates interdisciplinary
collaboration towards a common goal of understanding visual intelligence. The
2021 Algonauts Project is conducted in collaboration with the Cognitive
Computational Neuroscience (CCN) conference.
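The challenge's core task, predicting fMRI responses to video clips, is commonly approached with voxelwise linear encoding models. The sketch below illustrates that pipeline on synthetic data; the feature dimensions, voxel counts, ridge penalty, and scoring by mean voxelwise correlation are illustrative assumptions for this example, not the challenge's actual data or evaluation protocol.

```python
# Minimal sketch of a voxelwise encoding model: map stimulus features
# (e.g. deep-network activations of video frames) to fMRI responses with
# ridge regression. All data here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)

n_train, n_test = 800, 200       # number of clips (illustrative split)
n_features, n_voxels = 512, 100  # feature dim -> voxel count (illustrative)

# Synthetic "features" and a hidden linear mapping to "fMRI" responses
X = rng.standard_normal((n_train + n_test, n_features))
W_true = rng.standard_normal((n_features, n_voxels))
Y = X @ W_true + 0.1 * rng.standard_normal((n_train + n_test, n_voxels))

X_tr, X_te = X[:n_train], X[n_train:]
Y_tr, Y_te = Y[:n_train], Y[n_train:]

# Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Y
alpha = 1.0
W = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(n_features), X_tr.T @ Y_tr)
Y_pred = X_te @ W

def voxel_corr(a, b):
    """Pearson correlation between columns (voxels) of a and b."""
    a = (a - a.mean(0)) / a.std(0)
    b = (b - b.mean(0)) / b.std(0)
    return (a * b).mean(0)

# One common summary score: mean correlation across voxels
score = voxel_corr(Y_te, Y_pred).mean()
print(f"mean voxelwise correlation: {score:.3f}")
```

In practice, entries to such challenges tune the ridge penalty per voxel via cross-validation and draw features from pretrained vision models rather than random matrices; the structure of the fit and the correlation-based scoring are the same.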
Related papers
- AI for Mathematics: A Cognitive Science Perspective [86.02346372284292]
Mathematics is one of the most powerful conceptual systems developed and used by the human species.
Rapid progress in AI, particularly propelled by advances in large language models (LLMs), has sparked renewed, widespread interest in building such systems.
arXiv Detail & Related papers (2023-10-19T02:00:31Z)
- Digital twin brain: a bridge between biological intelligence and artificial intelligence [12.55159053727258]
We propose the Digital Twin Brain (DTB) as a transformative platform that bridges the gap between biological and artificial intelligence.
The DTB consists of three core elements: the brain structure that is fundamental to the twinning process, bottom-layer models to generate brain functions, and its wide spectrum of applications.
arXiv Detail & Related papers (2023-08-03T03:36:22Z)
- The Algonauts Project 2023 Challenge: UARK-UAlbany Team Solution [21.714597774964194]
This work presents our solutions to the Algonauts Project 2023 Challenge.
The primary objective of the challenge is to use computational models to predict brain responses.
We constructed an image-based brain encoder through a two-step training process to tackle this challenge.
arXiv Detail & Related papers (2023-08-01T03:46:59Z)
- Deep Neural Networks and Brain Alignment: Brain Encoding and Decoding (Survey) [9.14580723964253]
Can we obtain insights about the brain using AI models?
How is the information in deep learning models related to brain recordings?
Decoding models solve the inverse problem of reconstructing stimuli given the fMRI recordings.
Inspired by the effectiveness of deep learning models for natural language processing, computer vision, and speech, several neural encoding and decoding models have been recently proposed.
arXiv Detail & Related papers (2023-07-17T06:54:36Z)
- Mindstorms in Natural Language-Based Societies of Mind [110.05229611910478]
Minsky's "society of mind" and Schmidhuber's "learning to think" inspire diverse societies of large multimodal neural networks (NNs).
Recent implementations of NN-based societies of minds consist of large language models (LLMs) and other NN-based experts communicating through a natural language interface.
In these natural language-based societies of mind (NLSOMs), new agents -- all communicating through the same universal symbolic language -- are easily added in a modular fashion.
arXiv Detail & Related papers (2023-05-26T16:21:25Z)
- A Neurodiversity-Inspired Solver for the Abstraction & Reasoning Corpus (ARC) Using Visual Imagery and Program Synthesis [6.593059418464748]
We propose a new AI approach to core knowledge that combines visual representations inspired by human mental imagery abilities with program synthesis.
We demonstrate our system's performance on the very difficult Abstraction & Reasoning Corpus (ARC) challenge.
We share experimental results from publicly available ARC items as well as from our 4th-place finish on the private test set during the 2022 global ARCathon challenge.
arXiv Detail & Related papers (2023-02-18T21:30:44Z)
- The Algonauts Project 2023 Challenge: How the Human Brain Makes Sense of Natural Scenes [0.0]
We introduce the 2023 installment of the Algonauts Project challenge: How the Human Brain Makes Sense of Natural Scenes.
This installment prompts the fields of artificial and biological intelligence to come together towards building computational models of the visual brain.
The challenge is open to all and makes results directly comparable and transparent through a public leaderboard automatically updated after each submission.
arXiv Detail & Related papers (2023-01-09T08:27:36Z)
- Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs [77.88043871260466]
We show that one of today's largest language models lacks this kind of social intelligence out of the box.
We conclude that person-centric NLP approaches might be more effective towards neural Theory of Mind.
arXiv Detail & Related papers (2022-10-24T14:58:58Z)
- Decoding speech perception from non-invasive brain recordings [48.46819575538446]
We introduce a model trained with contrastive learning to decode self-supervised representations of perceived speech from non-invasive recordings.
Our model can identify, from 3 seconds of MEG signals, the corresponding speech segment with up to 41% accuracy out of more than 1,000 distinct possibilities.
arXiv Detail & Related papers (2022-08-25T10:01:43Z)
- IGLU 2022: Interactive Grounded Language Understanding in a Collaborative Environment at NeurIPS 2022 [63.07251290802841]
We propose IGLU: Interactive Grounded Language Understanding in a Collaborative Environment.
The primary goal of the competition is to approach the problem of how to develop interactive embodied agents.
This research challenge is naturally related, but not limited, to two fields of study that are highly relevant to the NeurIPS community.
arXiv Detail & Related papers (2022-05-27T06:12:48Z)
- NeurIPS 2021 Competition IGLU: Interactive Grounded Language Understanding in a Collaborative Environment [71.11505407453072]
We propose IGLU: Interactive Grounded Language Understanding in a Collaborative Environment.
The primary goal of the competition is to approach the problem of how to build interactive agents that learn to solve a task while provided with grounded natural language instructions in a collaborative environment.
This research challenge is naturally related, but not limited, to two fields of study that are highly relevant to the NeurIPS community: Natural Language Understanding and Generation (NLU/G) and Reinforcement Learning (RL).
arXiv Detail & Related papers (2021-10-13T07:13:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.