Insights from the Algonauts 2025 Winners
- URL: http://arxiv.org/abs/2508.10784v1
- Date: Thu, 14 Aug 2025 16:11:07 GMT
- Title: Insights from the Algonauts 2025 Winners
- Authors: Paul S. Scotti, Mihir Tripathy,
- Abstract summary: The Algonauts 2025 Challenge is a biennial challenge in computational neuroscience. Teams attempt to build models that predict human brain activity from carefully curated stimuli. The MedARC team placed 4th in the competition.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Algonauts 2025 Challenge just wrapped up a few weeks ago. It is a biennial challenge in computational neuroscience in which teams attempt to build models that predict human brain activity from carefully curated stimuli. Previous editions (2019, 2021, 2023) focused on still images and short videos; the 2025 edition, which concluded in late July, pushed the field further by using long, multimodal movies. Teams were tasked with predicting fMRI responses across 1,000 whole-brain parcels for each of four participants who were scanned while watching nearly 80 hours of naturalistic movie stimuli. These recordings came from the CNeuroMod project and included 65 hours of training data: about 55 hours of Friends (seasons 1-6) plus four feature films (The Bourne Supremacy, Hidden Figures, Life, and The Wolf of Wall Street). The remaining data were used for validation: Season 7 of Friends served as an in-distribution test, and the final winners of the Challenge were those who could best predict brain activity for six films in a held-out out-of-distribution (OOD) set. The winners were just announced and the top team reports are now publicly available. As members of the MedARC team, which placed 4th in the competition, we reflect on the approaches that worked, what they reveal about the current state of brain encoding, and what might come next.
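The encoding task described above can be sketched with a simple parcel-wise ridge-regression baseline: stimulus features (in practice, embeddings from pretrained audio/video/language models) are linearly mapped to fMRI responses across parcels, and predictions are scored by the mean Pearson correlation across parcels. This is an illustrative toy, not the MedARC pipeline or the official scoring code; all dimensions and data below are synthetic stand-ins for real embeddings and CNeuroMod recordings.

```python
import numpy as np

# Synthetic stand-ins for real data: X = stimulus features per fMRI
# timepoint, Y = responses for 1,000 brain parcels (hypothetical sizes).
rng = np.random.default_rng(0)
n_train, n_test, n_feat, n_parcels = 600, 200, 64, 1000

W_true = rng.normal(size=(n_feat, n_parcels))
X_train = rng.normal(size=(n_train, n_feat))
X_test = rng.normal(size=(n_test, n_feat))
Y_train = X_train @ W_true + rng.normal(scale=5.0, size=(n_train, n_parcels))
Y_test = X_test @ W_true + rng.normal(scale=5.0, size=(n_test, n_parcels))

def fit_ridge(X, Y, alpha=10.0):
    """Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Y,
    fitting all parcels at once as columns of Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

def mean_parcel_correlation(Y_true, Y_pred):
    """Challenge-style metric: Pearson r per parcel, averaged."""
    yt = Y_true - Y_true.mean(axis=0)
    yp = Y_pred - Y_pred.mean(axis=0)
    r = (yt * yp).sum(axis=0) / np.sqrt(
        (yt ** 2).sum(axis=0) * (yp ** 2).sum(axis=0)
    )
    return r.mean()

W = fit_ridge(X_train, Y_train)
Y_pred = X_test @ W
score = mean_parcel_correlation(Y_test, Y_pred)
```

In practice, strong entries replace the random features with frozen multimodal model embeddings, add temporal delays to account for the hemodynamic lag, and tune the ridge penalty per parcel; the linear-readout-plus-correlation skeleton, however, is common to most brain-encoding pipelines.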
Related papers
- SoccerNet 2025 Challenges Results
SoccerNet 2025 Challenges mark the fifth annual edition of the SoccerNet open effort, dedicated to advancing computer vision research in football video understanding. This year's challenges span four vision-based tasks: Team Ball Action Spotting, Monocular Depth Estimation, Multi-View Foul Recognition, and Game State Reconstruction. The report presents the results of each challenge, highlights the top-performing solutions, and provides insights into the progress made by the community.
arXiv Detail & Related papers (2025-08-26T16:37:07Z)
- NTIRE 2025 XGC Quality Assessment Challenge: Methods and Results
The NTIRE 2025 XGC Quality Assessment Challenge will be held in conjunction with the New Trends in Image Restoration and Enhancement Workshop (NTIRE) at CVPR 2025. The challenge is divided into three tracks: user-generated video, AI-generated video, and talking head. Each participating team in every track has proposed a method that outperforms the baseline, contributing to progress in all three tracks.
arXiv Detail & Related papers (2025-06-03T13:39:57Z)
- NTIRE 2025 Challenge on UGC Video Enhancement: Methods and Results
This paper presents an overview of the NTIRE 2025 Challenge on Video Enhancement. The challenge constructed a set of 150 user-generated content videos without reference ground truth. The goal of the participants was to develop an algorithm capable of improving the visual quality of such videos.
arXiv Detail & Related papers (2025-05-05T20:06:11Z)
- AIM 2024 Challenge on Video Saliency Prediction: Methods and Results
This paper reviews the Challenge on Video Saliency Prediction at AIM 2024.
The goal of the participants was to develop a method for predicting accurate saliency maps for the provided set of video sequences.
arXiv Detail & Related papers (2024-09-23T08:59:22Z)
- NTIRE 2024 Quality Assessment of AI-Generated Content Challenge
The challenge is divided into the image track and the video track.
The winning methods in both tracks have demonstrated superior prediction performance on AIGC.
arXiv Detail & Related papers (2024-04-25T15:36:18Z)
- The Algonauts Project 2023 Challenge: UARK-UAlbany Team Solution
This work presents our solutions to the Algonauts Project 2023 Challenge.
The primary objective of the challenge is to employ computational models to predict brain responses.
We constructed an image-based brain encoder through a two-step training process to tackle this challenge.
arXiv Detail & Related papers (2023-08-01T03:46:59Z)
- NTIRE 2023 Quality Assessment of Video Enhancement Challenge
This paper reports on the NTIRE 2023 Quality Assessment of Video Enhancement Challenge.
The challenge addresses a major problem in the field of video processing: video quality assessment (VQA) for enhanced videos.
The challenge has a total of 167 registered participants.
arXiv Detail & Related papers (2023-07-19T02:33:42Z)
- The Algonauts Project 2021 Challenge: How the Human Brain Makes Sense of a World in Motion
We release the 2021 edition of the Algonauts Project Challenge: How the Human Brain Makes Sense of a World in Motion.
We provide whole-brain fMRI responses recorded while 10 human participants viewed a rich set of over 1,000 short video clips depicting everyday events.
The goal of the challenge is to accurately predict brain responses to these video clips.
arXiv Detail & Related papers (2021-04-28T11:38:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.