The Algonauts Project 2023 Challenge: UARK-UAlbany Team Solution
- URL: http://arxiv.org/abs/2308.00262v1
- Date: Tue, 1 Aug 2023 03:46:59 GMT
- Title: The Algonauts Project 2023 Challenge: UARK-UAlbany Team Solution
- Authors: Xuan-Bac Nguyen, Xudong Liu, Xin Li, Khoa Luu
- Abstract summary: This work presents our solutions to the Algonauts Project 2023 Challenge.
The primary objective of the challenge is to use computational models to predict brain responses.
We constructed an image-based brain encoder through a two-step training process to tackle this challenge.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work presents our solutions to the Algonauts Project 2023 Challenge. The primary objective of the challenge is to build computational models that predict brain responses recorded while participants observed intricate natural visual scenes. Predictions are made across the entire visual brain, the region where the most reliable responses to images have been observed. To tackle this challenge, we constructed an image-based brain encoder through a two-step training process. First, we pretrained an encoder using data from all subjects. Next, we fine-tuned the encoder for each individual subject. Each step employed different training strategies, such as different loss functions and objectives, to introduce diversity. Our final solution is an ensemble of multiple unique encoders. The code is available at
https://github.com/uark-cviu/Algonauts2023
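The two-step recipe in the abstract (a shared pretrained encoder, per-subject fine-tuning, and a final ensemble of diverse encoders) can be sketched in miniature. This is a minimal NumPy illustration built on loudly hypothetical assumptions: linear ridge encoders, toy dimensions, and varying regularization standing in for the "different training strategies". It is not the authors' actual deep model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions; the real challenge maps image features to
# tens of thousands of fMRI voxels per subject.
n_subjects, n_samples, n_features, n_voxels = 3, 64, 16, 8
X = rng.normal(size=(n_subjects, n_samples, n_features))  # image features
Y = rng.normal(size=(n_subjects, n_samples, n_voxels))    # fMRI responses

def fit_ridge(X, Y, lam):
    """Closed-form linear encoder mapping features to voxel responses."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# Step 1: pretrain a shared encoder on data pooled across all subjects.
W_shared = fit_ridge(X.reshape(-1, n_features), Y.reshape(-1, n_voxels), lam=1.0)

# Step 2: fine-tune per subject, starting from the shared weights. Varying
# `lam` stands in for the different objectives that create ensemble diversity.
def finetune(Xs, Ys, W0, lam, alpha=0.5):
    return alpha * W0 + (1 - alpha) * fit_ridge(Xs, Ys, lam)

subject = 0
ensemble = [finetune(X[subject], Y[subject], W_shared, lam)
            for lam in (0.1, 1.0, 10.0)]

# Final prediction: average the unique encoders' outputs.
x_new = rng.normal(size=(1, n_features))
pred = np.mean([x_new @ W for W in ensemble], axis=0)
print(pred.shape)  # (1, 8)
```

Here `alpha` interpolates between the shared and subject-specific solutions; in the paper, diversity comes from different loss functions and objectives rather than from a regularization sweep.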
Related papers
- Toward Generalizing Visual Brain Decoding to Unseen Subjects
We first consolidate an image-fMRI dataset consisting of stimulus-image and fMRI-response pairs, involving 177 subjects in the movie-viewing task of the Human Connectome Project (HCP).
We then present a learning paradigm that applies uniform processing across all subjects, instead of employing different network heads or tokenizers for individuals as in previous methods.
Our findings reveal the inherent similarities in brain activities across individuals.
arXiv Detail & Related papers (2024-10-18T13:04:35Z)
- V3Det Challenge 2024 on Vast Vocabulary and Open Vocabulary Object Detection: Methods and Results
The V3Det Challenge 2024 aims to push the boundaries of object detection research.
The challenge consists of two tracks: Vast Vocabulary Object Detection and Open Vocabulary Object Detection.
We aim to inspire future research directions in vast vocabulary and open-vocabulary object detection.
arXiv Detail & Related papers (2024-06-17T16:58:51Z)
- Wills Aligner: A Robust Multi-Subject Brain Representation Learner
We introduce Wills Aligner, a robust multi-subject brain representation learner.
Wills Aligner initially aligns different subjects' brains at the anatomical level.
It incorporates a mixture of brain experts to learn individual cognition patterns.
arXiv Detail & Related papers (2024-04-20T06:01:09Z)
- The Brain Tumor Segmentation (BraTS) Challenge: Local Synthesis of Healthy Brain Tissue via Inpainting
For brain tumor patients, the image acquisition time series typically starts with an already pathological scan.
Many algorithms are designed to analyze healthy brains and provide no guarantee for images featuring lesions.
Examples include, but are not limited to, algorithms for brain anatomy parcellation, tissue segmentation, and brain extraction.
Here, the participants explore inpainting techniques to synthesize healthy brain scans from lesioned ones.
arXiv Detail & Related papers (2023-05-15T20:17:03Z)
- A Study of Autoregressive Decoders for Multi-Tasking in Computer Vision
We take a close look at autoregressive decoders for multi-task learning in multimodal computer vision.
A key finding is that a small decoder learned on top of a frozen pretrained encoder works surprisingly well.
It can be seen as teaching a decoder to interact with a pretrained vision model via natural language.
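The frozen-encoder finding above can be illustrated with a deliberately simple sketch: fix a feature encoder and train only a small decoder on top of it. Everything here is a hypothetical stand-in (a random-projection encoder, a linear decoder fit by least squares, synthetic targets); the paper's actual setup pairs a pretrained vision model with a small autoregressive text decoder.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical frozen encoder: fixed random projection + tanh, never updated.
def frozen_encoder(x, W_enc):
    return np.tanh(x @ W_enc)

d_in, d_feat, d_out, n = 8, 32, 4, 256
W_enc = rng.normal(size=(d_in, d_feat))   # "pretrained and frozen" stand-in
X = rng.normal(size=(n, d_in))            # toy inputs
W_true = rng.normal(size=(d_feat, d_out))
Y = frozen_encoder(X, W_enc) @ W_true     # synthetic targets

# Train only the small decoder on top of the frozen features.
F = frozen_encoder(X, W_enc)
W_dec, *_ = np.linalg.lstsq(F, Y, rcond=None)

err = np.abs(F @ W_dec - Y).max()         # near zero: the decoder suffices
```

When the frozen features are rich enough, a lightweight decoder trained on top recovers the target mapping, which is the intuition behind the paper's result.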
arXiv Detail & Related papers (2023-03-30T13:42:58Z)
- Decoding speech perception from non-invasive brain recordings
We introduce a model trained with contrastive learning to decode self-supervised representations of perceived speech from non-invasive recordings.
Our model can identify, from 3 seconds of MEG signals, the corresponding speech segment with up to 41% accuracy out of more than 1,000 distinct possibilities.
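Decoding of this kind is a retrieval problem: pick, from a bank of candidate speech segments, the one whose representation is most similar to the brain embedding. The sketch below uses simulated embeddings and cosine similarity as hypothetical stand-ins; in the paper the representations come from a contrastively trained brain module and a self-supervised speech model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical shared embedding space: 1,000 candidate speech segments,
# and brain embeddings that are noisy versions of the true segments.
n_candidates, dim, n_trials = 1000, 32, 200
speech_bank = rng.normal(size=(n_candidates, dim))
true_idx = rng.integers(0, n_candidates, size=n_trials)
brain = speech_bank[true_idx] + 0.5 * rng.normal(size=(n_trials, dim))

def cosine_sim(a, b):
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

sims = cosine_sim(brain, speech_bank)          # (n_trials, n_candidates)

# Top-1: nearest candidate; top-10: true segment among the 10 most similar.
top1_acc = (sims.argmax(axis=1) == true_idx).mean()
top10 = np.argsort(-sims, axis=1)[:, :10]
top10_acc = (top10 == true_idx[:, None]).any(axis=1).mean()
print(top1_acc, top10_acc)
```

Top-10 accuracy is by construction at least as high as top-1, which is why retrieval results are often reported at several cut-offs.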
arXiv Detail & Related papers (2022-08-25T10:01:43Z)
- Many Heads but One Brain: an Overview of Fusion Brain Challenge on AI Journey 2021
The Fusion Brain Challenge aims at a universal architecture that can process different modalities.
We have created datasets for each task to test the participants' submissions on it.
The Russian part of the dataset is the largest Russian handwritten dataset in the world.
arXiv Detail & Related papers (2021-11-22T03:46:52Z)
- Woodscape Fisheye Semantic Segmentation for Autonomous Driving -- CVPR 2021 OmniCV Workshop Challenge
WoodScape fisheye semantic segmentation challenge for autonomous driving was held as part of the CVPR 2021 Workshop on Omnidirectional Computer Vision.
We provide a summary of the competition which attracted the participation of 71 global teams and a total of 395 submissions.
The top teams recorded significantly improved mean IoU and accuracy scores over the baseline PSPNet with ResNet-50 backbone.
arXiv Detail & Related papers (2021-07-17T14:32:58Z)
- The Algonauts Project 2021 Challenge: How the Human Brain Makes Sense of a World in Motion
We release the 2021 edition of the Algonauts Project Challenge: How the Human Brain Makes Sense of a World in Motion.
We provide whole-brain fMRI responses recorded while 10 human participants viewed a rich set of over 1,000 short video clips depicting everyday events.
The goal of the challenge is to accurately predict brain responses to these video clips.
arXiv Detail & Related papers (2021-04-28T11:38:31Z)
- NTIRE 2020 Challenge on Real-World Image Super-Resolution: Methods and Results
This paper reviews the NTIRE 2020 challenge on real world super-resolution.
The challenge addresses the real-world setting, where paired true high- and low-resolution images are unavailable.
In total 22 teams competed in the final testing phase, demonstrating new and innovative solutions to the problem.
arXiv Detail & Related papers (2020-05-05T08:17:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.