"Task-relevant autoencoding" enhances machine learning for human
neuroscience
- URL: http://arxiv.org/abs/2208.08478v2
- Date: Fri, 22 Sep 2023 17:04:28 GMT
- Authors: Seyedmehdi Orouji, Vincent Taschereau-Dumouchel, Aurelio Cortese,
Brian Odegaard, Cody Cushing, Mouslim Cherkaoui, Mitsuo Kawato, Hakwan Lau,
and Megan A. K. Peters
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In human neuroscience, machine learning can help reveal lower-dimensional
neural representations relevant to subjects' behavior. However,
state-of-the-art models typically require large datasets to train, so are prone
to overfitting on human neuroimaging data that often possess few samples but
many input dimensions. Here, we capitalized on the fact that the features we
seek in human neuroscience are precisely those relevant to subjects' behavior.
We thus developed a Task-Relevant Autoencoder via Classifier Enhancement
(TRACE), and tested its ability to extract behaviorally-relevant, separable
representations compared to a standard autoencoder, a variational autoencoder,
and principal component analysis for two severely truncated machine learning
datasets. We then evaluated all models on fMRI data from 59 subjects who
observed animals and objects. TRACE outperformed all models nearly
universally, showing up to 12% increased classification accuracy and up to 56%
improvement in discovering "cleaner", task-relevant representations. These
results showcase TRACE's potential for a wide variety of data related to human
behavior.
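The joint objective behind the classifier-enhancement idea can be sketched in a few lines: a standard autoencoder reconstruction loss plus a classification term on the latent bottleneck, which biases the low-dimensional code toward behaviorally separable features. The linear maps, toy data, and loss weighting below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy data: 20 samples, 50 input dimensions (e.g. voxels), 2 classes.
X = rng.normal(size=(20, 50))
y = rng.integers(0, 2, size=20)

# Illustrative linear encoder, decoder, and classifier head.
W_enc = rng.normal(scale=0.1, size=(50, 3))   # 3-dim latent bottleneck
W_dec = rng.normal(scale=0.1, size=(3, 50))
W_clf = rng.normal(scale=0.1, size=(3, 2))

def trace_loss(X, y, lam=1.0):
    """Joint loss: reconstruction error plus a classification term on
    the latent code, which pulls the bottleneck toward task-relevant
    (behaviorally separable) features."""
    z = X @ W_enc                       # latent representation
    X_hat = z @ W_dec                   # reconstruction
    recon = np.mean((X - X_hat) ** 2)   # autoencoder term
    p = softmax(z @ W_clf)              # classifier on the latent code
    ce = -np.mean(np.log(p[np.arange(len(y)), y] + 1e-12))
    return recon + lam * ce

print(round(trace_loss(X, y), 3))
```

Setting `lam=0` recovers a plain autoencoder objective; increasing it trades reconstruction fidelity for class separability in the latent space.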
Related papers
- Neuroformer: Multimodal and Multitask Generative Pretraining for Brain Data [3.46029409929709]
State-of-the-art systems neuroscience experiments yield large-scale multimodal data, and these data sets require new tools for analysis.
Inspired by the success of large pretrained models in vision and language domains, we reframe the analysis of large-scale, cellular-resolution neuronal spiking data into an autoregressive generation problem.
We first trained Neuroformer on simulated datasets, and found that it both accurately predicted intrinsically simulated neuronal circuit activity, and also inferred the underlying neural circuit connectivity, including direction.
arXiv Detail & Related papers (2023-10-31T20:17:32Z) - Reducing Intraspecies and Interspecies Covariate Shift in Traumatic
Brain Injury EEG of Humans and Mice Using Transfer Euclidean Alignment [4.264615907591813]
High variability across subjects poses a significant challenge when it comes to deploying machine learning models for classification tasks in the real world.
In such instances, machine learning models that exhibit exceptional performance on a specific dataset may not necessarily demonstrate similar proficiency when applied to a distinct dataset for the same task.
We introduce Transfer Euclidean Alignment, a transfer learning technique that addresses the robustness problem of training deep learning models on human biomedical data.
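The Euclidean alignment step underlying this technique is itself a simple, well-known operation: each subject's trials are whitened by the inverse square root of that subject's mean spatial covariance, so that the aligned covariances average to the identity across subjects. A minimal numpy sketch of that base operation, leaving out the transfer-learning wrapper that is the paper's contribution:

```python
import numpy as np

def euclidean_align(trials):
    """Whiten trials by the inverse square root of their mean spatial
    covariance, so the aligned covariances average to the identity."""
    # trials: (n_trials, n_channels, n_samples)
    covs = np.einsum('ict,idt->icd', trials, trials) / trials.shape[-1]
    R = covs.mean(axis=0)
    vals, vecs = np.linalg.eigh(R)                 # R is symmetric PSD
    R_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return np.einsum('cd,idt->ict', R_inv_sqrt, trials)

rng = np.random.default_rng(1)
trials = rng.normal(size=(30, 8, 100)) * 3.0       # toy EEG-like data
aligned = euclidean_align(trials)
mean_cov = np.einsum('ict,idt->cd', aligned, aligned) / (30 * 100)
print(np.allclose(mean_cov, np.eye(8), atol=1e-6))  # identity after alignment
```

Because each subject is normalized into the same reference frame, a model trained on one subject's aligned trials transfers more readily to another's.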
arXiv Detail & Related papers (2023-10-03T19:48:02Z) - Bayesian Time-Series Classifier for Decoding Simple Visual Stimuli from
Intracranial Neural Activity [0.0]
We propose a straightforward Bayesian time series classifier (BTsC) model that tackles challenges whilst maintaining a high level of interpretability.
We demonstrate the classification capabilities of this approach by utilizing neural data to decode colors in a visual task.
The proposed solution can be applied to neural data recorded in various tasks, where there is a need for interpretable results.
arXiv Detail & Related papers (2023-07-28T17:04:06Z) - Dataset Bias in Human Activity Recognition [57.91018542715725]
This contribution statistically curates the training data to assess the degree to which the physical characteristics of human subjects influence HAR performance.
We evaluate a state-of-the-art convolutional neural network on two time-series HAR datasets that differ in sensors, activities, and recording conditions.
arXiv Detail & Related papers (2023-01-19T12:33:50Z) - Overcoming the Domain Gap in Neural Action Representations [60.47807856873544]
3D pose data can now be reliably extracted from multi-view video sequences without manual intervention.
We propose to use it to guide the encoding of neural action representations together with a set of neural and behavioral augmentations.
To reduce the domain gap, during training, we swap neural and behavioral data across animals that seem to be performing similar actions.
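The swap augmentation described above can be sketched simply: each sample's neural recording is replaced by that of a different animal performing a similar action. The exact-label matching criterion and array shapes below are hypothetical stand-ins for the paper's behavioral-similarity pairing.

```python
import numpy as np

rng = np.random.default_rng(0)

def swap_across_animals(neural, labels, animal_ids, rng):
    """Domain-gap augmentation sketch: replace each sample's neural
    recording with one from a different animal performing the same
    action (same label), when such a partner exists."""
    swapped = neural.copy()
    for i in range(len(labels)):
        candidates = np.where((labels == labels[i]) &
                              (animal_ids != animal_ids[i]))[0]
        if candidates.size:
            swapped[i] = neural[rng.choice(candidates)]
    return swapped

# Toy data: 12 samples, 5 neural features, 3 actions, 2 animals.
neural = rng.normal(size=(12, 5))
labels = np.repeat([0, 1, 2], 4)
animal_ids = np.tile([0, 0, 1, 1], 3)
augmented = swap_across_animals(neural, labels, animal_ids, rng)
print(augmented.shape)
```

Training on swapped pairs encourages the encoder to map different animals' recordings of the same action to nearby representations, shrinking the inter-animal domain gap.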
arXiv Detail & Related papers (2021-12-02T12:45:46Z) - Overcoming the Domain Gap in Contrastive Learning of Neural Action
Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z) - Evaluating deep transfer learning for whole-brain cognitive decoding [11.898286908882561]
Transfer learning (TL) is well-suited to improve the performance of deep learning (DL) models in datasets with small numbers of samples.
Here, we evaluate TL for the application of DL models to the decoding of cognitive states from whole-brain functional Magnetic Resonance Imaging (fMRI) data.
arXiv Detail & Related papers (2021-11-01T15:44:49Z) - Where is my hand? Deep hand segmentation for visual self-recognition in
humanoid robots [129.46920552019247]
We propose the use of a Convolutional Neural Network (CNN) to segment the robot hand from an image in an egocentric view.
We fine-tuned the Mask-RCNN network for the specific task of segmenting the hand of the humanoid robot Vizzy.
arXiv Detail & Related papers (2021-02-09T10:34:32Z) - Category-Learning with Context-Augmented Autoencoder [63.05016513788047]
Finding an interpretable non-redundant representation of real-world data is one of the key problems in Machine Learning.
We propose a novel method of using data augmentations when training autoencoders.
We train a Variational Autoencoder in such a way that the transformation outcome is predictable by an auxiliary network.
arXiv Detail & Related papers (2020-10-10T14:04:44Z) - Learning Predictive Models From Observation and Interaction [137.77887825854768]
Learning predictive models from interaction with the world allows an agent, such as a robot, to learn about how the world works.
However, learning a model that captures the dynamics of complex skills represents a major challenge.
We propose a method to augment the training set with observational data of other agents, such as humans.
arXiv Detail & Related papers (2019-12-30T01:10:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.