BELT: Bootstrapping Electroencephalography-to-Language Decoding and Zero-Shot Sentiment Classification by Natural Language Supervision
- URL: http://arxiv.org/abs/2309.12056v2
- Date: Sun, 10 Dec 2023 02:15:31 GMT
- Title: BELT: Bootstrapping Electroencephalography-to-Language Decoding and Zero-Shot Sentiment Classification by Natural Language Supervision
- Authors: Jinzhao Zhou, Yiqun Duan, Yu-Cheng Chang, Yu-Kai Wang, Chin-Teng Lin
- Abstract summary: The proposed BELT method is a generic and efficient framework that bootstraps EEG representation learning.
BELT leverages off-the-shelf large LMs trained on Internet-scale datasets, exploiting their capacity for semantic understanding and zero-shot generalization.
We achieve state-of-the-art results on two key brain decoding tasks: brain-to-language translation and zero-shot sentiment classification.
- Score: 31.382825932199935
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents BELT, a novel model and learning framework for
the pivotal topic of brain-to-language translation research. Translating
noninvasive brain signals into readable natural language has the potential to
broaden the application scenarios of brain-computer interfaces (BCIs) and to
advance their development as a whole. The critical problem in brain signal
decoding, or brain-to-language translation, is obtaining semantically
appropriate and discriminative EEG representations from datasets of limited
scale and quality. The proposed BELT method is a generic and efficient
framework that bootstraps EEG representation learning using off-the-shelf
large-scale pretrained language models (LMs). By exploiting a large LM's
capacity for semantic understanding and zero-shot generalization, BELT
leverages LMs trained on Internet-scale datasets to bring significant
improvements to the understanding of EEG signals.
In particular, the BELT model is composed of a deep conformer encoder and a
vector quantization encoder. Semantic EEG representations are obtained through
a contrastive learning step that provides natural language supervision. We
achieve state-of-the-art results on two key brain decoding tasks:
brain-to-language translation and zero-shot sentiment classification.
Specifically, our model surpasses the baseline by 5.45% and by over 10% on the
two tasks respectively, achieving a 42.31% BLEU-1 score and 67.32% precision
on the main evaluation metrics for translation and zero-shot sentiment
classification.
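The training recipe the abstract outlines can be made concrete with a short sketch. The following is a hypothetical, minimal PyTorch rendering, not the authors' released code: an EEG encoder (standing in for the deep conformer encoder) whose output is vector-quantized and aligned to frozen-LM sentence embeddings with a symmetric contrastive loss. All module names, dimensions, and the codebook size are illustrative assumptions.

```python
# Hypothetical sketch of BELT-style natural language supervision (PyTorch).
# Module names, dimensions, and the codebook size are assumptions, not the
# authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    """Nearest-neighbor vector quantization with a straight-through estimator."""
    def __init__(self, num_codes=512, dim=256, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.beta = beta

    def forward(self, z):                                  # z: (batch, dim)
        dists = torch.cdist(z, self.codebook.weight)       # (batch, num_codes)
        codes = self.codebook(dists.argmin(dim=-1))        # nearest code vectors
        # Codebook + commitment terms, as in VQ-VAE.
        vq_loss = F.mse_loss(codes, z.detach()) + self.beta * F.mse_loss(z, codes.detach())
        z_q = z + (codes - z).detach()                     # straight-through gradient
        return z_q, vq_loss

class EEGEncoder(nn.Module):
    """Stand-in for the deep conformer encoder: anything that maps raw EEG
    (batch, channels, time) to a (batch, dim) embedding fits this slot."""
    def __init__(self, channels=105, dim=256):
        super().__init__()
        self.conv = nn.Conv1d(channels, dim, kernel_size=7, padding=3)
        self.pool = nn.AdaptiveAvgPool1d(1)

    def forward(self, x):
        return self.pool(F.gelu(self.conv(x))).squeeze(-1)

def clip_style_loss(eeg_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE: the i-th EEG segment and i-th sentence are positives."""
    eeg_emb = F.normalize(eeg_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)               # from a frozen LM
    logits = eeg_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# One training step on a dummy batch (frozen-LM embeddings faked with randn).
encoder, vq = EEGEncoder(), VectorQuantizer()
eeg = torch.randn(8, 105, 500)                             # 8 segments, 105 ch, 500 samples
z_q, vq_loss = vq(encoder(eeg))
loss = clip_style_loss(z_q, torch.randn(8, 256)) + vq_loss
loss.backward()
```

Under this setup, zero-shot sentiment classification reduces to embedding a short prompt for each sentiment label with the frozen LM and picking the label whose embedding is closest to the quantized EEG embedding, so no sentiment-labeled EEG is required at training time.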
Related papers
- Brain-Like Language Processing via a Shallow Untrained Multihead Attention Network [16.317199232071232]
Large Language Models (LLMs) have been shown to be effective models of the human language system.
In this work, we investigate the key architectural components driving the surprising alignment of untrained models.
arXiv Detail & Related papers (2024-06-21T12:54:03Z)
- MAD: Multi-Alignment MEG-to-Text Decoding [21.155031900491654]
We present a novel approach for translating MEG signals into text using a speech-decoding framework with multiple alignments.
We achieve an impressive BLEU-1 score on the GWilliams dataset, significantly improving over the baseline from 5.49 to 10.44 BLEU-1.
arXiv Detail & Related papers (2024-06-03T16:43:10Z)
- Enhancing EEG-to-Text Decoding through Transferable Representations from Pre-trained Contrastive EEG-Text Masked Autoencoder [69.7813498468116]
We propose Contrastive EEG-Text Masked Autoencoder (CET-MAE), a novel model that orchestrates compound self-supervised learning across and within EEG and text.
We also develop a framework called E2T-PTR (EEG-to-Text decoding using Pretrained Transferable Representations) to decode text from EEG sequences.
arXiv Detail & Related papers (2024-02-27T11:45:21Z)
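The CET-MAE entry above pairs contrastive EEG-text alignment (as in the BELT sketch earlier) with masked autoencoding within each modality. As a toy illustration of the masked-autoencoding ingredient only, the sketch below zeroes out a random subset of EEG patch embeddings and scores reconstruction solely on the masked positions; the shapes, mask ratio, and stand-in encoder are assumptions rather than the CET-MAE architecture.

```python
# Toy masked-autoencoding objective over EEG patch sequences (PyTorch).
# Simplified variant: masked patches are zeroed rather than dropped, and
# all shapes below are assumptions, not CET-MAE's actual configuration.
import torch
import torch.nn as nn

def masked_recon_loss(patches, encoder, decoder, mask_ratio=0.5):
    """patches: (batch, seq, dim) EEG patch embeddings."""
    b, s, _ = patches.shape
    mask = torch.rand(b, s, device=patches.device) < mask_ratio   # True = masked
    corrupted = patches.masked_fill(mask.unsqueeze(-1), 0.0)      # hide masked patches
    recon = decoder(encoder(corrupted))                           # (batch, seq, dim)
    return ((recon - patches) ** 2)[mask].mean()                  # loss on masked only

# Tiny stand-in encoder/decoder to show the call shape.
enc = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True), num_layers=2)
dec = nn.Linear(64, 64)
loss = masked_recon_loss(torch.randn(8, 32, 64), enc, dec)
loss.backward()
```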
- In-Context Language Learning: Architectures and Algorithms [73.93205821154605]
We study ICL through the lens of a new family of model problems we term in-context language learning (ICLL).
We evaluate a diverse set of neural sequence models on regular ICLL tasks.
arXiv Detail & Related papers (2024-01-23T18:59:21Z)
- Language Generation from Brain Recordings [68.97414452707103]
We propose a generative language BCI that utilizes the capacity of a large language model and a semantic brain decoder.
The proposed model can generate coherent language sequences aligned with the semantic content of visual or auditory language stimuli.
Our findings demonstrate the potential and feasibility of employing BCIs in direct language generation.
arXiv Detail & Related papers (2023-11-16T13:37:21Z)
- Deep Representation Learning for Open Vocabulary Electroencephalography-to-Text Decoding [6.014363449216054]
We present an end-to-end deep learning framework for non-invasive brain recordings that brings modern representation learning approaches to neuroscience.
Our model achieves a BLEU-1 score of 42.75%, a ROUGE-1-F of 33.28%, and a BERTScore-F of 53.86%, outperforming the previous state-of-the-art methods by 3.38%, 8.43%, and 6.31%, respectively.
arXiv Detail & Related papers (2023-11-15T08:03:09Z)
- XDBERT: Distilling Visual Information to BERT from Cross-Modal Systems to Improve Language Understanding [73.24847320536813]
This study explores distilling visual information from pretrained multimodal transformers to pretrained language encoders.
Our framework is inspired by the success of cross-modal encoders on vision-language tasks, while the learning objective is altered to suit the language-heavy characteristics of NLU.
arXiv Detail & Related papers (2022-04-15T03:44:00Z)
- Open Vocabulary Electroencephalography-To-Text Decoding and Zero-shot Sentiment Classification [78.120927891455]
State-of-the-art brain-to-text systems have achieved great success in decoding language directly from brain signals using neural networks.
In this paper, we extend the problem to open-vocabulary Electroencephalography (EEG)-to-Text sequence-to-sequence decoding and zero-shot sentence sentiment classification on natural reading tasks.
Our model achieves a 40.1% BLEU-1 score on EEG-To-Text decoding and a 55.6% F1 score on zero-shot EEG-based ternary sentiment classification, which significantly outperforms supervised baselines.
arXiv Detail & Related papers (2021-12-05T21:57:22Z)
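The open-vocabulary formulation in the entry above is typically realized by projecting per-word EEG features into the input space of a pretrained sequence-to-sequence LM and letting its decoder generate unrestricted text. Below is a hedged sketch of that pattern using Hugging Face BART; the 840-dimensional feature size, the linear projection, and the dummy inputs are assumptions for illustration, not that paper's exact pipeline.

```python
# Hypothetical sketch: decode text from EEG word-level features by projecting
# them into a pretrained seq2seq model's embedding space (PyTorch + HF).
import torch
import torch.nn as nn
from transformers import BartForConditionalGeneration, BartTokenizer

model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
project = nn.Linear(840, model.config.d_model)    # EEG features -> BART dims (assumed)

eeg_feats = torch.randn(1, 20, 840)               # (batch, words, feat): dummy batch
inputs_embeds = project(eeg_feats)

# Training: teacher-force against the target sentence read by the subject.
labels = tokenizer("the movie was surprisingly good", return_tensors="pt").input_ids
loss = model(inputs_embeds=inputs_embeds, labels=labels).loss
loss.backward()

# Inference: encode the projected features once, then generate freely.
encoder_outputs = model.get_encoder()(inputs_embeds=inputs_embeds)
ids = model.generate(encoder_outputs=encoder_outputs, max_new_tokens=30)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```

This is the style of baseline that BELT's 42.31% BLEU-1 result above is compared against.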
- Reinforced Iterative Knowledge Distillation for Cross-Lingual Named Entity Recognition [54.92161571089808]
Cross-lingual NER transfers knowledge from rich-resource languages to low-resource languages.
Existing cross-lingual NER methods do not make good use of rich unlabeled data in target languages.
We develop a novel approach based on the ideas of semi-supervised learning and reinforcement learning.
arXiv Detail & Related papers (2021-06-01T05:46:22Z)