CogAlign: Learning to Align Textual Neural Representations to Cognitive Language Processing Signals
- URL: http://arxiv.org/abs/2106.05544v3
- Date: Tue, 14 Nov 2023 07:42:42 GMT
- Title: CogAlign: Learning to Align Textual Neural Representations to Cognitive Language Processing Signals
- Authors: Yuqi Ren and Deyi Xiong
- Abstract summary: We propose a CogAlign approach to integrate cognitive language processing signals into natural language processing models.
We show that CogAlign achieves significant improvements with multiple cognitive features over state-of-the-art models on public datasets.
- Score: 60.921888445317705
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most previous studies integrate cognitive language processing signals (e.g.,
eye-tracking or EEG data) into neural models of natural language processing
(NLP) just by directly concatenating word embeddings with cognitive features,
ignoring the gap between the two modalities (i.e., textual vs. cognitive) and
noise in cognitive features. In this paper, we propose CogAlign, an approach
that addresses these issues by learning to align textual neural representations
to cognitive features. In CogAlign, we use a shared encoder, equipped with a
modality discriminator, to alternately encode textual and cognitive inputs and
to capture their differences and commonalities. Additionally, a text-aware
attention mechanism is proposed to detect task-related information and to suppress
noise in cognitive features. Experimental results on three NLP tasks, namely
named entity recognition, sentiment analysis and relation extraction, show that
CogAlign achieves significant improvements with multiple cognitive features
over state-of-the-art models on public datasets. Moreover, our model is able to
transfer cognitive information to other datasets that do not have any cognitive
processing signals.
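The two mechanisms named in the abstract, adversarial modality alignment through a shared encoder and text-aware attention over cognitive features, can be pictured concretely. Below is a minimal PyTorch sketch, not the authors' implementation: the BiLSTM encoder, the gradient-reversal discriminator, the use of nn.MultiheadAttention as the text-aware attention, and all dimensions (e.g., 21 cognitive features per word) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the backward
    pass so the shared encoder learns to fool the modality discriminator."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class CogAlignSketch(nn.Module):
    # All dimensions are placeholders (e.g., 21 cognitive features per word).
    def __init__(self, text_dim=100, cog_dim=21, hidden=128):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden)
        self.cog_proj = nn.Linear(cog_dim, hidden)
        # Shared encoder alternately consumes textual and cognitive inputs.
        self.shared_enc = nn.LSTM(hidden, hidden, batch_first=True,
                                  bidirectional=True)
        # Modality discriminator: classifies states as textual vs. cognitive.
        self.discriminator = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 2))
        # Text-aware attention: textual states query cognitive states, so
        # task-irrelevant (noisy) cognitive features receive low weight.
        self.text_aware_attn = nn.MultiheadAttention(
            embed_dim=2 * hidden, num_heads=4, batch_first=True)

    def forward(self, text_emb, cog_feats, lambd=1.0):
        h_text, _ = self.shared_enc(self.text_proj(text_emb))  # (B, T, 2H)
        h_cog, _ = self.shared_enc(self.cog_proj(cog_feats))   # (B, T, 2H)
        # Adversarial branch: gradient reversal before the discriminator.
        states = torch.cat([h_text, h_cog], dim=1)
        modality_logits = self.discriminator(GradReverse.apply(states, lambd))
        # Fuse textual states with attended cognitive context for the task.
        attn_cog, _ = self.text_aware_attn(h_text, h_cog, h_cog)
        return torch.cat([h_text, attn_cog], dim=-1), modality_logits

# Smoke test: batch of 2 sentences, 10 tokens each.
model = CogAlignSketch()
fused, logits = model(torch.randn(2, 10, 100), torch.randn(2, 10, 21))
print(fused.shape, logits.shape)  # (2, 10, 512) and (2, 20, 2)
```

In training, one would alternate textual and cognitive batches and add the discriminator's cross-entropy loss to the task loss; the reversed gradients push the shared encoder toward modality-invariant representations, while the attention lets textual states down-weight noisy cognitive features.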
Related papers
- Collaborative Cognitive Diagnosis with Disentangled Representation Learning for Learner Modeling [14.574222901039155]
Leveraging collaborative connections among similar learners is valuable for understanding human learning.
We present Coral, a Collaborative cognitive diagnosis model with disentangled representation learning.
arXiv Detail & Related papers (2024-11-04T13:13:25Z)
- Visual Neural Decoding via Improved Visual-EEG Semantic Consistency
Methods that directly map EEG features to the CLIP embedding space may introduce mapping bias and cause semantic inconsistency.
We propose a Visual-EEG Semantic Decouple Framework that explicitly extracts the semantic-related features of these two modalities to facilitate optimal alignment.
Our method achieves state-of-the-art results in zero-shot neural decoding tasks. (A generic contrastive-alignment sketch appears after this list.)
arXiv Detail & Related papers (2024-08-13T10:16:10Z)
- Self-Supervised Representation Learning with Spatial-Temporal Consistency for Sign Language Recognition [96.62264528407863]
We propose a self-supervised contrastive learning framework to excavate rich context via spatial-temporal consistency.
Inspired by the complementary property of motion and joint modalities, we first introduce first-order motion information into sign language modeling.
Our method is evaluated with extensive experiments on four public benchmarks, and achieves new state-of-the-art performance with a notable margin.
arXiv Detail & Related papers (2024-06-15T04:50:19Z)
- Decoding Visual Neural Representations by Multimodal Learning of Brain-Visual-Linguistic Features [9.783560855840602]
This paper presents a generic neural decoding method called BraVL that uses multimodal learning of brain-visual-linguistic features.
We focus on modeling the relationships between brain, visual and linguistic features via multimodal deep generative models.
In particular, our BraVL model can be trained under various semi-supervised scenarios to incorporate the visual and textual features obtained from the extra categories.
arXiv Detail & Related papers (2022-10-13T05:49:33Z)
- Retrieval-Augmented Transformer for Image Captioning [51.79146669195357]
We develop an image captioning approach with a kNN memory, with which knowledge can be retrieved from an external corpus to aid the generation process.
Our architecture combines a knowledge retriever based on visual similarities, a differentiable encoder, and a kNN-augmented attention layer to predict tokens.
Experiments conducted on the COCO dataset demonstrate that an explicit external memory can aid the generation process and increase caption quality. (A simplified kNN-augmented attention sketch appears after this list.)
arXiv Detail & Related papers (2022-07-26T19:35:49Z)
- Bridging between Cognitive Processing Signals and Linguistic Features via a Unified Attentional Network [25.235060468310696]
We propose a data-driven method to investigate the relationship between cognitive processing signals and linguistic features.
We present a unified attentional framework that is composed of embedding, attention, encoding and predicting layers.
The proposed framework can be used to detect a wide range of linguistic features with a single cognitive dataset.
arXiv Detail & Related papers (2021-12-16T12:25:11Z)
- Cognition-aware Cognate Detection [46.69412510723641]
We propose a novel method for enriching feature sets with cognitive features extracted from human readers' gaze behaviour.
We collect gaze behaviour data for a small sample of cognates and show that the extracted cognitive features help the task of cognate detection.
We use the collected gaze behaviour data to predict cognitive features for a larger sample and show that the predicted cognitive features also significantly improve task performance.
arXiv Detail & Related papers (2021-12-15T12:48:04Z)
- EEGminer: Discovering Interpretable Features of Brain Activity with Learnable Filters [72.19032452642728]
We propose a novel differentiable EEG decoding pipeline consisting of learnable filters and a pre-determined feature extraction module.
We demonstrate the utility of our model towards emotion recognition from EEG signals on the SEED dataset and on a new EEG dataset of unprecedented size.
The discovered features align with previous neuroscience studies and offer new insights, such as marked differences in the functional connectivity profile between left and right temporal areas during music listening.
arXiv Detail & Related papers (2021-10-19T14:22:04Z)
- Probing Linguistic Features of Sentence-Level Representations in Neural Relation Extraction [80.38130122127882]
We introduce 14 probing tasks targeting linguistic properties relevant to neural relation extraction (RE).
We use them to study representations learned by more than 40 combinations of encoder architectures and linguistic features, trained on two datasets.
We find that the biases induced by the architecture and the inclusion of linguistic features are clearly expressed in probing-task performance. (A minimal probing sketch follows this list.)
arXiv Detail & Related papers (2020-04-17T09:17:40Z)
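Probing, as in the last entry above, follows a simple recipe: freeze the trained encoder, extract representations, and fit a lightweight classifier to predict a linguistic property; above-chance accuracy suggests the property is encoded. A minimal scikit-learn sketch, with random stand-in arrays in place of real encoder outputs and property labels:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-ins: reps[i] would be a frozen sentence representation from an RE
# encoder; labels[i] a linguistic property (e.g., an entity-distance bucket).
rng = np.random.default_rng(0)
reps = rng.normal(size=(1000, 256))
labels = rng.integers(0, 3, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(reps, labels, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"probing accuracy: {probe.score(X_te, y_te):.3f}")  # ~chance on noise
```

With real representations, comparing probe accuracy across encoder variants shows which architectures and feature combinations encode which properties.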
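For the Visual-EEG entry above: alignment between EEG and visual embeddings in such decoders is typically trained with a symmetric contrastive (InfoNCE / CLIP-style) objective. A generic sketch of that objective, not the cited paper's exact loss:

```python
import torch
import torch.nn.functional as F

def info_nce(eeg_emb, vis_emb, temperature=0.07):
    """Symmetric InfoNCE loss: matched EEG/visual pairs (row i of each) are
    pulled together, mismatched pairs pushed apart."""
    eeg = F.normalize(eeg_emb, dim=-1)
    vis = F.normalize(vis_emb, dim=-1)
    logits = eeg @ vis.T / temperature    # (B, B) similarity matrix
    targets = torch.arange(len(eeg))      # diagonal entries are positives
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2

loss = info_nce(torch.randn(16, 128), torch.randn(16, 128))
```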
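For the Retrieval-Augmented Transformer entry above: kNN-augmented attention can be sketched as ordinary dot-product attention whose key/value set is enlarged, per token, with the top-k entries retrieved from an external memory. Retrieval below uses plain dot-product similarity; the paper's visual-similarity retriever and any gating are not reproduced:

```python
import torch
import torch.nn.functional as F

def knn_augmented_attention(q, k_loc, v_loc, mem_k, mem_v, k=8):
    """Single-head attention over local keys plus the top-k external-memory
    entries retrieved per query token. q, k_loc, v_loc: (T, d); the external
    memory mem_k, mem_v: (M, d)."""
    T, d = q.shape
    idx = (q @ mem_k.T).topk(k, dim=-1).indices  # (T, k) nearest entries
    keys = torch.cat([k_loc.unsqueeze(0).expand(T, -1, -1), mem_k[idx]], dim=1)
    vals = torch.cat([v_loc.unsqueeze(0).expand(T, -1, -1), mem_v[idx]], dim=1)
    scores = (q.unsqueeze(1) @ keys.transpose(1, 2)).squeeze(1) / d ** 0.5
    attn = F.softmax(scores, dim=-1)              # (T, T + k)
    return (attn.unsqueeze(1) @ vals).squeeze(1)  # (T, d)

# Self-attention over 5 tokens, augmented with a 100-entry memory.
x, mem = torch.randn(5, 64), torch.randn(100, 64)
print(knn_augmented_attention(x, x, x, mem, mem).shape)  # torch.Size([5, 64])
```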