CogniFNN: A Fuzzy Neural Network Framework for Cognitive Word Embedding Evaluation
- URL: http://arxiv.org/abs/2009.11485v2
- Date: Thu, 29 Jul 2021 05:34:32 GMT
- Title: CogniFNN: A Fuzzy Neural Network Framework for Cognitive Word Embedding Evaluation
- Authors: Xinping Liu, Zehong Cao, Son Tran
- Abstract summary: We propose the CogniFNN framework, the first attempt to use fuzzy neural networks to extract non-linear and non-stationary characteristics for evaluating English word embeddings.
We used 15 human cognitive datasets across three modalities: EEG, fMRI, and eye-tracking.
Compared to the recent pioneering framework, our proposed CogniFNN showed smaller prediction errors for both context-independent (GloVe) and context-sensitive (BERT) word embeddings.
- Score: 18.23390072160049
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Word embeddings can reflect semantic representations, and embedding
quality can be comprehensively evaluated against human cognitive data sources
collected during natural reading. In this paper, we propose the CogniFNN
framework, the first attempt to use fuzzy neural networks to extract non-linear
and non-stationary characteristics for evaluating English word embeddings
against the corresponding cognitive datasets. In our experiments, we used 15
human cognitive datasets across three modalities (EEG, fMRI, and eye-tracking)
and selected mean squared error and multiple hypothesis testing as metrics to
evaluate the proposed CogniFNN framework. Compared to the recent pioneering
framework, our proposed CogniFNN showed smaller prediction errors for both
context-independent (GloVe) and context-sensitive (BERT) word embeddings, and
achieved higher significance ratios against randomly generated word embeddings.
Our findings suggest that the CogniFNN framework can provide a more accurate
and comprehensive evaluation of word embeddings against cognitive data, and may
benefit further evaluation of word embeddings on extrinsic natural language
processing tasks.
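The evaluation protocol described above (predict cognitive signals from embedding vectors, score with mean squared error, and compare against a random-embedding baseline) can be illustrated with a minimal sketch. This is not the paper's code: CogniFNN uses a fuzzy neural network as the predictor, whereas the stand-in below uses closed-form ridge regression on synthetic data purely to show the shape of the evaluation.

```python
# Minimal sketch (assumption: not the authors' implementation) of evaluating a
# word embedding by how well it predicts a cognitive signal, scored with MSE.
# Synthetic data stand in for real EEG/fMRI/eye-tracking recordings, and ridge
# regression stands in for CogniFNN's fuzzy neural network predictor.
import numpy as np

rng = np.random.default_rng(0)
n_words, emb_dim, cog_dim = 200, 50, 5

# Hidden structure shared by the "good" embedding and the cognitive targets.
latent = rng.normal(size=(n_words, emb_dim))
cognitive = latent[:, :cog_dim] + 0.1 * rng.normal(size=(n_words, cog_dim))

# A trained-like embedding correlated with the latent structure, and a
# randomly generated baseline embedding (as in the paper's significance tests).
trained_emb = latent + 0.1 * rng.normal(size=(n_words, emb_dim))
random_emb = rng.normal(size=(n_words, emb_dim))

def ridge_mse(X, Y, alpha=1.0, train_frac=0.8):
    """Fit ridge regression embedding -> cognitive data; return held-out MSE."""
    n_train = int(len(X) * train_frac)
    Xtr, Ytr, Xte, Yte = X[:n_train], Y[:n_train], X[n_train:], Y[n_train:]
    # Closed-form ridge solution: W = (X'X + alpha*I)^-1 X'Y
    W = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(X.shape[1]), Xtr.T @ Ytr)
    return float(np.mean((Xte @ W - Yte) ** 2))

mse_trained = ridge_mse(trained_emb, cognitive)
mse_random = ridge_mse(random_emb, cognitive)

print(f"MSE (trained embedding): {mse_trained:.4f}")
print(f"MSE (random embedding):  {mse_random:.4f}")
# An embedding that encodes cognitively relevant information should yield a
# lower prediction error than the random baseline.
```

In the paper's full protocol this comparison is repeated across the 15 datasets, with multiple hypothesis testing used to judge how often the real embedding significantly outperforms random ones.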
Related papers
- Precision, Stability, and Generalization: A Comprehensive Assessment of RNNs learnability capability for Classifying Counter and Dyck Languages [9.400009043451046]
This study investigates the learnability of Recurrent Neural Networks (RNNs) in classifying structured formal languages.
Traditionally, both first-order (LSTM) and second-order (O2RNN) RNNs have been considered effective for such tasks.
arXiv Detail & Related papers (2024-10-04T03:22:49Z) - Convolutional Neural Networks for Sentiment Analysis on Weibo Data: A Natural Language Processing Approach [0.228438857884398]
This study addresses the complex task of sentiment analysis on a dataset of 119,988 original tweets from Weibo using a Convolutional Neural Network (CNN).
A CNN-based model was utilized, leveraging word embeddings for feature extraction, and trained to perform sentiment classification.
The model achieved a macro-average F1-score of approximately 0.73 on the test set, showing balanced performance across positive, neutral, and negative sentiments.
arXiv Detail & Related papers (2023-07-13T03:02:56Z) - Lexical semantics enhanced neural word embeddings [4.040491121427623]
hierarchy-fitting is a novel approach to modelling semantic similarity nuances inherently stored in the IS-A hierarchies.
Results demonstrate the efficacy of hierarchy-fitting in specialising neural embeddings with semantic relations in late fusion.
arXiv Detail & Related papers (2022-10-03T08:10:23Z) - Initial Study into Application of Feature Density and Linguistically-backed Embedding to Improve Machine Learning-based Cyberbullying Detection [54.83707803301847]
The research was conducted on a Formspring dataset provided in a Kaggle competition on automatic cyberbullying detection.
The study confirmed the effectiveness of Neural Networks in cyberbullying detection and the correlation between classifier performance and Feature Density.
arXiv Detail & Related papers (2022-06-04T03:17:15Z) - Knowledge Enhanced Neural Networks for relational domains [83.9217787335878]
We focus on a specific method, KENN, a Neural-Symbolic architecture that injects prior logical knowledge into a neural network.
In this paper, we propose an extension of KENN for relational data.
arXiv Detail & Related papers (2022-05-31T13:00:34Z) - A Survey On Neural Word Embeddings [0.4822598110892847]
The study of meaning in natural language processing relies on the distributional hypothesis.
The revolutionary idea of distributed representation for a concept is close to the working of a human mind.
Neural word embeddings transformed the whole field of NLP by introducing substantial improvements in all NLP tasks.
arXiv Detail & Related papers (2021-10-05T03:37:57Z) - FF-NSL: Feed-Forward Neural-Symbolic Learner [70.978007919101]
This paper introduces a neural-symbolic learning framework, called Feed-Forward Neural-Symbolic Learner (FF-NSL).
FF-NSL integrates state-of-the-art ILP systems based on the Answer Set semantics, with neural networks, in order to learn interpretable hypotheses from labelled unstructured data.
arXiv Detail & Related papers (2021-06-24T15:38:34Z) - CogAlign: Learning to Align Textual Neural Representations to Cognitive Language Processing Signals [60.921888445317705]
We propose a CogAlign approach to integrate cognitive language processing signals into natural language processing models.
We show that CogAlign achieves significant improvements with multiple cognitive features over state-of-the-art models on public datasets.
arXiv Detail & Related papers (2021-06-10T07:10:25Z) - Neural Networks Enhancement with Logical Knowledge [83.9217787335878]
We propose an extension of KENN for relational data.
The results show that KENN is capable of increasing the performance of the underlying neural network even in the presence of relational data.
arXiv Detail & Related papers (2020-09-13T21:12:20Z) - Neural Networks with Recurrent Generative Feedback [61.90658210112138]
We instantiate this design on convolutional neural networks (CNNs).
In the experiments, CNN-F shows considerably improved adversarial robustness over conventional feedforward CNNs on standard benchmarks.
arXiv Detail & Related papers (2020-07-17T19:32:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.