On the Effects of Using word2vec Representations in Neural Networks for Dialogue Act Recognition
- URL: http://arxiv.org/abs/2010.11490v1
- Date: Thu, 22 Oct 2020 07:21:17 GMT
- Title: On the Effects of Using word2vec Representations in Neural Networks for Dialogue Act Recognition
- Authors: Christophe Cerisara (SYNALP), Pavel Kral, Ladislav Lenc
- Abstract summary: We propose a new deep neural network that explores recurrent models to capture word sequences within sentences.
We validate this model on three languages: English, French and Czech.
- Score: 0.6767885381740952
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dialogue act recognition is an important component of a large number of
natural language processing pipelines. Many research works have been carried
out in this area, but relatively few investigate deep neural networks and word
embeddings. This is surprising, given that both of these techniques have proven
exceptionally good in most other language-related domains. We propose in this
work a new deep neural network that explores recurrent models to capture word
sequences within sentences, and further study the impact of pretrained word
embeddings. We validate this model on three languages: English, French and
Czech. The performance of the proposed approach is consistent across these
languages and it is comparable to the state-of-the-art results in English. More
importantly, we confirm that deep neural networks indeed outperform a Maximum
Entropy classifier, which was expected. However, and this is more surprising,
we also found that standard word2vec embeddings do not seem to bring valuable
information for this task and the proposed model, regardless of the size of the
training corpus. We thus further analyse the resulting embeddings and
conclude that a possible explanation may be related to the mismatch between the
type of lexical-semantic information captured by the word2vec embeddings, and
the kind of relations between words that is the most useful for the dialogue
act recognition task.
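As an informal illustration of the kind of model described in the abstract, the following is a minimal PyTorch sketch of a recurrent utterance classifier whose embedding layer can optionally be initialised from pretrained word2vec vectors. The class name, dimensions and hyperparameters are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class DialogueActClassifier(nn.Module):
    """Recurrent utterance encoder followed by a linear layer over dialogue act labels."""

    def __init__(self, vocab_size, embed_dim, hidden_dim, num_acts,
                 pretrained_embeddings=None, freeze_embeddings=False):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        if pretrained_embeddings is not None:
            # Optionally initialise from word2vec vectors; this is the variable
            # whose impact the paper studies.
            self.embedding.weight.data.copy_(pretrained_embeddings)
            self.embedding.weight.requires_grad = not freeze_embeddings
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_acts)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)            # (batch, seq_len, embed_dim)
        _, (last_hidden, _) = self.encoder(embedded)    # (1, batch, hidden_dim)
        return self.classifier(last_hidden.squeeze(0))  # (batch, num_acts)

# Toy usage: a batch of 4 padded utterances of 20 token ids each, 8 act labels.
model = DialogueActClassifier(vocab_size=10000, embed_dim=300, hidden_dim=128, num_acts=8)
logits = model(torch.randint(1, 10000, (4, 20)))
print(logits.shape)  # torch.Size([4, 8])
```

Training would typically minimise a cross-entropy loss over the dialogue act labels; comparing runs with and without the pretrained initialisation mirrors the comparison reported in the abstract.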
Related papers
- Training Neural Networks as Recognizers of Formal Languages [87.06906286950438]
Formal language theory pertains specifically to recognizers.
It is common to instead use proxy tasks that are similar in only an informal sense.
We correct this mismatch by training and evaluating neural networks directly as binary classifiers of strings.
arXiv Detail & Related papers (2024-11-11T16:33:25Z)
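To make the idea of training networks directly as recognizers concrete, here is a small sketch (an illustrative assumption, not the paper's setup) in which an LSTM is trained as a binary classifier of strings for a toy regular language: binary strings containing an even number of 1s.

```python
import random

import torch
import torch.nn as nn

# Toy regular language: binary strings containing an even number of 1s.
def in_language(s):
    return s.count("1") % 2 == 0

def encode(s, length=12):
    """Map characters '0'/'1' to ids 1/2 and pad with id 0 up to a fixed length."""
    ids = [int(c) + 1 for c in s] + [0] * (length - len(s))
    return torch.tensor(ids)

class StringRecognizer(nn.Module):
    """LSTM used directly as a binary recognizer: is the string in the language?"""
    def __init__(self, vocab_size=3, embed_dim=8, hidden_dim=16):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, 1)

    def forward(self, x):
        _, (h, _) = self.rnn(self.embedding(x))
        return self.out(h.squeeze(0)).squeeze(-1)  # one membership logit per string

# Minimal training loop on randomly generated strings.
model = StringRecognizer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()
for step in range(200):
    strings = ["".join(random.choice("01") for _ in range(10)) for _ in range(32)]
    x = torch.stack([encode(s) for s in strings])
    y = torch.tensor([float(in_language(s)) for s in strings])
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```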
- Multilingual Name Entity Recognition and Intent Classification Employing Deep Learning Architectures [2.9115403886004807]
We explore the effectiveness of two separate families of Deep Learning networks for named entity recognition and intent classification.
The models were trained and tested on the ATIS benchmark dataset for both English and Greek languages.
arXiv Detail & Related papers (2022-11-04T12:42:29Z)
- Sense representations for Portuguese: experiments with sense embeddings and deep neural language models [0.0]
Unsupervised sense representations can induce different senses of a word by analyzing its contextual semantics in a text.
We present the first experiments carried out for generating sense embeddings for Portuguese.
arXiv Detail & Related papers (2021-08-31T18:07:01Z)
- LadRa-Net: Locally-Aware Dynamic Re-read Attention Net for Sentence Semantic Matching [66.65398852962177]
We develop a novel Dynamic Re-read Network (DRr-Net) for sentence semantic matching.
We extend DRr-Net to Locally-Aware Dynamic Re-read Attention Net (LadRa-Net).
Experiments on two popular sentence semantic matching tasks demonstrate that DRr-Net can significantly improve the performance of sentence semantic matching.
arXiv Detail & Related papers (2021-08-06T02:07:04Z)
- AM2iCo: Evaluating Word Meaning in Context across Low-Resource Languages with Adversarial Examples [51.048234591165155]
We present AM2iCo, Adversarial and Multilingual Meaning in Context.
It aims to faithfully assess the ability of state-of-the-art (SotA) representation models to understand the identity of word meaning in cross-lingual contexts.
Results reveal that current SotA pretrained encoders substantially lag behind human performance.
arXiv Detail & Related papers (2021-04-17T20:23:45Z)
- A Novel Deep Learning Method for Textual Sentiment Analysis [3.0711362702464675]
This paper proposes a convolutional neural network integrated with a hierarchical attention layer to extract informative words.
The proposed model has higher classification accuracy and can extract informative words.
Applying incremental transfer learning can significantly enhance the classification performance.
arXiv Detail & Related papers (2021-02-23T12:11:36Z)
- Effect of Word Embedding Models on Hate and Offensive Speech Detection [1.7403133838762446]
We investigate the impact of both word embedding models and neural network architectures on the predictive accuracy.
We first train several word embedding models on a large-scale unlabelled Arabic text corpus.
For each detection task, we train several neural network classifiers using the pre-trained word embedding models.
This task yields a large number of various learned models, which allows conducting an exhaustive comparison.
arXiv Detail & Related papers (2020-11-23T02:43:45Z)
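The pipeline described above, pretraining embeddings on an unlabelled corpus and then reusing them in downstream classifiers, can be sketched with gensim as follows. This is a generic illustration assuming gensim >= 4 and a toy corpus, not the paper's Arabic setup; the resulting matrix could be passed as `pretrained_embeddings` to the classifier sketched earlier.

```python
import numpy as np
import torch
from gensim.models import Word2Vec

# Toy "unlabelled corpus": in practice, a large collection of tokenised sentences.
corpus = [
    ["this", "is", "a", "small", "example", "sentence"],
    ["word", "embeddings", "are", "trained", "without", "labels"],
]

# Train a skip-gram word2vec model on the unlabelled corpus (gensim >= 4 API).
w2v = Word2Vec(sentences=corpus, vector_size=100, window=5, min_count=1, sg=1, epochs=50)

# Build an embedding matrix aligned with an explicit vocabulary (index 0 kept for padding).
vocab = ["<pad>"] + list(w2v.wv.key_to_index)
embedding_matrix = np.zeros((len(vocab), w2v.wv.vector_size), dtype=np.float32)
for idx, word in enumerate(vocab[1:], start=1):
    embedding_matrix[idx] = w2v.wv[word]

pretrained = torch.from_numpy(embedding_matrix)  # usable as `pretrained_embeddings` above
print(pretrained.shape)
```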
- Be More with Less: Hypergraph Attention Networks for Inductive Text Classification [56.98218530073927]
Graph neural networks (GNNs) have received increasing attention in the research community and demonstrated promising results on text classification, a canonical task.
Despite the success, their performance could be largely jeopardized in practice since they are unable to capture high-order interaction between words.
We propose a principled model -- hypergraph attention networks (HyperGAT) which can obtain more expressive power with less computational consumption for text representation learning.
arXiv Detail & Related papers (2020-11-01T00:21:59Z)
- Intrinsic Probing through Dimension Selection [69.52439198455438]
Most modern NLP systems make use of pre-trained contextual representations that attain astonishingly high performance on a variety of tasks.
Such high performance should not be possible unless some form of linguistic structure inheres in these representations, and a wealth of research has sprung up on probing for it.
In this paper, we draw a distinction between intrinsic probing, which examines how linguistic information is structured within a representation, and the extrinsic probing popular in prior work, which only argues for the presence of such information by showing that it can be successfully extracted.
arXiv Detail & Related papers (2020-10-06T15:21:08Z)
- Information-Theoretic Probing for Linguistic Structure [74.04862204427944]
We propose an information-theoretic operationalization of probing as estimating mutual information.
We evaluate on a set of ten typologically diverse languages often underrepresented in NLP research.
arXiv Detail & Related papers (2020-04-07T01:06:36Z)
- Neural Networks for Projecting Named Entities from English to Ewondo [6.058868817939519]
We propose a new distributional representation of words to project named entities from a rich language to a low-resource one.
Although the proposed method reached appreciable results, the neural network it used was too large.
In this paper, we show experimentally that the same results can be obtained using a smaller neural network.
arXiv Detail & Related papers (2020-03-29T22:05:30Z)
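The Ewondo entry above does not spell out its architecture, so the following is only a generic sketch of the underlying idea: a deliberately small feed-forward network that projects word vectors from a rich source language into the vector space of a low-resource target language using a seed lexicon of aligned pairs. All names, dimensions and data are illustrative assumptions, not the paper's method.

```python
import torch
import torch.nn as nn

# Illustrative seed lexicon of aligned (source_vector, target_vector) pairs.
# In practice these would come from distributional representations of the two languages.
num_pairs, src_dim, tgt_dim = 500, 300, 100
source_vecs = torch.randn(num_pairs, src_dim)
target_vecs = torch.randn(num_pairs, tgt_dim)

# A deliberately small projection network: a single hidden layer.
projector = nn.Sequential(
    nn.Linear(src_dim, 64),
    nn.Tanh(),
    nn.Linear(64, tgt_dim),
)

optimizer = torch.optim.Adam(projector.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(projector(source_vecs), target_vecs)
    loss.backward()
    optimizer.step()

# A projected source-language entity vector can then be matched against
# target-language vectors by nearest-neighbour search.
```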
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.