Generating Sense Embeddings for Syntactic and Semantic Analogy for Portuguese
- URL: http://arxiv.org/abs/2001.07574v1
- Date: Tue, 21 Jan 2020 14:39:20 GMT
- Title: Generating Sense Embeddings for Syntactic and Semantic Analogy for Portuguese
- Authors: Jessica Rodrigues da Silva, Helena de Medeiros Caseli
- Abstract summary: We use techniques to generate sense embeddings and present the first experiments carried out for Portuguese.
Our experiments show that sense vectors outperform traditional word vectors in syntactic and semantic analogy tasks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Word embeddings are numerical vectors which can represent words or concepts
in a low-dimensional continuous space. These vectors are able to capture useful
syntactic and semantic information. Traditional approaches like Word2Vec,
GloVe and FastText share a key drawback: they produce a single vector
representation per word, ignoring the fact that ambiguous words can assume
different meanings. In this paper we apply techniques for generating sense
embeddings and present the first such experiments carried out for Portuguese.
Our experiments show that sense vectors outperform traditional word vectors in
syntactic and semantic analogy tasks, indicating that the language resource
generated here can improve the performance of NLP tasks in Portuguese.
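For concreteness, here is roughly how such an analogy test looks in practice with gensim; the file name is a placeholder, and with sense embeddings each vocabulary entry may carry a sense index (an assumption about the vector file, not the paper's exact setup):

```python
from gensim.models import KeyedVectors

# Load word or sense vectors in word2vec text format (placeholder file name).
vectors = KeyedVectors.load_word2vec_format("sense_embeddings_pt.txt", binary=False)

# Semantic analogy for Portuguese: "homem" is to "rei" as "mulher" is to ...
result = vectors.most_similar(positive=["rei", "mulher"], negative=["homem"], topn=1)
print(result)  # expected answer: "rainha" (queen), with some similarity score
```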
Related papers
- Backpack Language Models [108.65930795825416]
We present Backpacks, a new neural architecture that marries strong modeling performance with an interface for interpretability and control.
We find that, after training, sense vectors specialize, each encoding a different aspect of a word.
We present simple algorithms that intervene on sense vectors to perform controllable text generation and debiasing.
arXiv Detail & Related papers (2023-05-26T09:26:23Z)
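As a toy illustration of the idea described above (a word's representation as an editable weighted sum of sense vectors), here is a minimal numpy sketch; the dimensions, the weights, and the choice of which sense to suppress are all invented for illustration, not the Backpack architecture itself:

```python
import numpy as np

rng = np.random.default_rng(0)
k, d = 4, 8  # number of sense vectors per word, embedding dimension (hypothetical)

# Hypothetical sense vectors for one word: each row encodes one aspect/sense.
sense_vectors = rng.normal(size=(k, d))
# Contextualization weights (in a Backpack these come from the network; here random).
alpha = rng.dirichlet(np.ones(k))

# The word's representation is a weighted sum of its sense vectors.
rep = alpha @ sense_vectors

# Intervention: suppress sense 2 (say, a sense encoding an unwanted bias)
# and renormalize -- the representation changes in a controlled, interpretable way.
alpha_edited = alpha.copy()
alpha_edited[2] = 0.0
alpha_edited /= alpha_edited.sum()
rep_edited = alpha_edited @ sense_vectors
```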
- Tsetlin Machine Embedding: Representing Words Using Logical Expressions [10.825099126920028]
We introduce a Tsetlin Machine-based autoencoder that learns logical clauses in a self-supervised fashion.
The clauses consist of contextual words like "black," "cup," and "hot" to define other words like "coffee".
We evaluate our embedding approach on several intrinsic and extrinsic benchmarks, outperforming GloVe on six classification tasks.
arXiv Detail & Related papers (2023-01-02T15:02:45Z)
- Learning Sense-Specific Static Embeddings using Contextualised Word Embeddings as a Proxy [26.385418377513332]
We propose Context Derived Embeddings of Senses (CDES).
CDES extracts sense related information from contextualised embeddings and injects it into static embeddings to create sense-specific static embeddings.
We show that CDES can accurately learn sense-specific static embeddings reporting comparable performance to the current state-of-the-art sense embeddings.
arXiv Detail & Related papers (2021-10-05T17:50:48Z)
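CDES injects sense information extracted from contextualised embeddings into static embeddings; the sketch below covers only the extraction half, inducing sense-specific vectors by clustering a word's contextual embeddings. The model choice (bert-base-uncased), the k-means clustering, and the number of senses are assumptions for illustration, not the authors' pipeline:

```python
import numpy as np
import torch
from sklearn.cluster import KMeans
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def token_embedding(sentence: str, word: str) -> np.ndarray:
    """Contextual embedding of the first occurrence of `word` in `sentence`."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, hidden_size)
    word_id = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(word)[0])
    pos = (enc["input_ids"][0] == word_id).nonzero()[0].item()
    return hidden[pos].numpy()

contexts = [
    "He sat on the bank of the river.",
    "The river bank was muddy after the rain.",
    "She deposited the money at the bank.",
    "The bank approved the loan yesterday.",
]
X = np.stack([token_embedding(s, "bank") for s in contexts])

# Cluster the occurrences into senses; each centroid is a sense-specific vector.
kmeans = KMeans(n_clusters=2, n_init=10).fit(X)
sense_vectors = kmeans.cluster_centers_
```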
- Sense representations for Portuguese: experiments with sense embeddings and deep neural language models [0.0]
Unsupervised sense representations can induce different senses of a word by analyzing its contextual semantics in a text.
We present the first experiments carried out for generating sense embeddings for Portuguese.
arXiv Detail & Related papers (2021-08-31T18:07:01Z)
- Deriving Word Vectors from Contextualized Language Models using Topic-Aware Mention Selection [46.97185212695267]
We propose a method for learning word representations that follows the classic strategy of deriving a word's vector from the contexts in which it is mentioned.
We take advantage of contextualized language models (CLMs) rather than bags of word vectors to encode contexts.
We show that this simple strategy leads to high-quality word vectors, which are more predictive of semantic properties than word embeddings and existing CLM-based strategies.
arXiv Detail & Related papers (2021-06-15T08:02:42Z)
- WOVe: Incorporating Word Order in GloVe Word Embeddings [0.0]
Representing a word as a vector makes it easy for machine learning algorithms to understand a text and extract information from it.
Word vector representations have been used in many applications such as word synonyms, word analogy, syntactic parsing, and many others.
arXiv Detail & Related papers (2021-05-18T15:28:20Z)
- Fake it Till You Make it: Self-Supervised Semantic Shifts for Monolingual Word Embedding Tasks [58.87961226278285]
We propose a self-supervised approach to model lexical semantic change.
We show that our method can be used for the detection of semantic change with any alignment method.
We illustrate the utility of our techniques using experimental results on three different datasets.
arXiv Detail & Related papers (2021-01-30T18:59:43Z)
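One standard alignment method that fits this setting is orthogonal Procrustes between embeddings trained on two time periods; the sketch below shows that generic alignment-plus-distance recipe, not the paper's own self-supervised approach (the toy matrices are placeholders):

```python
import numpy as np

def procrustes_align(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Orthogonal matrix W minimizing ||A @ W - B||_F (orthogonal Procrustes)."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

# Toy embeddings: the same vocabulary trained on two corpora (e.g., two decades).
rng = np.random.default_rng(1)
emb_old = rng.normal(size=(1000, 50))
emb_new = rng.normal(size=(1000, 50))

W = procrustes_align(emb_old, emb_new)
aligned_old = emb_old @ W

# Words whose aligned vectors moved the most are candidates for semantic change.
cos = np.sum(aligned_old * emb_new, axis=1) / (
    np.linalg.norm(aligned_old, axis=1) * np.linalg.norm(emb_new, axis=1)
)
most_shifted = np.argsort(cos)[:10]  # smallest cosine = largest shift
```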
- Infusing Finetuning with Semantic Dependencies [62.37697048781823]
We show that, unlike syntax, semantics is not brought to the surface by today's pretrained models.
We then use convolutional graph encoders to explicitly incorporate semantic parses into task-specific finetuning.
arXiv Detail & Related papers (2020-12-10T01:27:24Z)
- Unsupervised Distillation of Syntactic Information from Contextualized Word Representations [62.230491683411536]
We tackle the task of unsupervised disentanglement between semantics and structure in neural language representations.
To this end, we automatically generate groups of sentences which are structurally similar but semantically different.
We demonstrate that our transformation clusters vectors in space by structural properties, rather than by lexical semantics.
arXiv Detail & Related papers (2020-10-11T15:13:18Z)
- Word Rotator's Distance [50.67809662270474]
A key principle in assessing textual similarity is to measure the degree of semantic overlap between two texts while taking word alignment into account.
We show that the norm of word vectors is a good proxy for word importance, and their angle is a good proxy for word similarity.
We propose a method that first decouples word vectors into their norm and direction, and then computes alignment-based similarity.
arXiv Detail & Related papers (2020-04-30T17:48:42Z)
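The two lines above pin down the recipe: vector norms act as transport mass (importance) and directions supply a cosine cost. Here is a minimal sketch using the POT optimal-transport library (pip install pot); the toy vectors are placeholders:

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def word_rotators_distance(X: np.ndarray, Y: np.ndarray) -> float:
    """X, Y: (n_words, dim) word vectors of two sentences."""
    nx, ny = np.linalg.norm(X, axis=1), np.linalg.norm(Y, axis=1)
    a, b = nx / nx.sum(), ny / ny.sum()        # mass = normalized norms (importance)
    dx, dy = X / nx[:, None], Y / ny[:, None]  # unit directions
    cost = 1.0 - dx @ dy.T                     # cosine distance between directions
    return ot.emd2(a, b, cost)                 # earth mover's distance

rng = np.random.default_rng(0)
sent1, sent2 = rng.normal(size=(4, 300)), rng.normal(size=(5, 300))
print(word_rotators_distance(sent1, sent2))
```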