Synonymy = Translational Equivalence
- URL: http://arxiv.org/abs/2004.13886v2
- Date: Fri, 11 Dec 2020 22:58:05 GMT
- Title: Synonymy = Translational Equivalence
- Authors: Bradley Hauer, Grzegorz Kondrak
- Abstract summary: Synonymy and translational equivalence are the relations of sameness of meaning within and across languages.
This paper proposes a unifying treatment of these two relations, which is validated by experiments on existing resources.
- Score: 6.198307677263333
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Synonymy and translational equivalence are the relations of sameness of
meaning within and across languages. As the principal relations in wordnets and
multi-wordnets, they are vital to computational lexical semantics, yet the
field suffers from the absence of a common formal framework to define their
properties and relationship. This paper proposes a unifying treatment of these
two relations, which is validated by experiments on existing resources. In our
view, synonymy and translational equivalence are simply different types of
semantic identity. The theory establishes a solid foundation for critically
re-evaluating prior work in cross-lingual semantics, and facilitating the
creation, verification, and amelioration of lexical resources.
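The central claim, that synonymy and translational equivalence are both forms of semantic identity, can be pictured in wordnet terms as two words (within or across languages) sharing a synset. The snippet below is a minimal sketch of that reading, not the authors' implementation: it uses NLTK's WordNet together with the Open Multilingual Wordnet (assuming the 'wordnet' and 'omw-1.4' data packages are installed), and the function names and example words are our own.

```python
# Minimal sketch (not the paper's code): both relations modelled as
# "the two words share at least one synset" in WordNet / the Open
# Multilingual Wordnet. Requires nltk.download('wordnet') and
# nltk.download('omw-1.4').
from nltk.corpus import wordnet as wn

def synonymous(word1, word2, lang="eng"):
    """Within-language semantic identity: the words share a synset."""
    return bool(set(wn.synsets(word1, lang=lang)) &
                set(wn.synsets(word2, lang=lang)))

def translationally_equivalent(word1, lang1, word2, lang2):
    """Cross-language semantic identity: the words share a synset."""
    return bool(set(wn.synsets(word1, lang=lang1)) &
                set(wn.synsets(word2, lang=lang2)))

print(synonymous("car", "automobile"))                           # True
print(translationally_equivalent("dog", "eng", "chien", "fra"))  # True, given OMW coverage for French
```

Under this reading, the same test (a non-empty synset intersection) implements both relations; only the languages of the two words differ.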
Related papers
- Using Synchronic Definitions and Semantic Relations to Classify Semantic Change Types [1.3436368800886478]
We present a model that leverages information from both synchronic lexical relations and definitions of word meanings.
Specifically, we use synset definitions and hierarchy information from WordNet, and test the model on a digitized version of Blank's (1997) dataset of semantic change types.
arXiv Detail & Related papers (2024-06-05T16:52:21Z) - How well do distributed representations convey contextual lexical semantics: a Thesis Proposal [3.3585951129432323]
In this thesis, we examine the efficacy of distributed representations from modern neural networks in encoding lexical meaning.
We identify four sources of ambiguity based on the relatedness and similarity of meanings influenced by context.
We then aim to evaluate these sources by collecting or constructing multilingual datasets, leveraging various language models, and employing linguistic analysis tools.
arXiv Detail & Related papers (2024-06-02T14:08:51Z) - Domain Embeddings for Generating Complex Descriptions of Concepts in Italian Language [65.268245109828]
We propose a Distributional Semantic resource enriched with linguistic and lexical information extracted from electronic dictionaries.
The resource comprises 21 domain-specific matrices, one comprehensive matrix, and a Graphical User Interface.
Our model facilitates the generation of reasoned semantic descriptions of concepts by selecting matrices directly associated with concrete conceptual knowledge.
arXiv Detail & Related papers (2024-02-26T15:04:35Z) - Regularized Conventions: Equilibrium Computation as a Model of Pragmatic Reasoning [72.21876989058858]
We present a model of pragmatic language understanding, where utterances are produced and understood by searching for regularized equilibria of signaling games.
In this model, speakers and listeners search for contextually appropriate utterance-meaning mappings that are both close to game-theoretically optimal conventions and close to a shared "default" semantics.
arXiv Detail & Related papers (2023-11-16T09:42:36Z) - Agentività e telicità in GilBERTo: implicazioni cognitive [77.71680953280436]
The goal of this study is to investigate whether a Transformer-based neural language model infers lexical semantics.
The semantic properties considered are telicity (also combined with definiteness) and agentivity.
arXiv Detail & Related papers (2023-07-06T10:52:22Z) - Embracing Ambiguity: Improving Similarity-oriented Tasks with Contextual Synonym Knowledge [30.010315144903885]
Contextual synonym knowledge is crucial for similarity-oriented tasks.
Most Pre-trained Language Models (PLMs) lack synonym knowledge due to inherent limitations of their pre-training objectives.
We propose PICSO, a flexible framework that supports the injection of contextual synonym knowledge from multiple domains into PLMs.
arXiv Detail & Related papers (2022-11-20T15:25:19Z) - Patterns of Lexical Ambiguity in Contextualised Language Models [9.747449805791092]
We introduce an extended, human-annotated dataset of graded word sense similarity and co-predication.
Both types of human judgements indicate that the similarity of polysemic interpretations lies on a continuum between identity of meaning and homonymy.
Our dataset appears to capture a substantial part of the complexity of lexical ambiguity, and can provide a realistic test bed for contextualised embeddings.
arXiv Detail & Related papers (2021-09-27T13:11:44Z) - Rethinking Crowd Sourcing for Semantic Similarity [0.13999481573773073]
This paper investigates the ambiguities inherent in crowd-sourced semantic labeling.
It shows that annotators who treat semantic similarity as a binary category play the most important role in the labeling.
arXiv Detail & Related papers (2021-09-24T13:57:30Z) - Speakers Fill Lexical Semantic Gaps with Context [65.08205006886591]
We operationalise the lexical ambiguity of a word as the entropy of meanings it can take.
We find significant correlations between our estimate of ambiguity and the number of synonyms a word has in WordNet.
This suggests that, in the presence of ambiguity, speakers compensate by making contexts more informative (a toy version of this entropy estimate is sketched after this list).
arXiv Detail & Related papers (2020-10-05T17:19:10Z) - Where New Words Are Born: Distributional Semantic Analysis of Neologisms and Their Semantic Neighborhoods [51.34667808471513]
We investigate the importance of two factors, semantic sparsity and frequency growth rates of semantic neighbors, formalized in the distributional semantics paradigm.
We show that both factors are predictive of word emergence, although we find more support for the latter hypothesis.
arXiv Detail & Related papers (2020-01-21T19:09:49Z) - Multiplex Word Embeddings for Selectional Preference Acquisition [70.33531759861111]
We propose a multiplex word embedding model, which can be easily extended according to various relations among words.
Our model can effectively distinguish words with respect to different relations without introducing unnecessary sparseness.
arXiv Detail & Related papers (2020-01-09T04:47:14Z)
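As a concrete illustration of the entropy-based operationalisation of lexical ambiguity described in the "Speakers Fill Lexical Semantic Gaps with Context" entry above, here is a toy sketch (our own, not the authors' code). It approximates a word's sense distribution with WordNet's SemCor-derived lemma counts (with add-one smoothing), takes the entropy of that distribution as an ambiguity estimate, and correlates it with the word's WordNet synonym count; the word list, the smoothing, and the noun-only restriction are illustrative assumptions.

```python
# Toy sketch (not the paper's estimator): ambiguity as entropy of a crude
# WordNet sense distribution, correlated with WordNet synonym counts.
import math
from nltk.corpus import wordnet as wn
from scipy.stats import spearmanr

def sense_entropy(word, pos="n"):
    """Entropy of the word's sense distribution, estimated from WordNet's
    SemCor-derived lemma counts with add-one smoothing."""
    counts = []
    for synset in wn.synsets(word, pos=pos):
        for lemma in synset.lemmas():
            if lemma.name().lower() == word.lower():
                counts.append(lemma.count() + 1)  # add-one: count() is often 0
    total = sum(counts)
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log2(c / total) for c in counts)

def synonym_count(word, pos="n"):
    """Number of distinct WordNet synonyms across the word's synsets."""
    lemmas = {l.name() for s in wn.synsets(word, pos=pos) for l in s.lemmas()}
    lemmas.discard(word)
    return len(lemmas)

words = ["bank", "table", "line", "light", "car", "tree", "run", "bass"]
ambiguity = [sense_entropy(w) for w in words]
synonyms = [synonym_count(w) for w in words]
print(spearmanr(ambiguity, synonyms))  # the paper reports a significant correlation
```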