On the Transferability of Neural Models of Morphological Analogies
- URL: http://arxiv.org/abs/2108.03938v1
- Date: Mon, 9 Aug 2021 11:08:33 GMT
- Title: On the Transferability of Neural Models of Morphological Analogies
- Authors: Safa Alsaidi, Amandine Decker, Puthineath Lay, Esteban Marquer,
Pierre-Alexandre Murena, Miguel Couceiro
- Abstract summary: In this paper, we focus on morphological tasks and we propose a deep learning approach to detect morphological analogies.
We present an empirical study of how our framework transfers across languages, highlighting interesting similarities and differences between these languages.
In view of these results, we also discuss the possibility of building a multilingual morphological model.
- Score: 7.89271130004391
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Analogical proportions are statements expressed in the form "A is to B as C
is to D" and are used for several reasoning and classification tasks in
artificial intelligence and natural language processing (NLP). In this paper,
we focus on morphological tasks and we propose a deep learning approach to
detect morphological analogies. We present an empirical study of how our
framework transfers across languages, highlighting interesting
similarities and differences between these languages. In view of these results,
we also discuss the possibility of building a multilingual morphological model.
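To make the detection task concrete, here is a non-neural point of reference (not the paper's deep learning model): a formal analogy "A is to B as C is to D" over word forms can be checked with a simple suffix-rewriting baseline. The function name and the prefix/suffix rule below are illustrative assumptions, and the rule only covers suffixal morphology:

```python
def longest_common_prefix(a: str, b: str) -> int:
    """Return the length of the longest common prefix of a and b."""
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1
    return i


def holds_analogy(a: str, b: str, c: str, d: str) -> bool:
    """Check whether A : B :: C : D holds under a naive suffix-rewriting model.

    The transformation A -> B is read as: keep the longest common prefix,
    strip A's remaining suffix, append B's remaining suffix. The analogy
    holds if applying the same rewrite to C yields D. This is only a crude
    baseline for suffixal morphology (e.g. English past tense), not a
    general analogy detector.
    """
    k = longest_common_prefix(a, b)
    src_suffix, tgt_suffix = a[k:], b[k:]
    if not c.endswith(src_suffix):
        return False
    stem = c[: len(c) - len(src_suffix)] if src_suffix else c
    return stem + tgt_suffix == d
```

For example, `holds_analogy("walk", "walked", "talk", "talked")` succeeds, while an irregular form such as `("walk", "walked", "sing", "sang")` fails, which is precisely the kind of case that motivates a learned model over hand-written rewrite rules.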
Related papers
- Morphological Typology in BPE Subword Productivity and Language Modeling [0.0]
We focus on languages with synthetic and analytical morphological structures and examine their productivity when tokenized.
Experiments reveal that languages with synthetic features exhibit greater subword regularity and productivity with BPE tokenization.
arXiv Detail & Related papers (2024-10-31T06:13:29Z)
- Linguistically Grounded Analysis of Language Models using Shapley Head Values [2.914115079173979]
We investigate the processing of morphosyntactic phenomena by leveraging a recently proposed method for probing language models via Shapley Head Values (SHVs).
Using the English language BLiMP dataset, we test our approach on two widely used models, BERT and RoBERTa, and compare how linguistic constructions are handled.
Our results show that SHV-based attributions reveal distinct patterns across both models, providing insights into how language models organize and process linguistic information.
arXiv Detail & Related papers (2024-10-17T09:48:08Z)
- Explaining Text Similarity in Transformer Models [52.571158418102584]
Recent advances in explainable AI have made it possible to mitigate limitations by leveraging improved explanations for Transformers.
We use BiLRP, an extension developed for computing second-order explanations in bilinear similarity models, to investigate which feature interactions drive similarity in NLP models.
Our findings contribute to a deeper understanding of different semantic similarity tasks and models, highlighting how novel explainable AI methods enable in-depth analyses and corpus-level insights.
arXiv Detail & Related papers (2024-05-10T17:11:31Z) - ANALOGYKB: Unlocking Analogical Reasoning of Language Models with A Million-scale Knowledge Base [51.777618249271725]
ANALOGYKB is a million-scale analogy knowledge base derived from existing knowledge graphs (KGs).
It identifies two types of analogies from the KGs: 1) analogies of the same relations, which can be directly extracted from the KGs, and 2) analogies of analogous relations, which are identified with a selection and filtering pipeline enabled by large language models (LLMs).
arXiv Detail & Related papers (2023-05-10T09:03:01Z) - Same Neurons, Different Languages: Probing Morphosyntax in Multilingual
Pre-trained Models [84.86942006830772]
We conjecture that multilingual pre-trained models can derive language-universal abstractions about grammar.
We conduct the first large-scale empirical study over 43 languages and 14 morphosyntactic categories with a state-of-the-art neuron-level probe.
arXiv Detail & Related papers (2022-05-04T12:22:31Z) - A Latent-Variable Model for Intrinsic Probing [93.62808331764072]
We propose a novel latent-variable formulation for constructing intrinsic probes.
We find empirical evidence that pre-trained representations develop a cross-lingually entangled notion of morphosyntax.
arXiv Detail & Related papers (2022-01-20T15:01:12Z) - Tackling Morphological Analogies Using Deep Learning -- Extended Version [8.288496996031684]
Analogical proportions are statements of the form "A is to B as C is to D".
We propose an approach using deep learning to detect and solve morphological analogies.
We demonstrate our model's competitive performance on analogy detection and resolution over multiple languages.
arXiv Detail & Related papers (2021-11-09T13:45:23Z) - A Neural Approach for Detecting Morphological Analogies [7.89271130004391]
Analogical proportions are statements of the form "A is to B as C is to D".
We propose a deep learning approach to detect morphological analogies.
arXiv Detail & Related papers (2021-08-09T11:21:55Z) - A Comparative Study of Lexical Substitution Approaches based on Neural
Language Models [117.96628873753123]
We present a large-scale comparative study of popular neural language and masked language models.
We show that already competitive results achieved by SOTA LMs/MLMs can be further improved if information about the target word is injected properly.
arXiv Detail & Related papers (2020-05-29T18:43:22Z) - Evaluating Transformer-Based Multilingual Text Classification [55.53547556060537]
We argue that NLP tools perform unequally across languages with different syntactic and morphological structures.
We calculate word order and morphological similarity indices to aid our empirical study.
arXiv Detail & Related papers (2020-04-29T03:34:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.