Differentiable Generative Phonology
- URL: http://arxiv.org/abs/2102.05717v2
- Date: Fri, 12 Feb 2021 03:35:57 GMT
- Title: Differentiable Generative Phonology
- Authors: Shijie Wu and Edoardo Maria Ponti and Ryan Cotterell
- Abstract summary: We implement the phonological generative system as a neural model differentiable end-to-end.
Unlike traditional phonology, in our model, UFs are continuous vectors in $\mathbb{R}^d$, rather than discrete strings.
We evaluate the ability of each mode to predict attested phonological strings on 2 datasets covering 5 and 28 languages.
- Score: 47.709731661281
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of generative phonology, as formulated by Chomsky and Halle (1968),
is to specify a formal system that explains the set of attested phonological
strings in a language. Traditionally, a collection of rules (or constraints, in
the case of optimality theory) and underlying forms (UF) are posited to work in
tandem to generate phonological strings. However, the degree of abstraction of
UFs with respect to their concrete realizations is contentious. As the main
contribution of our work, we implement the phonological generative system as a
neural model differentiable end-to-end, rather than as a set of rules or
constraints. Contrary to traditional phonology, in our model, UFs are
continuous vectors in $\mathbb{R}^d$, rather than discrete strings. As a
consequence, UFs are discovered automatically rather than posited by linguists,
and the model can scale to the size of a realistic vocabulary. Moreover, we
compare several modes of the generative process, contemplating: i) the presence
or absence of an underlying representation in between morphemes and surface
forms (SFs); and ii) the conditional dependence or independence of UFs with
respect to SFs. We evaluate the ability of each mode to predict attested
phonological strings on 2 datasets covering 5 and 28 languages, respectively.
The results corroborate two tenets of generative phonology, viz. the necessity
for UFs and their independence from SFs. In general, our neural model of
generative phonology learns both UFs and SFs automatically and on a large scale.
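As a rough illustration of this setup, the sketch below maps each morpheme to a learnable UF vector in $\mathbb{R}^d$ and decodes it into a surface phoneme string, so the UFs are learned end-to-end from attested SFs alone. It is a minimal sketch only: the class names, hyperparameters, and the single-morpheme-per-form simplification are our own assumptions, not the authors' architecture.

```python
# Minimal sketch (not the paper's exact model): continuous underlying forms
# (UFs) in R^d are learnable embeddings, and a GRU decoder realizes them as
# surface forms (SFs) one phoneme at a time. All names/sizes are illustrative.
import torch
import torch.nn as nn

class DifferentiableGenerativePhonology(nn.Module):
    def __init__(self, num_morphemes, num_phonemes, d=64, hidden=128):
        super().__init__()
        # One learnable continuous UF vector in R^d per morpheme.
        self.uf = nn.Embedding(num_morphemes, d)
        # Decoder that emits the SF phoneme by phoneme, conditioned on the UF.
        self.phoneme_emb = nn.Embedding(num_phonemes, hidden)
        self.rnn = nn.GRU(hidden + d, hidden, batch_first=True)
        self.out = nn.Linear(hidden, num_phonemes)

    def forward(self, morpheme_ids, sf_inputs):
        # morpheme_ids: (batch,) morpheme indices
        # sf_inputs: (batch, seq) previously emitted / gold phonemes (shifted right)
        u = self.uf(morpheme_ids).unsqueeze(1).expand(-1, sf_inputs.size(1), -1)
        x = torch.cat([self.phoneme_emb(sf_inputs), u], dim=-1)
        h, _ = self.rnn(x)
        return self.out(h)  # (batch, seq, num_phonemes) logits over next phoneme

# End-to-end training with cross-entropy over attested surface strings is what
# lets the UF vectors be discovered automatically rather than posited by hand.
model = DifferentiableGenerativePhonology(num_morphemes=1000, num_phonemes=50)
logits = model(torch.tensor([3, 7]), torch.zeros(2, 5, dtype=torch.long))
loss = nn.functional.cross_entropy(logits.reshape(-1, 50),
                                   torch.zeros(2 * 5, dtype=torch.long))
loss.backward()
```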
Related papers
- PhonologyBench: Evaluating Phonological Skills of Large Language Models [57.80997670335227]
Phonology, the study of speech's structure and pronunciation rules, is a critical yet often overlooked component in Large Language Model (LLM) research.
We present PhonologyBench, a novel benchmark consisting of three diagnostic tasks designed to explicitly test the phonological skills of LLMs.
We observe a significant gap of 17% and 45% on Rhyme Word Generation and Syllable counting, respectively, when compared to humans.
arXiv Detail & Related papers (2024-04-03T04:53:14Z)
- Generative Spoken Language Model based on continuous word-sized audio tokens [52.081868603603844]
We introduce a Generative Spoken Language Model based on word-size continuous-valued audio embeddings.
The resulting model is the first generative language model based on word-size continuous embeddings.
arXiv Detail & Related papers (2023-10-08T16:46:14Z)
- An Information-Theoretic Analysis of Self-supervised Discrete Representations of Speech [17.07957283733822]
We develop an information-theoretic framework whereby we represent each phonetic category as a distribution over discrete units.
Our study demonstrates that the entropy of phonetic distributions reflects the variability of the underlying speech sounds.
While our study confirms the lack of direct, one-to-one correspondence, we find an intriguing, indirect relationship between phonetic categories and discrete units.
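A toy sketch of the quantity this summary refers to is below; the unit inventories and counts are invented for illustration, whereas in the paper the distributions would come from aligning phone labels with self-supervised discrete units.

```python
# Toy illustration: Shannon entropy of a phonetic category's distribution over
# discrete units. Counts are made up; low entropy = the category maps to few
# units (low variability), high entropy = it spreads over many units.
import numpy as np

def category_entropy(counts):
    """Entropy (bits) of a distribution given raw co-occurrence counts."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# A hypothetical stop that maps almost always to one unit vs. a vowel spread
# over many units.
print(category_entropy([95, 3, 2]))                      # ~0.33 bits
print(category_entropy([20, 18, 15, 12, 10, 9, 8, 8]))   # ~2.9 bits
```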
arXiv Detail & Related papers (2023-06-04T16:52:11Z)
- Exploring How Generative Adversarial Networks Learn Phonological Representations [6.119392435448723]
Generative Adversarial Networks (GANs) learn representations of phonological phenomena.
We analyze how GANs encode contrastive and non-contrastive nasality in French and English vowels.
arXiv Detail & Related papers (2023-05-21T16:37:21Z)
- Evolution and trade-off dynamics of functional load [0.0]
We apply phylogenetic methods to examine the diachronic evolution of FL across 90 languages of the Pama-Nyungan (PN) family of Australia.
We find a high degree of phylogenetic signal in FL. Though phylogenetic signal has been reported for phonological structures, such as phonotactics, its detection in measures of phonological function is novel.
arXiv Detail & Related papers (2021-12-22T20:57:50Z)
- Do Acoustic Word Embeddings Capture Phonological Similarity? An Empirical Study [12.210797811981173]
In this paper, we ask: does the distance in the acoustic embedding space correlate with phonological dissimilarity?
We train AWE models in controlled settings for two languages (German and Czech) and evaluate the embeddings on two tasks: word discrimination and phonological similarity.
Our experiments show that (1) the distance in the embedding space in the best cases only moderately correlates with phonological distance, and (2) improving the performance on the word discrimination task does not necessarily yield models that better reflect word phonological similarity.
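A minimal sketch of this kind of evaluation is given below, assuming cosine distance in the embedding space and plain phoneme edit distance as the phonological measure; the lexicon and embeddings are random placeholders, not trained AWE models.

```python
# Illustrative sketch: correlate distances in an acoustic word embedding (AWE)
# space with a phonological dissimilarity measure (edit distance over phoneme
# strings as a stand-in). Embeddings and word pairs are placeholders.
import numpy as np
from scipy.spatial.distance import cosine
from scipy.stats import spearmanr

def edit_distance(a, b):
    """Levenshtein distance between two phoneme sequences."""
    d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    d[:, 0] = np.arange(len(a) + 1)
    d[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1,
                          d[i - 1, j - 1] + (a[i - 1] != b[j - 1]))
    return d[len(a), len(b)]

rng = np.random.default_rng(0)
lexicon = {"katze": ["k", "a", "t", "s", "@"], "tatze": ["t", "a", "t", "s", "@"],
           "hund": ["h", "U", "n", "t"], "mond": ["m", "o:", "n", "t"]}
awe = {w: rng.normal(size=32) for w in lexicon}  # placeholder AWE vectors

pairs = [(a, b) for i, a in enumerate(lexicon) for b in list(lexicon)[i + 1:]]
emb_dist = [cosine(awe[a], awe[b]) for a, b in pairs]
pho_dist = [edit_distance(lexicon[a], lexicon[b]) for a, b in pairs]
rho, p = spearmanr(emb_dist, pho_dist)
print(f"Spearman rho = {rho:.2f} (p = {p:.2f})")
```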
arXiv Detail & Related papers (2021-06-16T10:47:56Z)
- Decomposing lexical and compositional syntax and semantics with deep language models [82.81964713263483]
The activations of language transformers like GPT2 have been shown to linearly map onto brain activity during speech comprehension.
Here, we propose a taxonomy to factorize the high-dimensional activations of language models into four classes: lexical, compositional, syntactic, and semantic representations.
The results highlight two findings. First, compositional representations recruit a more widespread cortical network than lexical ones, and encompass the bilateral temporal, parietal and prefrontal cortices.
arXiv Detail & Related papers (2021-03-02T10:24:05Z)
- Infusing Finetuning with Semantic Dependencies [62.37697048781823]
We show that, unlike syntax, semantics is not brought to the surface by today's pretrained models.
We then use convolutional graph encoders to explicitly incorporate semantic parses into task-specific finetuning.
arXiv Detail & Related papers (2020-12-10T01:27:24Z)
- APo-VAE: Text Generation in Hyperbolic Space [116.11974607497986]
In this paper, we investigate text generation in a hyperbolic latent space to learn continuous hierarchical representations.
An Adversarial Poincaré Variational Autoencoder (APo-VAE) is presented, where both the prior and variational posterior of latent variables are defined over a Poincaré ball via wrapped normal distributions.
Experiments in language modeling and dialog-response generation tasks demonstrate the effectiveness of the proposed APo-VAE model.
arXiv Detail & Related papers (2020-04-30T19:05:41Z)
- Do Neural Language Models Show Preferences for Syntactic Formalisms? [14.388237635684737]
We study the extent to which the semblance of syntactic structure captured by language models adheres to a surface-syntactic or deep syntactic style of analysis.
We apply a probe for extracting directed dependency trees to BERT and ELMo models trained on 13 different languages.
We find that both models exhibit a preference for UD over SUD, with interesting variations across languages and layers.
arXiv Detail & Related papers (2020-04-29T11:37:53Z)