Color Overmodification Emerges from Data-Driven Learning and Pragmatic
Reasoning
- URL: http://arxiv.org/abs/2205.09172v1
- Date: Wed, 18 May 2022 18:42:43 GMT
- Title: Color Overmodification Emerges from Data-Driven Learning and Pragmatic
Reasoning
- Authors: Fei Fang, Kunal Sinha, Noah D. Goodman, Christopher Potts, Elisa
Kreiss
- Abstract summary: We show that speakers' referential expressions depart from communicative ideals in ways that help illuminate the nature of pragmatic language use.
By adopting neural networks as learning agents, we show that overmodification is more likely with environmental features that are infrequent or salient.
- Score: 53.088796874029974
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Speakers' referential expressions often depart from communicative ideals in
ways that help illuminate the nature of pragmatic language use. Patterns of
overmodification, in which a speaker uses a modifier that is redundant given
their communicative goal, have proven especially informative in this regard. It
seems likely that these patterns are shaped by the environment a speaker is
exposed to in complex ways. Unfortunately, systematically manipulating these
factors during human language acquisition is impossible. In this paper, we
propose to address this limitation by adopting neural networks (NN) as learning
agents. By systematically varying the environments in which these agents are
trained, while keeping the NN architecture constant, we show that
overmodification is more likely with environmental features that are infrequent
or salient. We show that these findings emerge naturally in the context of a
probabilistic model of pragmatic communication.
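The probabilistic model referenced here is in the Rational Speech Acts (RSA) family. As an illustration only, the following is a minimal continuous-semantics RSA sketch, not the authors' exact model: the scene (a yellow banana vs. a blue hairdryer), the soft word-meaning values, and the `alpha`/`cost_per_word` parameters are all invented assumptions. It shows how a pragmatic speaker can assign substantial probability to a redundant color modifier:

```python
import math

# Hypothetical toy scene: a yellow banana and a blue hairdryer.
# Soft ("continuous") word meanings: how strongly each word applies
# to each referent. All values are illustrative assumptions.
MEANING = {
    "banana": {"banana": 0.99, "hairdryer": 0.01},
    "yellow": {"banana": 0.90, "hairdryer": 0.01},
}
REFERENTS = ["banana", "hairdryer"]

def literal_listener(utterance, prior=None):
    """L0(r | u) proportional to prior(r) * product over words of meaning(w, r)."""
    prior = prior or {r: 1.0 / len(REFERENTS) for r in REFERENTS}
    scores = {}
    for r in REFERENTS:
        score = prior[r]
        for word in utterance.split():
            score *= MEANING[word][r]
        scores[r] = score
    total = sum(scores.values())
    return {r: s / total for r, s in scores.items()}

def pragmatic_speaker(target, utterances, alpha=5.0, cost_per_word=0.1):
    """S1(u | r) proportional to exp(alpha * (log L0(r | u) - cost(u)))."""
    utils = {
        u: alpha * (math.log(literal_listener(u)[target])
                    - cost_per_word * len(u.split()))
        for u in utterances
    }
    z = sum(math.exp(v) for v in utils.values())
    return {u: math.exp(v) / z for u, v in utils.items()}

probs = pragmatic_speaker("banana", ["banana", "yellow banana"])
print(probs)
```

Under these assumed values, the bare noun already identifies the target, yet because the soft semantics leaves it slightly noisy, adding "yellow" raises the literal listener's confidence; the speaker then trades that informativity gain against utterance cost and produces the redundant modifier a sizable fraction of the time.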
Related papers
- Addressing the Blind Spots in Spoken Language Processing [4.626189039960495]
We argue that understanding human communication requires a more holistic approach that goes beyond textual or spoken words to include non-verbal elements.
We propose the development of universal automatic gesture segmentation and transcription models to transcribe these non-verbal cues into textual form.
arXiv Detail & Related papers (2023-09-06T10:29:25Z)
- Transparency Helps Reveal When Language Models Learn Meaning [71.96920839263457]
Our systematic experiments with synthetic data reveal that, with languages where all expressions have context-independent denotations, both autoregressive and masked language models learn to emulate semantic relations between expressions.
Turning to natural language, our experiments with a specific phenomenon -- referential opacity -- add to the growing body of evidence that current language models do not well-represent natural language semantics.
arXiv Detail & Related papers (2022-10-14T02:35:19Z)
- Neural Abstructions: Abstractions that Support Construction for Grounded Language Learning [69.1137074774244]
Leveraging language interactions effectively requires addressing limitations in the two most common approaches to language grounding.
We introduce the idea of neural abstructions: a set of constraints on the inference procedure of a label-conditioned generative model.
We show that with this method a user population is able to build a semantic modification for an open-ended house task in Minecraft.
arXiv Detail & Related papers (2021-07-20T07:01:15Z)
- Preliminary study on using vector quantization latent spaces for TTS/VC systems with consistent performance [55.10864476206503]
We investigate the use of quantized vectors to model the latent linguistic embedding.
By enforcing different policies over the latent spaces in the training, we are able to obtain a latent linguistic embedding.
Our experiments show that the voice cloning system built with vector quantization exhibits only a small degradation in perceptual evaluations.
arXiv Detail & Related papers (2021-06-25T07:51:35Z)
- Evaluating Models of Robust Word Recognition with Serial Reproduction [8.17947290421835]
We compare several broad-coverage probabilistic generative language models in their ability to capture human linguistic expectations.
We find that those models that make use of abstract representations of preceding linguistic context best predict the changes made by people in the course of serial reproduction.
arXiv Detail & Related papers (2021-01-24T20:16:12Z)
- Infusing Finetuning with Semantic Dependencies [62.37697048781823]
We show that, unlike syntax, semantics is not brought to the surface by today's pretrained models.
We then use convolutional graph encoders to explicitly incorporate semantic parses into task-specific finetuning.
arXiv Detail & Related papers (2020-12-10T01:27:24Z)
- Local and non-local dependency learning and emergence of rule-like representations in speech data by Deep Convolutional Generative Adversarial Networks [0.0]
This paper argues that training GANs on local and non-local dependencies in speech data offers insights into how deep neural networks discretize continuous data.
arXiv Detail & Related papers (2020-09-27T00:02:34Z)
- Generative Adversarial Phonology: Modeling unsupervised phonetic and phonological learning with neural networks [0.0]
Training deep neural networks on well-understood dependencies in speech data can provide new insights into how they learn internal representations.
This paper argues that acquisition of speech can be modeled as a dependency between random space and generated speech data in the Generative Adversarial Network architecture.
We propose a methodology to uncover the network's internal representations that correspond to phonetic and phonological properties.
arXiv Detail & Related papers (2020-06-06T20:31:23Z)
- Semantics-Aware Inferential Network for Natural Language Understanding [79.70497178043368]
We propose a Semantics-Aware Inferential Network (SAIN) to meet such a motivation.
Taking explicit contextualized semantics as a complementary input, the inferential module of SAIN enables a series of reasoning steps over semantic clues.
Our model achieves significant improvement on 11 tasks including machine reading comprehension and natural language inference.
arXiv Detail & Related papers (2020-04-28T07:24:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.