Anaphoric Structure Emerges Between Neural Networks
- URL: http://arxiv.org/abs/2308.07984v1
- Date: Tue, 15 Aug 2023 18:34:26 GMT
- Title: Anaphoric Structure Emerges Between Neural Networks
- Authors: Nicholas Edwards, Hannah Rohde, and Henry Conklin
- Abstract summary: Pragmatics is core to natural language, enabling speakers to communicate efficiently with structures like ellipsis and anaphora.
Despite its potential to introduce ambiguity, anaphora is ubiquitous across human language.
We show that languages with anaphoric structures are learnable by neural networks.
- Score: 3.0518581575184225
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Pragmatics is core to natural language, enabling speakers to communicate efficiently with structures like ellipsis and anaphora that can shorten utterances without loss of meaning. These structures require a listener to interpret an ambiguous form - like a pronoun - and infer the speaker's intended meaning - who that pronoun refers to. Despite its potential to introduce ambiguity, anaphora is ubiquitous across human language. In an effort to better understand the origins of anaphoric structure in natural language, we investigate whether analogous structures can emerge between artificial neural networks trained to solve a communicative task. We show that: first, despite the potential for increased ambiguity, languages with anaphoric structures are learnable by neural models. Second, anaphoric structures emerge between models 'naturally', without the need for additional constraints. Finally, introducing an explicit efficiency pressure on the speaker increases the prevalence of these structures. We conclude that certain pragmatic structures straightforwardly emerge between neural networks without explicit efficiency pressures, but that the competing needs of speakers and listeners condition the degree and nature of their emergence.
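The abstract does not describe the training setup in detail, but the experiment it summarises is a speaker-listener signalling game between neural networks. The sketch below shows, in broad strokes, how such a game with an explicit efficiency pressure on the speaker could be set up; the Gumbel-softmax channel, the toy attribute-value meaning space, the specific architectures, and the 'silent symbol' length penalty are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a speaker-listener signalling game with an efficiency
# pressure on the speaker. All sizes and hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, MSG_LEN, N_ATTRS, N_VALUES = 10, 4, 3, 5   # toy meaning space: 3 attributes x 5 values

class Speaker(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(N_ATTRS * N_VALUES, hidden), nn.ReLU())
        self.out = nn.Linear(hidden, MSG_LEN * VOCAB)

    def forward(self, meaning, tau=1.0):
        logits = self.out(self.enc(meaning)).view(-1, MSG_LEN, VOCAB)
        # Straight-through Gumbel-softmax keeps the channel discrete but differentiable.
        return F.gumbel_softmax(logits, tau=tau, hard=True)

class Listener(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.dec = nn.Sequential(nn.Linear(MSG_LEN * VOCAB, hidden), nn.ReLU(),
                                 nn.Linear(hidden, N_ATTRS * N_VALUES))

    def forward(self, message):
        return self.dec(message.flatten(1)).reshape(-1, N_ATTRS, N_VALUES)

def sample_meanings(batch):
    idx = torch.randint(N_VALUES, (batch, N_ATTRS))          # random attribute values
    onehot = F.one_hot(idx, N_VALUES).float().view(batch, -1)
    return idx, onehot

speaker, listener = Speaker(), Listener()
opt = torch.optim.Adam(list(speaker.parameters()) + list(listener.parameters()), lr=1e-3)
length_cost = 0.01   # efficiency pressure: symbol 0 is treated as "silence" and is free

for step in range(2000):
    idx, onehot = sample_meanings(batch=32)
    message = speaker(onehot)
    guess = listener(message)
    comm_loss = F.cross_entropy(guess.reshape(-1, N_VALUES), idx.reshape(-1))
    # Penalise non-silent symbols so shorter (more elliptical) messages are cheaper.
    eff_loss = length_cost * message[..., 1:].sum(dim=(1, 2)).mean()
    loss = comm_loss + eff_loss
    opt.zero_grad(); loss.backward(); opt.step()
```

Setting length_cost to zero removes the efficiency term, so the two regimes the abstract contrasts (speakers with and without an explicit efficiency pressure) differ only in that one line.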
Related papers
- Audio-Visual Neural Syntax Acquisition [91.14892278795892]
We study phrase structure induction from visually-grounded speech.
We present the Audio-Visual Neural Syntax Learner (AV-NSL) that learns phrase structure by listening to audio and looking at images, without ever being exposed to text.
arXiv Detail & Related papers (2023-10-11T16:54:57Z)
- What makes a language easy to deep-learn? Deep neural networks and humans similarly benefit from compositional structure [5.871583927216651]
A fundamental property of language is its compositional structure, allowing humans to produce forms for new meanings.
For humans, languages with more compositional and transparent structures are typically easier to learn than those with opaque and irregular structures.
This learnability advantage has not yet been shown for deep neural networks, limiting their use as models for human language learning.
arXiv Detail & Related papers (2023-02-23T18:57:34Z)
- Sentences as connection paths: A neural language architecture of sentence structure in the brain [0.0]
The article presents a neural language architecture of sentence structure in the brain.
Words remain 'in-situ', hence they are always content-addressable.
Arbitrary and novel sentences (with novel words) can be created with 'neural blackboards' for words and sentences.
arXiv Detail & Related papers (2022-05-19T13:58:45Z)
- Color Overmodification Emerges from Data-Driven Learning and Pragmatic Reasoning [53.088796874029974]
We show that speakers' referential expressions depart from communicative ideals in ways that help illuminate the nature of pragmatic language use.
By adopting neural networks as learning agents, we show that overmodification is more likely with environmental features that are infrequent or salient.
arXiv Detail & Related papers (2022-05-18T18:42:43Z)
- Emergent Communication for Understanding Human Language Evolution: What's Missing? [1.2891210250935146]
We discuss three important phenomena with respect to the emergence and benefits of compositionality.
We argue that one possible reason for these mismatches is that key cognitive and communicative constraints of humans are not yet integrated.
arXiv Detail & Related papers (2022-04-22T09:21:53Z)
- Neural Abstructions: Abstractions that Support Construction for Grounded Language Learning [69.1137074774244]
Leveraging language interactions effectively requires addressing limitations in the two most common approaches to language grounding.
We introduce the idea of neural abstructions: a set of constraints on the inference procedure of a label-conditioned generative model.
We show that with this method a user population is able to build a semantic modification for an open-ended house task in Minecraft.
arXiv Detail & Related papers (2021-07-20T07:01:15Z)
- Low-Dimensional Structure in the Space of Language Representations is Reflected in Brain Responses [62.197912623223964]
We show a low-dimensional structure where language models and translation models smoothly interpolate between word embeddings, syntactic and semantic tasks, and future word embeddings.
We find that this representation embedding can predict how well each individual feature space maps to human brain responses to natural language stimuli recorded using fMRI.
This suggests that the embedding captures some part of the brain's natural language representation structure.
arXiv Detail & Related papers (2021-06-09T22:59:12Z)
- Compositional Processing Emerges in Neural Networks Solving Math Problems [100.80518350845668]
Recent progress in artificial neural networks has shown that when large models are trained on enough linguistic data, grammatical structure emerges in their representations.
We extend this work to the domain of mathematical reasoning, where it is possible to formulate precise hypotheses about how meanings should be composed.
Our work shows that neural networks are not only able to infer something about the structured relationships implicit in their training data, but can also deploy this knowledge to guide the composition of individual meanings into composite wholes.
arXiv Detail & Related papers (2021-05-19T07:24:42Z)
- Semantics-Aware Inferential Network for Natural Language Understanding [79.70497178043368]
We propose a Semantics-Aware Inferential Network (SAIN) to meet such a motivation.
Taking explicit contextualized semantics as a complementary input, the inferential module of SAIN enables a series of reasoning steps over semantic clues.
Our model achieves significant improvement on 11 tasks including machine reading comprehension and natural language inference.
arXiv Detail & Related papers (2020-04-28T07:24:43Z)
- Compositional Languages Emerge in a Neural Iterated Learning Model [27.495624644227888]
Compositionality enables natural language to represent complex concepts via a structured combination of simpler ones.
We propose an effective neural iterated learning (NIL) algorithm that, when applied to interacting neural agents, facilitates the emergence of a more structured type of language (a simplified sketch of the generational loop follows this list).
arXiv Detail & Related papers (2020-02-04T15:19:09Z)
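As a side note on the neural iterated learning (NIL) entry above, the generational structure it relies on can be summarised in a few lines. The toy loop below is a heavily simplified, self-contained sketch: the linear agents, the one-hot meaning task, and all hyperparameters are invented for illustration, and the full NIL algorithm additionally interleaves an interaction (game-playing) phase that is omitted here.

```python
# Toy iterated-learning loop: each generation, a fresh agent learns the
# previous agent's meaning->message mapping and then becomes the teacher.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_MEANINGS, VOCAB, MSG_LEN = 25, 8, 3

def new_agent():
    return nn.Linear(N_MEANINGS, MSG_LEN * VOCAB)

def messages_of(agent, meanings):
    # Deterministic "production": argmax symbol at each message position.
    logits = agent(meanings).view(-1, MSG_LEN, VOCAB)
    return logits.argmax(-1)

meanings = torch.eye(N_MEANINGS)                 # one-hot meanings
teacher = new_agent()                            # generation 0: a random language

for generation in range(5):
    learner = new_agent()
    opt = torch.optim.Adam(learner.parameters(), lr=1e-2)
    targets = messages_of(teacher, meanings)     # learning phase: imitate the teacher
    for _ in range(200):
        logits = learner(meanings).view(-1, MSG_LEN, VOCAB)
        loss = F.cross_entropy(logits.reshape(-1, VOCAB), targets.reshape(-1))
        opt.zero_grad(); loss.backward(); opt.step()
    # (The full NIL algorithm inserts an interaction phase with a listener here.)
    teacher = learner                            # the learner seeds the next generation
```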
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.