Givenness Hierarchy Theoretic Cognitive Status Filtering
- URL: http://arxiv.org/abs/2005.11267v1
- Date: Fri, 22 May 2020 16:44:14 GMT
- Title: Givenness Hierarchy Theoretic Cognitive Status Filtering
- Authors: Poulomi Pal, Lixiao Zhu, Andrea Golden-Lasher, Akshay Swaminathan, Tom
Williams
- Abstract summary: Humans use pronouns due to implicit assumptions about the cognitive statuses their referents have in the minds of their conversational partners.
We present two models of cognitive status: a rule-based Finite State Machine model and a Cognitive Status Filter.
The models are demonstrated and evaluated using a silver-standard English subset of the OFAI Multimodal Task Description Corpus.
- Score: 1.689482889925796
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For language-capable interactive robots to be effectively introduced into
human society, they must be able to naturally and efficiently communicate about
the objects, locations, and people found in human environments. An important
aspect of natural language communication is the use of pronouns. According to
the linguistic theory of the Givenness Hierarchy (GH), humans use pronouns due
to implicit assumptions about the cognitive statuses their referents have in
the minds of their conversational partners. In previous work, Williams et al.
presented the first computational implementation of the full GH for the purpose
of robot language understanding, leveraging a set of rules informed by the GH
literature. However, that approach was designed specifically for language
understanding, oriented around GH-inspired memory structures used to assess what
entities are candidate referents given a particular cognitive status. In
contrast, language generation requires a model in which cognitive status can be
assessed for a given entity. We present and compare two such models of
cognitive status: a rule-based Finite State Machine model directly informed by
the GH literature and a Cognitive Status Filter designed to more flexibly
handle uncertainty. The models are demonstrated and evaluated using a
silver-standard English subset of the OFAI Multimodal Task Description Corpus.
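The abstract contrasts a rule-based Finite State Machine model with a Cognitive Status Filter. As a minimal illustrative sketch (not the authors' actual rule set), a rule-based tracker might assign each entity one GH cognitive status and move it up or down the hierarchy on discourse events; the status names below come from the GH literature, while the transition rules are hypothetical:

```python
from enum import IntEnum

class Status(IntEnum):
    # Givenness Hierarchy cognitive statuses, most to least salient
    IN_FOCUS = 5
    ACTIVATED = 4
    FAMILIAR = 3
    UNIQUELY_IDENTIFIABLE = 2
    TYPE_IDENTIFIABLE = 1

class CognitiveStatusFSM:
    """Toy rule-based tracker: each entity holds exactly one status,
    and discourse events trigger transitions. The rules here are
    illustrative assumptions, not the paper's implementation."""

    def __init__(self):
        self.status = {}  # entity name -> Status

    def mention(self, entity):
        # A mentioned entity becomes IN_FOCUS; whatever was previously
        # in focus decays one step to ACTIVATED.
        for e, s in self.status.items():
            if s == Status.IN_FOCUS and e != entity:
                self.status[e] = Status.ACTIVATED
        self.status[entity] = Status.IN_FOCUS

    def end_utterance(self):
        # At an utterance boundary, ACTIVATED entities decay to FAMILIAR.
        for e, s in self.status.items():
            if s == Status.ACTIVATED:
                self.status[e] = Status.FAMILIAR
```

A filter-based variant would instead keep a probability distribution over these statuses per entity, letting it handle uncertain observations; the FSM's hard transitions are what make it brittle under noisy input.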
Related papers
- Trustworthy Alignment of Retrieval-Augmented Large Language Models via Reinforcement Learning [84.94709351266557]
We focus on the trustworthiness of language models with respect to retrieval augmentation.
We deem that retrieval-augmented language models have the inherent capability of supplying responses according to both contextual and parametric knowledge.
Inspired by aligning language models with human preference, we take the first step towards aligning retrieval-augmented language models to a state in which they respond relying solely on the external evidence.
arXiv Detail & Related papers (2024-10-22T09:25:21Z)
- Standard Language Ideology in AI-Generated Language [1.2815904071470705]
We explore standard language ideology in language generated by large language models (LLMs)
We introduce the concept of standard AI-generated language ideology, the process by which AI-generated language regards Standard American English (SAE) as a linguistic default and reinforces a linguistic bias that SAE is the most "appropriate" language.
arXiv Detail & Related papers (2024-06-13T01:08:40Z)
- Learning with Language-Guided State Abstractions [58.199148890064826]
Generalizable policy learning in high-dimensional observation spaces is facilitated by well-designed state representations.
Our method, LGA, uses a combination of natural language supervision and background knowledge from language models to automatically build state representations tailored to unseen tasks.
Experiments on simulated robotic tasks show that LGA yields state abstractions similar to those designed by humans, but in a fraction of the time.
arXiv Detail & Related papers (2024-02-28T23:57:04Z)
- Language Generation from Brain Recordings [68.97414452707103]
We propose a generative language BCI that utilizes the capacity of a large language model and a semantic brain decoder.
The proposed model can generate coherent language sequences aligned with the semantic content of visual or auditory language stimuli.
Our findings demonstrate the potential and feasibility of employing BCIs in direct language generation.
arXiv Detail & Related papers (2023-11-16T13:37:21Z)
- BabySLM: language-acquisition-friendly benchmark of self-supervised spoken language models [56.93604813379634]
Self-supervised techniques for learning speech representations have been shown to develop linguistic competence from exposure to speech without the need for human labels.
We propose a language-acquisition-friendly benchmark to probe spoken language models at the lexical and syntactic levels.
We highlight two exciting challenges that need to be addressed for further progress: bridging the gap between text and speech and between clean speech and in-the-wild speech.
arXiv Detail & Related papers (2023-06-02T12:54:38Z)
- Language Models as Inductive Reasoners [125.99461874008703]
We propose a new paradigm (task) for inductive reasoning, which is to induce natural language rules from natural language facts.
We create a dataset termed DEER containing 1.2k rule-fact pairs for the task, where rules and facts are written in natural language.
We provide the first and comprehensive analysis of how well pretrained language models can induce natural language rules from natural language facts.
arXiv Detail & Related papers (2022-12-21T11:12:14Z)
- A Linguistic Investigation of Machine Learning based Contradiction Detection Models: An Empirical Analysis and Future Perspectives [0.34998703934432673]
We analyze two Natural Language Inference data sets with respect to their linguistic features.
The goal is to identify those syntactic and semantic properties that are particularly hard to comprehend for a machine learning model.
arXiv Detail & Related papers (2022-10-19T10:06:03Z)
- Transparency Helps Reveal When Language Models Learn Meaning [71.96920839263457]
Our systematic experiments with synthetic data reveal that, with languages where all expressions have context-independent denotations, both autoregressive and masked language models learn to emulate semantic relations between expressions.
Turning to natural language, our experiments with a specific phenomenon -- referential opacity -- add to the growing body of evidence that current language models do not well-represent natural language semantics.
arXiv Detail & Related papers (2022-10-14T02:35:19Z)
- ReferentialGym: A Nomenclature and Framework for Language Emergence & Grounding in (Visual) Referential Games [0.30458514384586394]
Natural languages are powerful tools wielded by human beings to communicate information and co-operate towards common goals.
Computational linguists have been researching the emergence of artificial languages induced by language games.
The AI community has started to investigate language emergence and grounding working towards better human-machine interfaces.
arXiv Detail & Related papers (2020-12-17T10:22:15Z)
- Toward Givenness Hierarchy Theoretic Natural Language Generation [2.4505259300326334]
A key aspect of such communication is the use of anaphoric language.
The linguistic theory of the Givenness Hierarchy (GH) suggests that humans use anaphora based on the cognitive statuses their referents have in the minds of their interlocutors.
In this paper we describe how the GH might need to be used quite differently to facilitate robot anaphora generation.
arXiv Detail & Related papers (2020-07-17T17:51:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.