A Model of Anaphoric Ambiguities using Sheaf Theoretic Quantum-like
Contextuality and BERT
- URL: http://arxiv.org/abs/2208.05720v1
- Date: Thu, 11 Aug 2022 09:31:15 GMT
- Authors: Kin Ian Lo (University College London, London, UK), Mehrnoosh
Sadrzadeh (University College London, London, UK), Shane Mansfield (Quandela,
Paris, France)
- Abstract summary: We construct a schema for anaphoric ambiguities that exhibits quantum-like contextuality.
We then take advantage of the neural word embedding engine BERT to instantiate the schema to natural language examples.
Our hope is that these examples will pave the way for future research and for finding ways to extend applications of quantum computing to natural language processing.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ambiguities of natural language do not preclude us from using it,
and context helps in getting ideas across. They nonetheless pose a key
challenge to the development of competent machines that understand natural
language and use it as humans do. Contextuality is an unparalleled phenomenon
in quantum mechanics, where different mathematical formalisms have been put
forward to understand and reason about it. In this paper, we construct a
schema for anaphoric ambiguities that exhibits quantum-like contextuality. We
use a recently developed criterion of sheaf-theoretic contextuality that is
applicable to signalling models. We then take advantage of the neural word
embedding engine BERT to instantiate the schema to natural language examples
and extract probability distributions for the instances. As a result, we
discovered plenty of sheaf-contextual examples in the natural language corpora
BERT utilises. Our hope is that these examples will pave the way for future
research and for finding ways to extend applications of quantum computing to
natural language processing.
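The pipeline the abstract describes — assemble an empirical probability table per context, then test it for quantum-like contextuality — can be illustrated with a toy example. The sketch below is a simplifying assumption, not the paper's method: it uses hard-coded PR-box-like probabilities in place of BERT-derived ones, and a CHSH inequality in place of the sheaf-theoretic criterion for signalling models. All names and numbers are illustrative.

```python
# Toy 2x2 "Bell-type" schema: contexts pair one ambiguous anaphor from
# {a0, a1} with one from {b0, b1}; outcomes 0/1 are the chosen referents.
# The probabilities below form a PR-box-like (maximally contextual) model.
model = {
    ("a0", "b0"): {(0, 0): 0.5, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 0.5},
    ("a0", "b1"): {(0, 0): 0.5, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 0.5},
    ("a1", "b0"): {(0, 0): 0.5, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 0.5},
    ("a1", "b1"): {(0, 0): 0.0, (0, 1): 0.5, (1, 0): 0.5, (1, 1): 0.0},
}

def marginal(dist, position):
    """Marginal distribution of one measurement within a context."""
    m = {0: 0.0, 1: 0.0}
    for (x, y), p in dist.items():
        m[(x, y)[position]] += p
    return m

def is_no_signalling(model, tol=1e-9):
    """Each measurement's marginal must agree across the contexts containing it."""
    for pos, labels in [(0, ("a0", "a1")), (1, ("b0", "b1"))]:
        for label in labels:
            margs = [marginal(d, pos) for c, d in model.items() if c[pos] == label]
            if any(abs(m[0] - margs[0][0]) > tol for m in margs):
                return False
    return True

def correlator(dist):
    """Expectation <XY> with outcomes mapped 0 -> +1, 1 -> -1."""
    return sum(p * (1 - 2 * x) * (1 - 2 * y) for (x, y), p in dist.items())

# CHSH quantity: any non-contextual (classical) model satisfies |S| <= 2.
S = (correlator(model[("a0", "b0")]) + correlator(model[("a0", "b1")])
     + correlator(model[("a1", "b0")]) - correlator(model[("a1", "b1")]))

print("no-signalling:", is_no_signalling(model))  # True
print("CHSH value S =", S)                        # 4.0 > 2: contextual
```

In the paper's actual setting, each table would instead be filled with masked-token probabilities extracted from BERT, and — since such empirical models typically violate no-signalling — contextuality would be judged by the sheaf-theoretic criterion for signalling models rather than by CHSH.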
Related papers
- A Quantum-Inspired Analysis of Human Disambiguation Processes (2024-08-14)
  In this thesis, we apply formalisms arising from foundational quantum mechanics to study ambiguities arising in linguistics. Results were subsequently used to predict human behaviour and outperformed current NLP methods.
- Quantum Natural Language Processing (2024-03-28)
  Language processing is at the heart of current developments in artificial intelligence. This paper surveys the state of this area, showing how NLP-related techniques have been used in quantum language processing.
- Large Language Models for Scientific Synthesis, Inference and Explanation (2023-10-12)
  We show how large language models can perform scientific synthesis, inference, and explanation. We show that the large language model can augment this "knowledge" by synthesizing from the scientific literature. This approach has the further advantage that the large language model can explain the machine learning system's predictions.
- From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought (2023-06-22)
  We propose rational meaning construction, a computational framework for language-informed thinking. We frame linguistic meaning as a context-sensitive mapping from natural language into a probabilistic language of thought. We show that LLMs can generate context-sensitive translations that capture pragmatically appropriate linguistic meanings. We extend our framework to integrate cognitively motivated symbolic modules.
- Transparency Helps Reveal When Language Models Learn Meaning (2022-10-14)
  Our systematic experiments with synthetic data reveal that, with languages where all expressions have context-independent denotations, both autoregressive and masked language models learn to emulate semantic relations between expressions. Turning to natural language, our experiments with a specific phenomenon -- referential opacity -- add to the growing body of evidence that current language models do not represent natural language semantics well.
- A Quantum Natural Language Processing Approach to Pronoun Resolution (2022-08-10)
  We use the Lambek Calculus to model and reason about discourse relations such as anaphora and ellipsis. A semantics for this logic is obtained by using truncated Fock spaces, developed in our previous work. We extend the existing translation to Fock spaces and develop quantum circuit semantics for discourse relations.
- On the probability-quality paradox in language generation (2022-03-31)
  We analyze language generation through an information-theoretic lens. We posit that human-like language should contain an amount of information close to the entropy of the distribution over natural strings.
- Linking Emergent and Natural Languages via Corpus Transfer (2022-03-24)
  We propose a novel way to establish a link by corpus transfer between emergent languages and natural languages. Our approach showcases non-trivial transfer benefits for two different tasks -- language modeling and image captioning. We also introduce a novel metric to predict the transferability of an emergent language by translating emergent messages into natural language captions grounded on the same images.
- Towards Zero-shot Language Modeling (2021-08-06)
  We construct a neural model that is inductively biased towards learning human languages. We infer this distribution from a sample of typologically diverse training languages. We harness additional language-specific side information as distant supervision for held-out languages.
- On the Quantum-like Contextuality of Ambiguous Phrases (2021-07-19)
  We show that meaning combinations in ambiguous phrases can be modelled in the sheaf-theoretic framework for quantum contextuality. Using the framework of Contextuality-by-Default (CbD), we explore the probabilistic variants of these and show that CbD-contextuality is also possible.
- Towards Coinductive Models for Natural Language Understanding. Bringing together Deep Learning and Deep Semantics (2020-12-09)
  Coinduction has been successfully used in the design of operating systems and programming languages. It has been present in text mining, machine translation, and in some attempts to model intensionality and modalities. This article shows several examples of the joint appearance of induction and coinduction in natural language processing.
This list is automatically generated from the titles and abstracts of the papers on this site.