A Quantum-Inspired Analysis of Human Disambiguation Processes
- URL: http://arxiv.org/abs/2408.07402v1
- Date: Wed, 14 Aug 2024 09:21:23 GMT
- Title: A Quantum-Inspired Analysis of Human Disambiguation Processes
- Authors: Daphne Wang
- Abstract summary: In this thesis, we apply formalisms from foundational quantum mechanics to study ambiguities arising in natural language.
The results were subsequently used to predict human behaviour, outperforming current NLP methods.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Formal languages are essential for computer programming and are constructed to be easily processed by computers. Natural languages, in contrast, are far more challenging to process, and this difficulty gave rise to the field of Natural Language Processing (NLP). One major obstacle is the ubiquity of ambiguity. Recent advances in NLP have led to the development of large language models, which can resolve ambiguities with high accuracy. At the same time, quantum computers have gained much attention in recent years, as they can solve some computational problems faster than classical computers. This new computing paradigm has reached machine learning and NLP, where hybrid classical-quantum learning algorithms have emerged. However, more research is needed to identify which NLP tasks could benefit from a genuine quantum advantage. In this thesis, we applied formalisms from foundational quantum mechanics, such as contextuality and causality, to study linguistic ambiguities. In doing so, we also reproduced psycholinguistic results relating to the human disambiguation process. These results were subsequently used to predict human behaviour and outperformed current NLP methods.
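To make the contextuality angle concrete, here is a minimal sketch of a CHSH-style check on a toy disambiguation scenario: two ambiguous words are each probed by one of two yes/no interpretation questions, and a context pairs one question per word. All joint-outcome tables below are hypothetical inventions for illustration; the thesis's actual systems and measures (which must also handle the signalling present in human data, e.g. via Contextuality-by-Default) are more involved.

```python
# Minimal sketch: CHSH-style contextuality check on a toy
# "disambiguation" scenario. All joint-outcome tables are HYPOTHETICAL;
# real behavioural data also signals, which frameworks such as
# Contextuality-by-Default correct for.

def correlation(table):
    """E[A*B] for a 2x2 joint table over outcomes (+1, -1)."""
    outcomes = (1, -1)
    return sum(a * b * table[i][j]
               for i, a in enumerate(outcomes)
               for j, b in enumerate(outcomes))

# Context (i, j): question i about word 1 asked with question j about word 2.
tables = {
    (0, 0): [[0.5, 0.0], [0.0, 0.5]],  # interpretations perfectly correlated
    (0, 1): [[0.5, 0.0], [0.0, 0.5]],
    (1, 0): [[0.5, 0.0], [0.0, 0.5]],
    (1, 1): [[0.0, 0.5], [0.5, 0.0]],  # perfectly anti-correlated
}

E = {ctx: correlation(t) for ctx, t in tables.items()}
S = E[(0, 0)] + E[(0, 1)] + E[(1, 0)] - E[(1, 1)]
print(f"CHSH value S = {S:.2f} (classical bound: 2)")
if abs(S) > 2:
    print("Contextual: no single joint distribution reproduces all four tables.")
```

Here S = 4, the maximally contextual case; empirical disambiguation data would of course give weaker correlations.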
Related papers
- Training Neural Networks as Recognizers of Formal Languages [87.06906286950438]
Formal language theory pertains specifically to recognizers.
It is common to instead use proxy tasks that are similar in only an informal sense.
We correct this mismatch by training and evaluating neural networks directly as binary classifiers of strings (a toy sketch of this framing follows this entry).
arXiv Detail & Related papers (2024-11-11T16:33:25Z)
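As a toy illustration of the recognizer framing, the sketch below samples strings over {a, b}, labels them by membership in an invented language (strings containing the substring "ab"), and trains a small network as a binary classifier. The language, features, and model are illustrative assumptions, not the paper's experimental setup.

```python
# Toy sketch of "train a network directly as a recognizer": binary
# classification of strings as in-language vs. not. The language and
# model below are invented for illustration.
import random
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier

random.seed(0)

def sample_string(n):
    return "".join(random.choice("ab") for _ in range(n))

def in_language(s):
    """Membership test for the toy language: contains the substring 'ab'."""
    return "ab" in s

strings = [sample_string(random.randint(1, 20)) for _ in range(2000)]
labels = [int(in_language(s)) for s in strings]

# Character uni/bigram features; a sequence model would be the more
# faithful choice, but this keeps the sketch short and runnable.
vec = CountVectorizer(analyzer="char", ngram_range=(1, 2))
X = vec.fit_transform(strings)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X[:1500], labels[:1500])
print("held-out accuracy:", clf.score(X[1500:], labels[1500:]))
```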
- Efficient Learning for Linear Properties of Bounded-Gate Quantum Circuits [63.733312560668274]
Given a quantum circuit containing d tunable RZ gates and G - d Clifford gates, can a learner perform purely classical inference to efficiently predict its linear properties?
We prove that the sample complexity scaling linearly in d is necessary and sufficient to achieve a small prediction error, while the corresponding computational complexity may scale exponentially in d.
We devise a kernel-based learning model capable of trading off prediction error and computational complexity, transitioning from exponential to polynomial scaling in many practical settings (a generic illustration of such a trade-off follows this entry).
arXiv Detail & Related papers (2024-08-22T08:21:28Z)
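The error-versus-compute trade-off mentioned in this entry can be illustrated generically with random Fourier features, whose dimension D acts as a compute knob controlling how well an RBF kernel is approximated. The regression task below is synthetic and unrelated to quantum circuits; this shows the flavour of a kernel-method trade-off, not the paper's construction.

```python
# Generic kernel-method trade-off: larger random-Fourier-feature
# dimension D costs more compute but approximates an RBF kernel better.
# Synthetic data; NOT the paper's quantum-circuit construction.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(400, 5))
y = np.sin(X).sum(axis=1) + 0.1 * rng.normal(size=400)
Xtr, ytr, Xte, yte = X[:300], y[:300], X[300:], y[300:]

def rff(X, W, b):
    """Random Fourier feature map approximating an RBF kernel."""
    return np.sqrt(2.0 / W.shape[1]) * np.cos(X @ W + b)

for D in (8, 64, 512):                       # feature dimension = compute knob
    W = rng.normal(size=(5, D))
    b = rng.uniform(0, 2 * np.pi, size=D)
    Phi_tr, Phi_te = rff(Xtr, W, b), rff(Xte, W, b)
    # Ridge regression in feature space (regularization 1e-3).
    A = Phi_tr.T @ Phi_tr + 1e-3 * np.eye(D)
    w = np.linalg.solve(A, Phi_tr.T @ ytr)
    mse = np.mean((Phi_te @ w - yte) ** 2)
    print(f"D={D:4d}  test MSE={mse:.4f}")
```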
- From Decoding to Meta-Generation: Inference-time Algorithms for Large Language Models [63.188607839223046]
This survey focuses on the benefits of scaling compute during inference.
We explore three areas under a unified mathematical formalism: token-level generation algorithms, meta-generation algorithms, and efficient generation (a token-level sampling sketch follows this entry).
arXiv Detail & Related papers (2024-06-24T17:45:59Z)
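Two of the token-level generation algorithms such surveys cover, temperature scaling and top-k truncation, fit in a few lines. The vocabulary and logits below are made up for illustration.

```python
# Minimal sketch of two token-level generation algorithms: temperature
# scaling and top-k truncation of a model's next-token logits.
# The vocabulary and logits are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat", "<eos>"]
logits = np.array([2.0, 1.5, 0.3, 0.2, 0.1, -1.0])  # made-up model output

def sample(logits, temperature=1.0, top_k=None):
    """Sample one token id: scale by temperature, optionally keep top-k."""
    z = logits / temperature
    if top_k is not None:
        cutoff = np.sort(z)[-top_k]
        z = np.where(z >= cutoff, z, -np.inf)   # mask all but top-k logits
    p = np.exp(z - z.max())
    p /= p.sum()
    return rng.choice(len(logits), p=p)

for t in (0.7, 1.0, 1.5):
    ids = [sample(logits, temperature=t, top_k=3) for _ in range(5)]
    print(f"T={t}: {[vocab[i] for i in ids]}")
```

Lower temperatures concentrate mass on the top logits; top-k simply zeroes out the tail before renormalizing.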
- Quantum Natural Language Processing [0.03495246564946555]
Language processing is at the heart of current developments in artificial intelligence.
This paper surveys the state of this area, showing how NLP-related techniques have been used in quantum language processing.
arXiv Detail & Related papers (2024-03-28T18:15:07Z)
- Deep Learning Approaches for Improving Question Answering Systems in Hepatocellular Carcinoma Research [0.0]
In recent years, advancements in natural language processing (NLP) have been fueled by deep learning techniques.
BERT and GPT-3, trained on vast amounts of data, have revolutionized language understanding and generation.
This paper delves into the current landscape and future prospects of large-scale model-based NLP.
arXiv Detail & Related papers (2024-02-25T09:32:17Z)
- Natural Language Processing for Dialects of a Language: A Survey [56.93337350526933]
State-of-the-art natural language processing (NLP) models are trained on massive corpora and report superlative performance on evaluation datasets.
This survey delves into an important attribute of these datasets: the dialect of a language.
Motivated by the performance degradation of NLP models on dialectal datasets, and its implications for the equity of language technologies, we survey past research in NLP for dialects in terms of datasets and approaches.
arXiv Detail & Related papers (2024-01-11T03:04:38Z)
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics (a minimal sketch follows this entry).
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
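A minimal version of the predictive-coding scheme described above: a one-layer generative model predicts the input from a latent state, inference settles the latent by descending the prediction error, and the weights learn from the same local error signal. Dimensions, step sizes, and data are arbitrary choices for this sketch.

```python
# Minimal predictive-coding sketch: one-layer generative model x_hat = W @ z.
# Inference settles z against the prediction error; learning updates W
# from the same local error. All sizes and data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(10, 4))     # generative weights
X = rng.normal(size=(200, 10))              # toy "sensory" inputs
lr_z, lr_w = 0.1, 0.01

for epoch in range(5):
    total_err = 0.0
    for x in X:
        z = np.zeros(4)
        for _ in range(20):                 # inference: settle the latent state
            eps = x - W @ z                 # prediction error
            z += lr_z * (W.T @ eps - z)     # error-driven update + Gaussian prior
        W += lr_w * np.outer(x - W @ z, z)  # local, Hebbian-like learning
        total_err += float(((x - W @ z) ** 2).mean())
    print(f"epoch {epoch}: mean squared prediction error {total_err / len(X):.4f}")
```

Note that both updates use only locally available quantities (the error and the latent), which is the property that makes PC attractive as a brain-inspired alternative to backpropagation.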
- Exponential separations between classical and quantum learners [2.209921757303168]
We discuss how subtle differences in definitions can result in significantly different requirements and tasks for the learner to meet and solve.
We present two new learning separations where the classical difficulty primarily lies in identifying the function generating the data.
arXiv Detail & Related papers (2023-06-28T08:55:56Z)
- Quantum Natural Language Processing based Sentiment Analysis using lambeq Toolkit [0.5735035463793007]
Quantum natural language processing (QNLP) is a young and gradually emerging technology which has the potential to provide quantum advantage for NLP tasks.
We show the first application of QNLP for sentiment analysis, achieving perfect test-set accuracy for three different kinds of simulations and decent accuracy for experiments run on a noisy quantum device (a library-free toy classifier follows this entry).
arXiv Detail & Related papers (2023-05-30T19:54:02Z)
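In the lambeq pipeline, sentence diagrams are compiled into parameterized quantum circuits whose parameters are optimized classically. The sketch below is a deliberately tiny, library-free caricature of that final stage: it assumes each sentence has already been reduced to a single rotation angle, which is an invention for illustration, not the lambeq API or the paper's pipeline.

```python
# Library-free toy of a variational quantum classifier's training loop.
# Each "sentence" is assumed pre-encoded as one angle (a gross
# simplification; lambeq compiles full diagrams to multi-qubit circuits).
import numpy as np

rng = np.random.default_rng(0)

def p_positive(x, theta):
    """P(|1>) after RY(x) then RY(theta) on |0>, i.e. sin^2((x + theta) / 2)."""
    return np.sin((x + theta) / 2.0) ** 2

# HYPOTHETICAL encodings: positive sentences near +pi/2, negative near -pi/2.
X = np.concatenate([rng.normal(np.pi / 2, 0.3, 50),
                    rng.normal(-np.pi / 2, 0.3, 50)])
y = np.array([1] * 50 + [0] * 50)

def loss(theta):
    p = np.clip(p_positive(X, theta), 1e-9, 1 - 1e-9)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

theta, lr, h = 0.0, 0.5, 1e-4
for _ in range(100):                        # finite-difference gradient descent
    grad = (loss(theta + h) - loss(theta - h)) / (2 * h)
    theta -= lr * grad

accuracy = np.mean((p_positive(X, theta) > 0.5) == y)
print(f"learned theta = {theta:.3f}, training accuracy = {accuracy:.2f}")
```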
- A Survey of Knowledge Enhanced Pre-trained Language Models [78.56931125512295]
We present a comprehensive review of Knowledge Enhanced Pre-trained Language Models (KE-PLMs).
For NLU, we divide the types of knowledge into four categories: linguistic knowledge, text knowledge, knowledge graph (KG) knowledge, and rule knowledge.
The KE-PLMs for NLG are categorized into KG-based and retrieval-based methods.
arXiv Detail & Related papers (2022-11-11T04:29:02Z)
- Grammar-Aware Question-Answering on Quantum Computers [0.17205106391379021]
We perform the first implementation of an NLP task on noisy intermediate-scale quantum (NISQ) hardware.
We encode word-meanings in quantum states and we explicitly account for grammatical structure.
Our novel QNLP model shows concrete promise for scalability as the quality of the quantum hardware improves (a compositional toy example follows this entry).
arXiv Detail & Related papers (2020-12-07T14:49:34Z)
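The grammar-aware encoding in this last entry builds on the compositional (DisCoCat) picture: nouns are vectors, a transitive verb is an order-3 tensor, and a sentence meaning is obtained by contracting them. Below is a toy numpy version with made-up random embeddings; the paper goes further and realizes such contractions as circuits on NISQ hardware.

```python
# Toy DisCoCat-style composition: sentence meaning as a tensor
# contraction. All embeddings are random stand-ins for illustration.
import numpy as np

rng = np.random.default_rng(0)
d = 4                                   # noun-space dimension (illustrative)
alice, bob, code = (rng.normal(size=d) for _ in range(3))
writes = rng.normal(size=(d, d, d))     # verb tensor: (subject, sentence, object)

def meaning(subj, verb, obj):
    """Sentence vector: contract the verb's subject and object wires."""
    return np.einsum("s,sio,o->i", subj, verb, obj)

s1 = meaning(alice, writes, code)
s2 = meaning(bob, writes, code)
cos = s1 @ s2 / (np.linalg.norm(s1) * np.linalg.norm(s2))
print("cosine similarity of the two sentence meanings:", round(float(cos), 3))
```

The same contraction pattern is what gets mapped onto qubits and entangling gates in the quantum implementation, with the grammar dictating which wires are connected.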
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.