Construction Grammar and Artificial Intelligence
- URL: http://arxiv.org/abs/2309.00135v1
- Date: Thu, 31 Aug 2023 21:15:06 GMT
- Title: Construction Grammar and Artificial Intelligence
- Authors: Katrien Beuls and Paul Van Eecke
- Abstract summary: We argue that it is beneficial for the contemporary construction grammarian to have a thorough understanding of the strong relationship between the research fields of construction grammar and artificial intelligence.
We show that their relationship is rooted in a common attitude towards human communication and language.
We conclude that the further elaboration of this relationship will play a key role in shaping the future of the field of construction grammar.
- Score: 2.864550757598007
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this chapter, we argue that it is highly beneficial for the contemporary
construction grammarian to have a thorough understanding of the strong
relationship between the research fields of construction grammar and artificial
intelligence. We start by unravelling the historical links between the two
fields, showing that their relationship is rooted in a common attitude towards
human communication and language. We then discuss the first direction of
influence, focussing in particular on how insights and techniques from the
field of artificial intelligence play an important role in operationalising,
validating and scaling constructionist approaches to language. We then proceed
to the second direction of influence, highlighting the relevance of
construction grammar insights and analyses to the artificial intelligence
endeavour of building truly intelligent agents. We support our case with a
variety of illustrative examples and conclude that the further elaboration of
this relationship will play a key role in shaping the future of the field of
construction grammar.
Related papers
- Finding Structure in Language Models [3.882018118763685]
This thesis is about whether language models possess a deep understanding of grammatical structure similar to that of humans.
We will develop novel interpretability techniques that enhance our understanding of the complex nature of large-scale language models.
arXiv Detail & Related papers (2024-11-25T14:37:24Z)
- Constructive Approach to Bidirectional Causation between Qualia Structure and Language Emergence [5.906966694759679]
This paper presents a novel perspective on the bidirectional causation between language emergence and relational structure of subjective experiences.
We hypothesize that languages with distributional semantics, e.g., syntactic-semantic structures, may have emerged through the process of aligning internal representations among individuals.
arXiv Detail & Related papers (2024-09-14T11:03:12Z)
- Language Models: A Guide for the Perplexed [51.88841610098437]
This tutorial aims to help narrow the gap between those who study language models and those who are intrigued and want to learn more.
We offer a scientific viewpoint that focuses on questions amenable to study through experimentation.
We situate language models as they are today in the context of the research that led to their development.
arXiv Detail & Related papers (2023-11-29T01:19:02Z)
- Igniting Language Intelligence: The Hitchhiker's Guide From Chain-of-Thought Reasoning to Language Agents [80.5213198675411]
Large language models (LLMs) have dramatically enhanced the field of language intelligence.
LLMs leverage the intriguing chain-of-thought (CoT) reasoning techniques, which prompt them to formulate intermediate steps en route to deriving an answer.
Recent research endeavors have extended CoT reasoning methodologies to nurture the development of autonomous language agents.
arXiv Detail & Related papers (2023-11-20T14:30:55Z)
- Construction Grammar and Language Models [4.171555557592296]
Recent progress in deep learning has given rise to powerful models that are primarily trained on a cloze-like task.
This chapter aims to foster collaboration between researchers in the fields of natural language processing and Construction Grammar.
arXiv Detail & Related papers (2023-08-25T11:37:56Z)
- DiPlomat: A Dialogue Dataset for Situated Pragmatic Reasoning [89.92601337474954]
Pragmatic reasoning plays a pivotal role in deciphering implicit meanings that frequently arise in real-life conversations.
We introduce a novel challenge, DiPlomat, aiming at benchmarking machines' capabilities on pragmatic reasoning and situated conversational understanding.
arXiv Detail & Related papers (2023-06-15T10:41:23Z)
- Knowledge-enhanced Agents for Interactive Text Games [16.055119735473017]
We propose a knowledge-injection framework for improved functional grounding of agents in text-based games.
We consider two forms of domain knowledge that we inject into learning-based agents: memory of previous correct actions and affordances of relevant objects in the environment.
Our framework supports two representative model classes: reinforcement learning agents and language model agents.
arXiv Detail & Related papers (2023-05-08T23:31:39Z)
- Knowledge Engineering in the Long Game of Artificial Intelligence: The Case of Speech Acts [0.6445605125467572]
This paper describes principles and practices of knowledge engineering that enable the development of holistic language-endowed intelligent agents.
We focus on dialog act modeling, a task that has been widely pursued in linguistics, cognitive modeling, and statistical natural language processing.
arXiv Detail & Related papers (2022-02-02T14:05:12Z)
- ERICA: Improving Entity and Relation Understanding for Pre-trained Language Models via Contrastive Learning [97.10875695679499]
We propose a novel contrastive learning framework named ERICA, applied in the pre-training phase, to obtain a deeper understanding of entities and their relations in text.
Experimental results demonstrate that our proposed ERICA framework achieves consistent improvements on several document-level language understanding tasks.
arXiv Detail & Related papers (2020-12-30T03:35:22Z)
- Improving Machine Reading Comprehension with Contextualized Commonsense Knowledge [62.46091695615262]
We aim to extract commonsense knowledge to improve machine reading comprehension.
We propose to represent relations implicitly by situating structured knowledge in a context.
We employ a teacher-student paradigm to inject multiple types of contextualized knowledge into a student machine reader.
arXiv Detail & Related papers (2020-09-12T17:20:01Z)
- Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks learning over raw text with the guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.