Language Generation for Broad-Coverage, Explainable Cognitive Systems
- URL: http://arxiv.org/abs/2201.10422v1
- Date: Tue, 25 Jan 2022 16:09:19 GMT
- Title: Language Generation for Broad-Coverage, Explainable Cognitive Systems
- Authors: Marjorie McShane and Ivan Leon
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper describes recent progress on natural language generation (NLG) for
language-endowed intelligent agents (LEIAs) developed within the OntoAgent
cognitive architecture. The approach draws heavily from past work on natural
language understanding in this paradigm: it uses the same knowledge bases,
theory of computational linguistics, agent architecture, and methodology of
developing broad-coverage capabilities over time while still supporting
near-term applications.
Related papers
- Neurosymbolic Graph Enrichment for Grounded World Models [47.92947508449361]
We present a novel approach that enhances and exploits the reactive capabilities of LLMs to address complex problems.
We create a multimodal, knowledge-augmented formal representation of meaning that combines the strengths of large language models with structured semantic representations.
By bridging the gap between unstructured language models and formal semantic structures, our method opens new avenues for tackling intricate problems in natural language understanding and reasoning.
arXiv Detail & Related papers (2024-11-19T17:23:55Z)
- Language Evolution with Deep Learning [49.879239655532324]
Computational modeling plays an essential role in the study of language emergence.
It aims to simulate the conditions and learning processes that could trigger the emergence of a structured language.
This chapter explores another class of computational models that have recently revolutionized the field of machine learning: deep learning models.
arXiv Detail & Related papers (2024-03-18T16:52:54Z)
- Deep Learning Approaches for Improving Question Answering Systems in Hepatocellular Carcinoma Research [0.0]
In recent years, advancements in natural language processing (NLP) have been fueled by deep learning techniques.
BERT and GPT-3, trained on vast amounts of data, have revolutionized language understanding and generation.
This paper delves into the current landscape and future prospects of large-scale model-based NLP.
arXiv Detail & Related papers (2024-02-25T09:32:17Z)
- Formal Aspects of Language Modeling [74.16212987886013]
Large language models have become one of the most commonly deployed NLP inventions.
These notes are the accompaniment to the theoretical portion of the ETH Zürich course on large language models.
arXiv Detail & Related papers (2023-11-07T20:21:42Z)
- Rethinking the Evaluating Framework for Natural Language Understanding in AI Systems: Language Acquisition as a Core for Future Metrics [0.0]
In the burgeoning field of artificial intelligence (AI), the unprecedented progress of large language models (LLMs) in natural language processing (NLP) offers an opportunity to revisit the entire approach of traditional metrics of machine intelligence.
Our paper proposes a paradigm shift from the established Turing Test towards an all-embracing framework that hinges on language acquisition.
arXiv Detail & Related papers (2023-09-21T11:34:52Z)
- Cognitive Architectures for Language Agents [44.89258267600489]
We propose Cognitive Architectures for Language Agents (CoALA)
CoALA describes a language agent with modular memory components, a structured action space to interact with internal memory and external environments, and a generalized decision-making process to choose actions.
We use CoALA to retrospectively survey and organize a large body of recent work, and prospectively identify actionable directions towards more capable agents.
arXiv Detail & Related papers (2023-09-05T17:56:20Z)
- Exploiting Language Models as a Source of Knowledge for Cognitive Agents [4.557963624437782]
Large language models (LLMs) provide capabilities far beyond sentence completion, including question answering, summarization, and natural-language inference.
While many of these capabilities have potential application to cognitive systems, our research exploits language models as a source of task knowledge for cognitive agents, that is, agents realized via a cognitive architecture.
arXiv Detail & Related papers (2023-09-05T15:18:04Z)
- From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought [124.40905824051079]
We propose rational meaning construction, a computational framework for language-informed thinking.
We frame linguistic meaning as a context-sensitive mapping from natural language into a probabilistic language of thought.
We show that LLMs can generate context-sensitive translations that capture pragmatically-appropriate linguistic meanings.
We extend our framework to integrate cognitively-motivated symbolic modules.
arXiv Detail & Related papers (2023-06-22T05:14:00Z)
- MRKL Systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning [50.40151403246205]
Huge language models (LMs) have ushered in a new era for AI, serving as a gateway to natural-language-based knowledge tasks.
We define a flexible architecture with multiple neural models, complemented by discrete knowledge and reasoning modules.
We describe this neuro-symbolic architecture, dubbed the Modular Reasoning, Knowledge and Language (MRKL) system.
arXiv Detail & Related papers (2022-05-01T11:01:28Z)
- Knowledge Engineering in the Long Game of Artificial Intelligence: The Case of Speech Acts [0.6445605125467572]
This paper describes principles and practices of knowledge engineering that enable the development of holistic language-endowed intelligent agents.
We focus on dialog act modeling, a task that has been widely pursued in linguistics, cognitive modeling, and statistical natural language processing.
arXiv Detail & Related papers (2022-02-02T14:05:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.