Neurosymbolic AI for Situated Language Understanding
- URL: http://arxiv.org/abs/2012.02947v1
- Date: Sat, 5 Dec 2020 05:03:28 GMT
- Title: Neurosymbolic AI for Situated Language Understanding
- Authors: Nikhil Krishnaswamy and James Pustejovsky
- Abstract summary: We argue that computational situated grounding provides a solution to some of these learning challenges.
Our model reincorporates some ideas of classic AI into a framework of neurosymbolic intelligence.
We discuss how situated grounding provides diverse data and multiple levels of modeling for a variety of AI learning challenges.
- Score: 13.249453757295083
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, data-intensive AI, particularly the domain of natural
language processing and understanding, has seen significant progress driven by
the advent of large datasets and deep neural networks that have sidelined more
classic AI approaches to the field. These systems can apparently demonstrate
sophisticated linguistic understanding or generation capabilities, but often
fail to transfer their skills to situations they have not encountered before.
We argue that computational situated grounding provides a solution to some of
these learning challenges by creating situational representations that both
serve as a formal model of the salient phenomena, and contain rich amounts of
exploitable, task-appropriate data for training new, flexible computational
models. Our model reincorporates some ideas of classic AI into a framework of
neurosymbolic intelligence, using multimodal contextual modeling of interactive
situations, events, and object properties. We discuss how situated grounding
provides diverse data and multiple levels of modeling for a variety of AI
learning challenges, including learning how to interact with object
affordances, learning semantics for novel structures and configurations, and
transferring such learned knowledge to new objects and situations.
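To make the notion of a situational representation concrete, here is a minimal, hypothetical sketch in which symbolic objects carry affordances and a stubbed neural scorer ranks how well an observed event fits them. All names (SituatedObject, ground_event, the affordance table) are invented for illustration and do not reproduce the authors' implementation.

```python
from dataclasses import dataclass, field

# Affordance each action requires of its target object (illustrative only).
REQUIRED_AFFORDANCE = {"put_on": "support", "pick_up": "grasp", "pour_into": "contain"}

@dataclass
class SituatedObject:
    name: str
    properties: dict                                 # e.g. {"flat_top": True}
    affordances: set = field(default_factory=set)    # e.g. {"support"}

@dataclass
class Situation:
    objects: list            # the symbolic scene inventory
    events: list             # e.g. [("put_on", "cup", "table")]

def neural_affordance_score(obj: SituatedObject, action: str) -> float:
    """Stand-in for a learned model scoring action-object compatibility."""
    return 1.0 if REQUIRED_AFFORDANCE[action] in obj.affordances else 0.0

def ground_event(situation: Situation, event: tuple) -> dict:
    """Symbolically select candidate targets, then rank them 'neurally'."""
    action, _, target = event
    return {o.name: neural_affordance_score(o, action)
            for o in situation.objects if o.name == target}

table = SituatedObject("table", {"flat_top": True}, {"support"})
cup = SituatedObject("cup", {"graspable": True}, {"grasp", "contain"})
scene = Situation([table, cup], [("put_on", "cup", "table")])
print(ground_event(scene, scene.events[0]))          # {'table': 1.0}
```

The split mirrors the argument in the abstract: the symbolic layer supplies a formal, inspectable model of the situation, while the neural scorer is the part that can be retrained on data generated from that model.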
Related papers
- ARPA: A Novel Hybrid Model for Advancing Visual Word Disambiguation Using Large Language Models and Transformers [1.6541870997607049]
We present ARPA, an architecture that fuses the contextual understanding of large language models with the feature-extraction capabilities of transformers.
ARPA's introduction marks a significant milestone in visual word disambiguation, offering a compelling solution.
We invite researchers and practitioners to explore the capabilities of our model, envisioning hybrid models of this kind driving further advances in artificial intelligence.
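To make the fusion claim above concrete, the following is a hedged sketch of one way an LLM text embedding and a transformer image embedding could be combined to score word senses. Both encoders are stubbed with random projections, and none of the names come from the ARPA paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def text_embed(context: str) -> np.ndarray:
    """Stand-in for an LLM encoding of the sentence context."""
    return rng.standard_normal(16)

def image_embed(image_id: str) -> np.ndarray:
    """Stand-in for a vision-transformer encoding of the image."""
    return rng.standard_normal(16)

def disambiguate(context: str, image_id: str, sense_embeddings: dict) -> str:
    """Pick the sense whose embedding best matches the fused text+image query."""
    fused = np.concatenate([text_embed(context), image_embed(image_id)])
    def cos(v):
        return float(v @ fused / (np.linalg.norm(v) * np.linalg.norm(fused)))
    return max(sense_embeddings, key=lambda s: cos(sense_embeddings[s]))

senses = {"bank/river": rng.standard_normal(32),
          "bank/finance": rng.standard_normal(32)}
print(disambiguate("moored by the bank", "img_001", senses))
```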
arXiv Detail & Related papers (2024-08-12T10:15:13Z)
- Are Large Language Models the New Interface for Data Pipelines? [3.5021689991926377]
The term "language model" encompasses various types of models designed to understand and generate human communication.
Large Language Models (LLMs) have gained significant attention due to their ability to process text with human-like fluency and coherence.
arXiv Detail & Related papers (2024-06-06T08:10:32Z)
- A Survey on Vision-Language-Action Models for Embodied AI [71.16123093739932]
Vision-language-action models (VLAs) have become a foundational element in robot learning.
Various methods have been proposed to enhance traits such as versatility, dexterity, and generalizability.
VLAs serve as high-level task planners capable of decomposing long-horizon tasks into executable subtasks.
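As a minimal sketch of that planner role, the hand-written decomposition below stands in for what a learned VLA policy would produce; the task and skill names are invented:

```python
from dataclasses import dataclass

@dataclass
class Subtask:
    skill: str      # low-level skill the robot controller can execute
    args: tuple

def decompose(instruction: str) -> list:
    """Toy decomposition of one long-horizon task (a VLA would learn this)."""
    if instruction == "make coffee":
        return [Subtask("pick", ("mug",)),
                Subtask("place", ("mug", "coffee_machine")),
                Subtask("press", ("brew_button",))]
    raise ValueError(f"no plan for {instruction!r}")

for step in decompose("make coffee"):
    print(step.skill, step.args)
```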
arXiv Detail & Related papers (2024-05-23T01:43:54Z)
- Deep Learning Approaches for Improving Question Answering Systems in Hepatocellular Carcinoma Research [0.0]
In recent years, advancements in natural language processing (NLP) have been fueled by deep learning techniques.
BERT and GPT-3, trained on vast amounts of data, have revolutionized language understanding and generation.
This paper delves into the current landscape and future prospects of large-scale model-based NLP.
arXiv Detail & Related papers (2024-02-25T09:32:17Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
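One hedged reading of that integration pattern is sketched below: a toy, loosely ACT-R-flavored production cycle tries symbolic retrieval first and falls back to a stubbed generative component, caching the answer into declarative memory. All names are illustrative, not the paper's framework.

```python
# Declarative memory: queries the symbolic system can answer directly.
MEMORY = {("add", 2, 3): 5}

def neural_component(query: tuple):
    """Stand-in for a generative model handling unmatched queries."""
    return f"<model answer for {query}>"

def cognitive_cycle(query: tuple):
    # 1. Try symbolic retrieval first.
    if query in MEMORY:
        return MEMORY[query]
    # 2. Delegate to the neural component, then cache the result
    #    back into declarative memory for future symbolic hits.
    MEMORY[query] = neural_component(query)
    return MEMORY[query]

print(cognitive_cycle(("add", 2, 3)))            # symbolic hit: 5
print(cognitive_cycle(("capital", "France")))    # neural fallback
```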
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- Foundational Models Defining a New Era in Vision: A Survey and Outlook [151.49434496615427]
Vision systems that see and reason about the compositional nature of visual scenes are fundamental to understanding our world.
Models that learn to bridge the gap between such modalities, coupled with large-scale training data, facilitate contextual reasoning, generalization, and prompt capabilities at test time.
The output of such models can be modified through human-provided prompts without retraining, e.g., segmenting a particular object by providing a bounding box, holding interactive dialogues by asking questions about an image or video scene, or steering a robot's behavior through language instructions.
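A toy sketch of that test-time prompting interface follows: a bounding-box prompt steers a stubbed, frozen "model" with no retraining. This illustrates the interface style only and is not the API of any particular segmentation model.

```python
import numpy as np

def segment_with_box(image: np.ndarray, box: tuple) -> np.ndarray:
    """Return a binary mask, restricted to the user's bounding-box prompt."""
    x0, y0, x1, y1 = box
    saliency = image.mean(axis=-1)        # stand-in for frozen model logits
    mask = np.zeros(saliency.shape, dtype=bool)
    region = saliency[y0:y1, x0:x1]
    mask[y0:y1, x0:x1] = region > region.mean()   # threshold inside the prompt
    return mask

image = np.random.rand(64, 64, 3)
mask = segment_with_box(image, (10, 10, 40, 40))
print(int(mask.sum()), "pixels selected inside the prompted box")
```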
arXiv Detail & Related papers (2023-07-25T17:59:18Z)
- SNeL: A Structured Neuro-Symbolic Language for Entity-Based Multimodal Scene Understanding [0.0]
We introduce SNeL (Structured Neuro-symbolic Language), a versatile query language designed to facilitate nuanced interactions with neural networks processing multimodal data.
SNeL's expressive interface enables the construction of intricate queries, supporting logical and arithmetic operators, comparators, nesting, and more.
Our evaluations demonstrate SNeL's potential to reshape the way we interact with complex neural networks.
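The abstract above does not show actual SNeL syntax, so the toy evaluator below only illustrates the described style of nested, operator-based queries over entity records; the (op, *args) query shape is invented for this sketch.

```python
ENTITIES = [
    {"class": "cup", "color": "red", "width": 8},
    {"class": "cup", "color": "blue", "width": 12},
    {"class": "plate", "color": "red", "width": 20},
]

def evaluate(query: tuple, entity: dict) -> bool:
    """Recursively evaluate a nested (op, *args) query against one entity."""
    op, *args = query
    if op == "and":
        return all(evaluate(a, entity) for a in args)
    if op == "or":
        return any(evaluate(a, entity) for a in args)
    if op == "eq":                 # comparator
        return entity[args[0]] == args[1]
    if op == "gt":                 # comparator over a numeric attribute
        return entity[args[0]] > args[1]
    raise ValueError(f"unknown operator: {op}")

# "red cups wider than 5 units", as one nested query:
query = ("and", ("eq", "class", "cup"), ("eq", "color", "red"), ("gt", "width", 5))
print([e for e in ENTITIES if evaluate(query, e)])
```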
arXiv Detail & Related papers (2023-06-09T17:01:51Z)
- Foundation Models for Decision Making: Problems, Methods, and Opportunities [124.79381732197649]
Foundation models pretrained on diverse data at scale have demonstrated extraordinary capabilities in a wide range of vision and language tasks.
New paradigms are emerging for training foundation models to interact with other agents and perform long-term reasoning.
Research at the intersection of foundation models and decision making holds tremendous promise for creating powerful new systems.
arXiv Detail & Related papers (2023-03-07T18:44:07Z)
- Collective Intelligence for Deep Learning: A Survey of Recent Developments [11.247894240593691]
We will provide a historical context of neural network research's involvement with complex systems.
We will highlight several active areas in modern deep learning research that incorporate the principles of collective intelligence.
arXiv Detail & Related papers (2021-11-29T08:39:32Z)
- INTERN: A New Learning Paradigm Towards General Vision [117.3343347061931]
We develop a new learning paradigm named INTERN.
By learning with supervisory signals from multiple sources in multiple stages, the model being trained will develop strong generalizability.
In most cases, our models, adapted with only 10% of the training data in the target domain, outperform the counterparts trained with the full set of data.
arXiv Detail & Related papers (2021-11-16T18:42:50Z)
- WenLan 2.0: Make AI Imagine via a Multimodal Foundation Model [74.4875156387271]
We develop a novel foundation model pre-trained with huge multimodal (visual and textual) data.
We show that state-of-the-art results can be obtained on a wide range of downstream tasks.
arXiv Detail & Related papers (2021-10-27T12:25:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.