Leveraging Recursive Processing for Neural-Symbolic Affect-Target Associations
- URL: http://arxiv.org/abs/2103.03755v1
- Date: Fri, 5 Mar 2021 15:32:38 GMT
- Title: Leveraging Recursive Processing for Neural-Symbolic Affect-Target Associations
- Authors: A. Sutherland, S. Magg, S. Wermter
- Abstract summary: We present a commonsense approach to associate extracted targets (noun chunks determined to be associated with the expressed emotion) with affective labels from a natural language expression.
We leverage a pre-trained neural network that is well adapted to tree and sub-tree processing, the Dependency Tree-LSTM, to learn the affect labels of dynamic targets.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Explaining the outcome of deep learning decisions based on affect is
challenging but necessary if we expect social companion robots to interact with
users on an emotional level. In this paper, we present a commonsense approach
that utilizes an interpretable hybrid neural-symbolic system to associate
extracted targets, noun chunks determined to be associated with the expressed
emotion, with affective labels from a natural language expression. We leverage
a pre-trained neural network that is well adapted to tree and sub-tree
processing, the Dependency Tree-LSTM, to learn the affect labels of dynamic
targets, determined through symbolic rules, in natural language. We find that
making use of the unique properties of the recursive network provides higher
accuracy and interpretability when compared to other unstructured and
sequential methods for determining target-affect associations in an
aspect-based sentiment analysis task.
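The Child-Sum Tree-LSTM (Tai et al., 2015), which the Dependency Tree-LSTM builds on, composes each node's state from its children's states, so the representation of a sub-tree rooted at an extracted target can be read out directly. Below is a minimal numpy sketch of one composition step; the dimensions, weights, and example tree are illustrative toys, not the authors' trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4  # embedding and hidden size (toy scale)

# Randomly initialised parameters for one Child-Sum Tree-LSTM cell.
# W_* act on the node's input embedding, U_* on the children's hidden states.
W = {g: rng.normal(0, 0.1, (D, D)) for g in "ifou"}
U = {g: rng.normal(0, 0.1, (D, D)) for g in "ifou"}
b = {g: np.zeros(D) for g in "ifou"}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tree_lstm_node(x, children):
    """Compose a node from its input embedding x and its children's (h, c) pairs."""
    h_sum = sum((h for h, _ in children), np.zeros(D))
    i = sigmoid(W["i"] @ x + U["i"] @ h_sum + b["i"])
    o = sigmoid(W["o"] @ x + U["o"] @ h_sum + b["o"])
    u = np.tanh(W["u"] @ x + U["u"] @ h_sum + b["u"])
    c = i * u
    # One forget gate per child, conditioned on that child's own hidden state.
    for h_k, c_k in children:
        f_k = sigmoid(W["f"] @ x + U["f"] @ h_k + b["f"])
        c += f_k * c_k
    return o * np.tanh(c), c

# Toy dependency tree for "service was terrible": the verb dominates the
# target noun chunk ("service") and the affect word ("terrible").
leaf = lambda: tree_lstm_node(rng.normal(0, 1, D), [])
service, terrible = leaf(), leaf()
h_root, c_root = tree_lstm_node(rng.normal(0, 1, D), [service, terrible])
print(h_root.shape)  # (4,)
```

Because every sub-tree yields its own hidden state, the state at the "service" node can be classified separately from the root, which is what supports per-target affect labels.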
Related papers
- Discovering Chunks in Neural Embeddings for Interpretability [53.80157905839065]
We propose leveraging the principle of chunking to interpret artificial neural population activities.
We first demonstrate this concept in recurrent neural networks (RNNs) trained on artificial sequences with imposed regularities.
We identify similar recurring embedding states corresponding to concepts in the input, with perturbations to these states activating or inhibiting the associated concepts.
arXiv Detail & Related papers (2025-02-03T20:30:46Z)
- Neurosymbolic AI for Travel Demand Prediction: Integrating Decision Tree Rules into Neural Networks [21.445133878049333]
This study introduces a Neurosymbolic Artificial Intelligence (Neurosymbolic AI) framework that integrates decision tree (DT)-based symbolic rules with neural networks (NNs) to predict travel demand.
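One common way to realise such an integration is to treat the rules extracted from a fitted decision tree as extra binary input features for the network; the sketch below assumes that scheme (the paper's exact mechanism may differ, and the `rules` predicates are hypothetical).

```python
import numpy as np

# Hypothetical symbolic rules extracted from a fitted decision tree,
# expressed as predicates over the raw feature vector.
rules = [
    lambda x: x[0] > 0.5,                  # e.g. "trip distance above threshold"
    lambda x: x[1] <= 0.3 and x[0] > 0.1,  # e.g. a conjunctive leaf-path rule
]

def augment(x):
    """Concatenate binary rule activations to the raw feature vector,
    which a downstream neural network then consumes."""
    return np.concatenate([x, [float(r(x)) for r in rules]])

print(augment(np.array([0.7, 0.2])))
```

The network sees both the raw features and the rule firings, so learned weights on the rule columns remain directly inspectable.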
arXiv Detail & Related papers (2025-02-02T05:10:31Z)
- Semantic Loss Functions for Neuro-Symbolic Structured Prediction [74.18322585177832]
We discuss the semantic loss, which injects knowledge about such structure, defined symbolically, into training.
It is agnostic to the arrangement of the symbols, and depends only on the semantics expressed thereby.
It can be combined with both discriminative and generative neural models.
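As a concrete instance, the semantic loss for the constraint "exactly one label is true" is the negative log of the probability mass the model places on satisfying assignments. A direct, toy-scale sketch (this enumeration is exponential in general; practical systems compile the constraint instead):

```python
import math

def semantic_loss_exactly_one(probs):
    """Semantic loss for the 'exactly one label is true' constraint:
    -log of the total probability mass on satisfying assignments."""
    sat = 0.0
    for i, p_i in enumerate(probs):
        term = p_i
        for j, p_j in enumerate(probs):
            if j != i:
                term *= (1.0 - p_j)  # all other labels must be false
        sat += term
    return -math.log(sat)

# A near-one-hot prediction almost satisfies the constraint -> low loss;
# a uniform prediction spreads mass over violating assignments -> higher loss.
print(semantic_loss_exactly_one([0.9, 0.05, 0.05]))
print(semantic_loss_exactly_one([0.5, 0.5, 0.5]))
```

The loss depends only on which assignments satisfy the constraint, not on how the constraint is written, which is the "agnostic to the arrangement of the symbols" property noted above.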
arXiv Detail & Related papers (2024-05-12T22:18:25Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- SNeL: A Structured Neuro-Symbolic Language for Entity-Based Multimodal Scene Understanding [0.0]
We introduce SNeL (Structured Neuro-symbolic Language), a versatile query language designed to facilitate nuanced interactions with neural networks processing multimodal data.
SNeL's expressive interface enables the construction of intricate queries, supporting logical and arithmetic operators, comparators, nesting, and more.
Our evaluations demonstrate SNeL's potential to reshape the way we interact with complex neural networks.
arXiv Detail & Related papers (2023-06-09T17:01:51Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Interpreting Neural Policies with Disentangled Tree Representations [58.769048492254555]
We study interpretability of compact neural policies through the lens of disentangled representation.
We leverage decision trees to obtain factors of variation for disentanglement in robot learning.
We introduce interpretability metrics that measure disentanglement of learned neural dynamics.
arXiv Detail & Related papers (2022-10-13T01:10:41Z)
- pix2rule: End-to-end Neuro-symbolic Rule Learning [84.76439511271711]
This paper presents a complete neuro-symbolic method for processing images into objects, learning relations and logical rules.
The main contribution is a differentiable layer in a deep learning architecture from which symbolic relations and rules can be extracted.
We demonstrate that our model scales beyond state-of-the-art symbolic learners and outperforms deep relational neural network architectures.
arXiv Detail & Related papers (2021-06-14T15:19:06Z)
- Tell Me Why You Feel That Way: Processing Compositional Dependency for Tree-LSTM Aspect Sentiment Triplet Extraction (TASTE) [0.0]
We present a hybrid neural-symbolic method utilising a Dependency Tree-LSTM's compositional sentiment parse structure and complementary symbolic rules.
We show that this method has the potential to perform in line with state-of-the-art approaches while also simplifying the data required and providing a degree of interpretability.
arXiv Detail & Related papers (2021-03-10T01:52:10Z)
- Investigating Typed Syntactic Dependencies for Targeted Sentiment Classification Using Graph Attention Neural Network [10.489983726592303]
We investigate a novel relational graph attention network that integrates typed syntactic dependency information.
Results show that our method can effectively leverage label information for improving targeted sentiment classification performances.
arXiv Detail & Related papers (2020-02-22T11:17:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.