Leveraging Recursive Processing for Neural-Symbolic Affect-Target
Associations
- URL: http://arxiv.org/abs/2103.03755v1
- Date: Fri, 5 Mar 2021 15:32:38 GMT
- Authors: A. Sutherland, S. Magg, S. Wermter
- Abstract summary: We present a commonsense approach that associates extracted targets (noun chunks determined to be associated with the expressed emotion) with affective labels from a natural language expression.
We leverage a pre-trained neural network that is well adapted to tree and sub-tree processing, the Dependency Tree-LSTM, to learn the affect labels of dynamic targets.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Explaining the outcome of deep learning decisions based on affect is
challenging but necessary if we expect social companion robots to interact with
users on an emotional level. In this paper, we present a commonsense approach
that utilizes an interpretable hybrid neural-symbolic system to associate
extracted targets (noun chunks determined to be associated with the expressed
emotion) with affective labels from a natural language expression. We leverage
a pre-trained neural network that is well adapted to tree and sub-tree
processing, the Dependency Tree-LSTM, to learn the affect labels of dynamic
targets, determined through symbolic rules, in natural language. We find that
making use of the unique properties of the recursive network provides higher
accuracy and interpretability when compared to other unstructured and
sequential methods for determining target-affect associations in an
aspect-based sentiment analysis task.
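The pipeline the abstract describes can be sketched in miniature: symbolic rules select noun nodes in a dependency tree as candidate targets, and a recursive bottom-up pass over the tree (standing in here for the Dependency Tree-LSTM) supplies the affect label each target inherits. The lexicon, tree encoding, and rules below are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field

# Assumed toy affect lexicon; the paper's model learns these labels instead.
AFFECT_LEXICON = {"loved": "joy", "hated": "anger", "feared": "fear"}

@dataclass
class DepNode:
    word: str
    pos: str                                   # coarse POS tag, e.g. "NOUN", "VERB"
    children: list = field(default_factory=list)

def subtree_affect(node: DepNode) -> str:
    """Recursive bottom-up pass: a subtree's label comes from its head word
    if that word is affective, otherwise from the first affective child."""
    if node.word in AFFECT_LEXICON:
        return AFFECT_LEXICON[node.word]
    for child in node.children:
        label = subtree_affect(child)
        if label != "neutral":
            return label
    return "neutral"

def target_affects(root: DepNode) -> dict:
    """Symbolic rule: every NOUN node is a candidate target; it inherits the
    affect label of the clause subtree that contains it (here: the root)."""
    clause_label = subtree_affect(root)
    targets = {}
    stack = [root]
    while stack:
        node = stack.pop()
        if node.pos == "NOUN":
            targets[node.word] = clause_label
        stack.extend(node.children)
    return targets

# "She loved the movie" -> the target "movie" is associated with "joy"
tree = DepNode("loved", "VERB", [
    DepNode("She", "PRON"),
    DepNode("movie", "NOUN", [DepNode("the", "DET")]),
])
print(target_affects(tree))  # {'movie': 'joy'}
```

In the paper the hand-written recursion above is replaced by a pre-trained Dependency Tree-LSTM, which composes affect representations over the same sub-tree structure instead of looking words up in a fixed lexicon.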
Related papers
- Bias-Free Sentiment Analysis through Semantic Blinding and Graph Neural Networks
The SProp GNN relies exclusively on syntactic structures and word-level emotional cues to predict emotions in text.
By semantically blinding the model to information about specific words, it is robust to biases such as political or gender bias.
The SProp GNN shows performance superior to lexicon-based alternatives on two different prediction tasks, and across two languages.
arXiv Detail & Related papers (2024-11-19T13:23:53Z)
- Coding schemes in neural networks learning classification tasks
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
- Semantic Loss Functions for Neuro-Symbolic Structured Prediction
We discuss the semantic loss, which injects knowledge about such structure, defined symbolically, into training.
It is agnostic to the arrangement of the symbols, and depends only on the semantics expressed thereby.
It can be combined with both discriminative and generative neural models.
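As a concrete illustration of the idea in this entry, the semantic loss for the textbook "exactly one of k labels is true" constraint is the negative log-probability, under the network's independent output probabilities, that a sampled assignment satisfies the constraint. The toy computation below sketches that single case, not the paper's general framework.

```python
import math

def exactly_one_semantic_loss(p):
    """-log P(exactly one variable is true) for independent Bernoulli probs p."""
    sat = 0.0
    for i in range(len(p)):
        # Probability that variable i is true and every other variable is false.
        term = p[i]
        for j in range(len(p)):
            if j != i:
                term *= (1.0 - p[j])
        sat += term
    return -math.log(sat)

# Nearly one-hot outputs almost satisfy the constraint, so the loss is low.
print(round(exactly_one_semantic_loss([0.9, 0.05, 0.05]), 4))  # prints 0.1963
```

Because the loss is computed from the semantics of the constraint rather than from any particular symbol ordering, permuting the labels leaves it unchanged, which is the "agnostic to the arrangement of the symbols" property the summary mentions.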
arXiv Detail & Related papers (2024-05-12T22:18:25Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning
LOGICSEG is a holistic visual semantic parsing framework that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- SNeL: A Structured Neuro-Symbolic Language for Entity-Based Multimodal Scene Understanding
We introduce SNeL (Structured Neuro-symbolic Language), a versatile query language designed to facilitate nuanced interactions with neural networks processing multimodal data.
SNeL's expressive interface enables the construction of intricate queries, supporting logical and arithmetic operators, comparators, nesting, and more.
Our evaluations demonstrate SNeL's potential to reshape the way we interact with complex neural networks.
arXiv Detail & Related papers (2023-06-09T17:01:51Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Interpreting Neural Policies with Disentangled Tree Representations
We study interpretability of compact neural policies through the lens of disentangled representation.
We leverage decision trees to obtain factors of variation for disentanglement in robot learning.
We introduce interpretability metrics that measure disentanglement of learned neural dynamics.
arXiv Detail & Related papers (2022-10-13T01:10:41Z)
- Few-shot Learning in Emotion Recognition of Spontaneous Speech Using a Siamese Neural Network with Adaptive Sample Pair Formation
This paper proposes a few-shot learning approach for automatically recognizing emotion in spontaneous speech from a small number of labelled samples.
Few-shot learning is implemented via a metric learning approach through a siamese neural network.
Results indicate the feasibility of the proposed metric learning in recognizing emotions from spontaneous speech in four datasets.
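The metric-learning step in this entry can be sketched with the standard contrastive loss on a siamese network's embedding distance: same-emotion pairs are pulled together, different-emotion pairs are pushed at least a margin apart. The margin value and the pure-distance formulation are assumptions for illustration, not the paper's exact training objective.

```python
import math

def euclidean(a, b):
    """Distance between two utterance embeddings from the shared encoder."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def contrastive_loss(d, same_emotion, margin=1.0):
    """Standard contrastive loss on embedding distance d."""
    if same_emotion:
        return d ** 2                       # pull matching pairs together
    return max(0.0, margin - d) ** 2        # push mismatched pairs apart

# A same-emotion pair is penalised by its distance; a different-emotion pair
# is penalised only when it sits closer than the margin.
anchor, positive, negative = [0.0, 0.0], [0.1, 0.1], [3.0, 4.0]
print(contrastive_loss(euclidean(anchor, positive), True))   # small
print(contrastive_loss(euclidean(anchor, negative), False))  # 0.0, well past the margin
```

At test time, a few-shot prediction then reduces to comparing a new utterance's embedding distance against the labelled support samples and taking the nearest emotion class.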
arXiv Detail & Related papers (2021-09-07T08:04:02Z)
- pix2rule: End-to-end Neuro-symbolic Rule Learning
This paper presents a complete neuro-symbolic method for processing images into objects, learning relations and logical rules.
The main contribution is a differentiable layer in a deep learning architecture from which symbolic relations and rules can be extracted.
We demonstrate that our model scales beyond state-of-the-art symbolic learners and outperforms deep relational neural network architectures.
arXiv Detail & Related papers (2021-06-14T15:19:06Z)
- Tell Me Why You Feel That Way: Processing Compositional Dependency for Tree-LSTM Aspect Sentiment Triplet Extraction (TASTE)
We present a hybrid neural-symbolic method utilising a Dependency Tree-LSTM's compositional sentiment parse structure and complementary symbolic rules.
We show that this method has the potential to perform in line with state-of-the-art approaches while also simplifying the data required and providing a degree of interpretability.
arXiv Detail & Related papers (2021-03-10T01:52:10Z)
- Investigating Typed Syntactic Dependencies for Targeted Sentiment Classification Using Graph Attention Neural Network
We investigate a novel relational graph attention network that integrates typed syntactic dependency information.
Results show that our method can effectively leverage label information to improve targeted sentiment classification performance.
arXiv Detail & Related papers (2020-02-22T11:17:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.