Advancing Humor-Focused Sentiment Analysis through Improved
Contextualized Embeddings and Model Architecture
- URL: http://arxiv.org/abs/2011.11773v1
- Date: Mon, 23 Nov 2020 22:30:32 GMT
- Title: Advancing Humor-Focused Sentiment Analysis through Improved
Contextualized Embeddings and Model Architecture
- Authors: Felipe Godoy
- Abstract summary: Humor allows us to express thoughts and feelings conveniently and effectively.
As language models become ubiquitous through virtual assistants and IoT devices, the need to develop humor-aware models rises exponentially.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Humor is a natural and fundamental component of human interactions. When
correctly applied, humor allows us to express thoughts and feelings
conveniently and effectively, increasing interpersonal affection, likeability,
and trust. However, understanding the use of humor is a computationally
challenging task from the perspective of humor-aware language processing
models. As language models become ubiquitous through virtual assistants and IoT
devices, the need to develop humor-aware models rises exponentially. To further
improve the state-of-the-art capacity to perform this particular
sentiment-analysis task, we must explore models that incorporate contextualized
and nonverbal elements in their design. Ideally, we seek architectures
accepting nonverbal elements as additional embedded inputs to the model,
alongside the original sentence-embedded input. This survey thus analyses the
current state of research in techniques for improved contextualized embeddings
incorporating nonverbal information, as well as newly proposed deep
architectures to improve context retention on top of popular word-embedding
methods.
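As a concrete illustration of the architecture the abstract calls for, below is a minimal sketch (not taken from the survey itself) of a classifier that accepts a nonverbal feature vector as a second embedded input alongside a sentence embedding; the PyTorch layer names, dimensions, and simple concatenation-based fusion are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the survey's method): fuse a sentence
# embedding with a nonverbal feature embedding for humor classification.
import torch
import torch.nn as nn


class MultimodalHumorClassifier(nn.Module):
    def __init__(self, text_dim=768, nonverbal_dim=128, hidden_dim=256):
        super().__init__()
        # Project each modality into a shared hidden space.
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.nonverbal_proj = nn.Linear(nonverbal_dim, hidden_dim)
        # Fuse the two projections and score humor vs. non-humor.
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2),
        )

    def forward(self, sentence_emb, nonverbal_emb):
        fused = torch.cat(
            [self.text_proj(sentence_emb), self.nonverbal_proj(nonverbal_emb)],
            dim=-1,
        )
        return self.classifier(fused)


# Usage with random placeholders for a batch of 4 examples.
model = MultimodalHumorClassifier()
sentence_emb = torch.randn(4, 768)   # e.g., a pooled BERT sentence embedding
nonverbal_emb = torch.randn(4, 128)  # e.g., prosodic or visual features
logits = model(sentence_emb, nonverbal_emb)
print(logits.shape)  # torch.Size([4, 2])
```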
Related papers
- Trustworthy Alignment of Retrieval-Augmented Large Language Models via Reinforcement Learning [84.94709351266557]
We focus on the trustworthiness of language models with respect to retrieval augmentation.
We deem that retrieval-augmented language models have the inherent capability of supplying responses according to both contextual and parametric knowledge.
Inspired by aligning language models with human preferences, we take a first step towards aligning retrieval-augmented language models to a state where they respond relying solely on external evidence.
arXiv Detail & Related papers (2024-10-22T09:25:21Z) - LOLgorithm: Integrating Semantic, Syntactic and Contextual Elements for Humor Classification [0.0]
We categorize features into syntactic, semantic, and contextual dimensions, including lexicons, structural statistics, Word2Vec, WordNet, and phonetic style.
Our proposed model, Colbert, utilizes BERT embeddings and parallel hidden layers to capture sentence congruity; an illustrative sketch of this parallel-layer idea appears after the related-papers list below.
SHAP interpretations and decision trees identify influential features, revealing that a holistic approach improves humor detection accuracy on unseen data.
arXiv Detail & Related papers (2024-08-12T17:52:11Z) - Development of Compositionality and Generalization through Interactive Learning of Language and Action of Robots [1.7624347338410742]
We propose a brain-inspired neural network model that integrates vision, proprioception, and language into a framework of predictive coding and active inference.
Our results show that generalization to unlearned verb-noun compositions is significantly enhanced when the training variations of task composition are increased.
arXiv Detail & Related papers (2024-03-29T06:22:37Z) - Systematic Literature Review: Computational Approaches for Humour Style
Classification [0.2455468619225742]
We study the landscape of computational techniques applied to binary humour and sarcasm recognition.
We identify potential research gaps and outline promising directions.
The SLR provides access to existing datasets related to humour and sarcasm, facilitating the work of future researchers.
arXiv Detail & Related papers (2024-01-30T16:21:47Z) - Emotion Rendering for Conversational Speech Synthesis with Heterogeneous
Graph-Based Context Modeling [50.99252242917458]
Conversational Speech Synthesis (CSS) aims to accurately express an utterance with the appropriate prosody and emotional inflection within a conversational setting.
To address the issue of data scarcity, we meticulously create emotional labels in terms of category and intensity.
Our model outperforms the baseline models in understanding and rendering emotions.
arXiv Detail & Related papers (2023-12-19T08:47:50Z) - Contextualized word senses: from attention to compositionality [0.10878040851637999]
We propose a transparent, interpretable, and linguistically motivated strategy for encoding the contextual sense of words.
Particular attention is given to dependency relations and semantic notions such as selection preferences and paradigmatic classes.
arXiv Detail & Related papers (2023-12-01T16:04:00Z) - Foundational Models Defining a New Era in Vision: A Survey and Outlook [151.49434496615427]
Vision systems that see and reason about the compositional nature of visual scenes are fundamental to understanding our world.
The models learned to bridge the gap between such modalities coupled with large-scale training data facilitate contextual reasoning, generalization, and prompt capabilities at test time.
The output of such models can be modified through human-provided prompts without retraining, e.g., segmenting a particular object by providing a bounding box, having interactive dialogues by asking questions about an image or video scene, or manipulating a robot's behavior through language instructions.
arXiv Detail & Related papers (2023-07-25T17:59:18Z) - Visually-Situated Natural Language Understanding with Contrastive
Reading Model and Frozen Large Language Models [24.456117679941816]
Contrastive Reading Model (Cream) is a novel neural architecture designed to enhance the language-image understanding capability of Large Language Models (LLMs).
Our approach bridges the gap between vision and language understanding, paving the way for the development of more sophisticated Document Intelligence Assistants.
arXiv Detail & Related papers (2023-05-24T11:59:13Z) - Interactive Natural Language Processing [67.87925315773924]
Interactive Natural Language Processing (iNLP) has emerged as a novel paradigm within the field of NLP.
This paper offers a comprehensive survey of iNLP, starting by proposing a unified definition and framework of the concept.
arXiv Detail & Related papers (2023-05-22T17:18:29Z) - Towards Multimodal Prediction of Spontaneous Humour: A Novel Dataset and First Results [84.37263300062597]
Humor is a substantial element of human social behavior, affect, and cognition.
Current methods of humor detection have been exclusively based on staged data, making them inadequate for "real-world" applications.
We contribute to addressing this deficiency by introducing the novel Passau-Spontaneous Football Coach Humor dataset, comprising about 11 hours of recordings.
arXiv Detail & Related papers (2022-09-28T17:36:47Z) - Compositional Generalization in Grounded Language Learning via Induced
Model Sparsity [81.38804205212425]
We consider simple language-conditioned navigation problems in a grid world environment with disentangled observations.
We design an agent that encourages sparse correlations between words in the instruction and attributes of objects, composing them together to find the goal.
Our agent maintains a high level of performance on goals containing novel combinations of properties even when learning from a handful of demonstrations.
arXiv Detail & Related papers (2022-07-06T08:46:27Z)
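As referenced in the LOLgorithm entry above, the following is a minimal illustrative sketch of the parallel-hidden-layer idea over BERT embeddings for sentence congruity; the segment count, dimensions, and layer names are assumptions, and this is not the authors' implementation.

```python
# Sketch (assumptions only): each sentence segment is assumed to arrive as its
# own BERT embedding, passes through its own hidden layer in parallel, and the
# outputs are combined with a whole-sentence path for classification.
import torch
import torch.nn as nn


class ParallelCongruityClassifier(nn.Module):
    def __init__(self, emb_dim=768, hidden_dim=128, num_segments=3):
        super().__init__()
        # One hidden layer per sentence segment, applied in parallel.
        self.segment_layers = nn.ModuleList(
            [
                nn.Sequential(nn.Linear(emb_dim, hidden_dim), nn.ReLU())
                for _ in range(num_segments)
            ]
        )
        # A separate path for the whole-sentence embedding.
        self.sentence_layer = nn.Sequential(nn.Linear(emb_dim, hidden_dim), nn.ReLU())
        self.classifier = nn.Linear(hidden_dim * (num_segments + 1), 2)

    def forward(self, segment_embs, sentence_emb):
        # segment_embs: (batch, num_segments, emb_dim); sentence_emb: (batch, emb_dim)
        parts = [
            layer(segment_embs[:, i, :])
            for i, layer in enumerate(self.segment_layers)
        ]
        parts.append(self.sentence_layer(sentence_emb))
        return self.classifier(torch.cat(parts, dim=-1))


# Usage with random placeholders: 2 examples, 3 segments, 768-dim embeddings.
model = ParallelCongruityClassifier()
logits = model(torch.randn(2, 3, 768), torch.randn(2, 768))
print(logits.shape)  # torch.Size([2, 2])
```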
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all content) and is not responsible for any consequences of its use.