Cluster-based Deep Ensemble Learning for Emotion Classification in Internet Memes
- URL: http://arxiv.org/abs/2302.08343v1
- Date: Thu, 16 Feb 2023 15:01:07 GMT
- Title: Cluster-based Deep Ensemble Learning for Emotion Classification in Internet Memes
- Authors: Xiaoyu Guo, Jing Ma, Arkaitz Zubiaga
- Abstract summary: We propose a novel model, cluster-based deep ensemble learning (CDEL), for emotion classification in memes.
CDEL is a hybrid model that leverages the benefits of a deep learning model in combination with a clustering algorithm.
We evaluate the performance of CDEL on a benchmark dataset for emotion classification.
- Score: 18.86848589288164
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Memes have gained popularity as a means to share visual ideas through the
Internet and social media by mixing text, images and videos, often for humorous
purposes. Research enabling automated analysis of memes has gained attention in
recent years, including, among others, the task of classifying the emotion
expressed in memes. In this paper, we propose a novel model, cluster-based deep
ensemble learning (CDEL), for emotion classification in memes. CDEL is a hybrid
model that leverages the benefits of a deep learning model in combination with
a clustering algorithm, which enhances the model with additional information
after clustering memes with similar facial features. We evaluate the
performance of CDEL on a benchmark dataset for emotion classification, proving
its effectiveness by outperforming a wide range of baseline models and
achieving state-of-the-art performance. Further evaluation through ablated
models demonstrates the effectiveness of the different components of CDEL.
Related papers
- Emotion Detection in Reddit: Comparative Study of Machine Learning and Deep Learning Techniques [0.0]
This study concentrates on text-based emotion detection by leveraging the GoEmotions dataset.
We employed a range of models for this task, including six machine learning models, three ensemble models, and a Long Short-Term Memory (LSTM) model.
Results indicate that the stacking classifier outperforms the other models in accuracy (a hedged sketch of such a stacking ensemble appears after this list).
arXiv Detail & Related papers (2024-11-15T16:28:25Z)
- ComKD-CLIP: Comprehensive Knowledge Distillation for Contrastive Language-Image Pre-training Model [49.587821411012705]
We propose ComKD-CLIP: Comprehensive Knowledge Distillation for Contrastive Language-Image Pre-training Model.
It distills the knowledge from a large teacher CLIP model into a smaller student model, ensuring comparable performance with significantly reduced parameters.
Its EduAttention module explores the cross-relationships between text features extracted by the teacher model and image features extracted by the student model.
arXiv Detail & Related papers (2024-08-08T01:12:21Z)
- Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework to reinforce classification models using language-guided generated counterfactual images.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
arXiv Detail & Related papers (2024-06-19T08:07:14Z)
- Explore In-Context Segmentation via Latent Diffusion Models [132.26274147026854]
The latent diffusion model (LDM) serves as an effective, minimalist approach to in-context segmentation.
We build a new and fair in-context segmentation benchmark that includes both image and video datasets.
arXiv Detail & Related papers (2024-03-14T17:52:31Z)
- Ensemble knowledge distillation of self-supervised speech models [84.69577440755457]
Distilled self-supervised models have shown competitive performance and efficiency in recent years.
We performed Ensemble Knowledge Distillation (EKD) on various self-supervised speech models such as HuBERT, RobustHuBERT, and WavLM.
Our method improves the performance of the distilled models on four downstream speech processing tasks.
arXiv Detail & Related papers (2023-02-24T17:15:39Z)
- Model LEGO: Creating Models Like Disassembling and Assembling Building Blocks [53.09649785009528]
In this paper, we explore a paradigm that does not require training to obtain new models.
Inspired by how CNNs arose from receptive fields in the biological visual system, we propose Model Disassembling and Assembling.
For model assembling, we present the alignment padding strategy and parameter scaling strategy to construct a new model tailored for a specific task.
arXiv Detail & Related papers (2022-03-25T05:27:28Z)
- Deep Relational Metric Learning [84.95793654872399]
This paper presents a deep relational metric learning framework for image clustering and retrieval.
We learn an ensemble of features that characterizes an image from different aspects to model both interclass and intraclass distributions.
Experiments on the widely-used CUB-200-2011, Cars196, and Stanford Online Products datasets demonstrate that our framework improves existing deep metric learning methods and achieves very competitive results.
arXiv Detail & Related papers (2021-08-23T09:31:18Z)
- Self-paced ensemble learning for speech and audio classification [19.39192082485334]
We propose a self-paced ensemble learning (SPEL) scheme in which models learn from each other over several iterations.
During the self-paced learning process, our ensemble also gains knowledge about the target domain.
Our empirical results indicate that SPEL significantly outperforms the baseline ensemble models.
arXiv Detail & Related papers (2021-03-22T16:34:06Z)
- Automatic Expansion of Domain-Specific Affective Models for Web Intelligence Applications [3.0012517171007755]
Sentic computing relies on well-defined affective models of different complexity.
Even the most granular affective model, combined with sophisticated machine learning approaches, may not fully capture an organisation's strategic positioning goals.
This article introduces expansion techniques for affective models, combining common and commonsense knowledge available in knowledge graphs with language models and affective reasoning.
arXiv Detail & Related papers (2021-02-01T13:32:35Z)