ColBERT: Using BERT Sentence Embedding in Parallel Neural Networks for
Computational Humor
- URL: http://arxiv.org/abs/2004.12765v7
- Date: Thu, 1 Dec 2022 16:02:32 GMT
- Title: ColBERT: Using BERT Sentence Embedding in Parallel Neural Networks for
Computational Humor
- Authors: Issa Annamoradnejad and Gohar Zoghi
- Abstract summary: We propose a novel approach for detecting and rating humor in short texts based on a popular linguistic theory of humor.
The proposed method first separates the sentences of the given text and uses the BERT model to generate an embedding for each one.
We accompany the paper with a novel dataset for humor detection consisting of 200,000 formal short texts.
The proposed model obtained F1 scores of 0.982 and 0.869 in the humor detection experiments, outperforming general and state-of-the-art models.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automation of humor detection and rating has interesting use cases in modern
technologies, such as humanoid robots, chatbots, and virtual assistants. In
this paper, we propose a novel approach for detecting and rating humor in short
texts based on a popular linguistic theory of humor. The proposed method
first separates the sentences of the given text and uses the BERT model to
generate an embedding for each one. The embeddings are fed to separate lines
of hidden layers in a neural network (one line per sentence) to extract
latent features. Finally, the parallel lines are concatenated to determine
the congruity and other relationships between the sentences and to predict
the target value. We accompany the paper with a novel
dataset for humor detection consisting of 200,000 formal short texts. In
addition to evaluating our work on the novel dataset, we participated in a live
machine learning competition focused on rating humor in Spanish tweets. The
proposed model obtained F1 scores of 0.982 and 0.869 in the humor detection
experiments, outperforming general and state-of-the-art models. The
evaluation performed in two contrasting settings confirms the strength and
robustness of the model and suggests two important factors in achieving high
accuracy on this task: 1) using sentence embeddings and 2) leveraging
the linguistic structure of humor in designing the proposed model.
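The parallel architecture described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the BERT sentence embedding is replaced by a deterministic random stub, and the hidden-layer width and number of sentence lines are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

EMB_DIM = 768   # size of a BERT sentence embedding
HIDDEN = 32     # width of each parallel hidden line (assumed)
N_SENT = 3      # number of sentence lines fed in parallel (assumed)

def embed_sentence(sentence: str) -> np.ndarray:
    """Stand-in for a BERT sentence embedding (deterministic stub)."""
    seed = sum(map(ord, sentence)) % (2**32)
    return np.random.default_rng(seed).standard_normal(EMB_DIM)

def relu(x):
    return np.maximum(0.0, x)

# One weight matrix per parallel line, plus a final head over the
# concatenated line outputs (single hidden layer per line for brevity).
W_lines = [rng.standard_normal((EMB_DIM, HIDDEN)) * 0.01 for _ in range(N_SENT)]
W_out = rng.standard_normal(N_SENT * HIDDEN) * 0.01

def predict(sentences):
    # 1) embed each sentence, 2) pass each embedding through its own
    # hidden line, 3) concatenate the lines to relate the sentences,
    # 4) sigmoid for the humor probability.
    feats = [relu(embed_sentence(s) @ W) for s, W in zip(sentences, W_lines)]
    z = np.concatenate(feats) @ W_out
    return 1.0 / (1.0 + np.exp(-z))

p = predict(["A man walks into a bar.", "Ouch.", "It was an iron bar."])
print(round(float(p), 3))
```

The key design point is that each sentence keeps its own feature path until the final concatenation, which is where inter-sentence congruity can be modeled.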
Related papers
- CoheSentia: A Novel Benchmark of Incremental versus Holistic Assessment
of Coherence in Generated Texts [15.866519123942457]
We introduce CoheSentia, a novel benchmark of human-perceived coherence of automatically generated texts.
Our benchmark contains 500 automatically generated, human-annotated paragraphs, each annotated with both methods.
Our analysis shows that the inter-annotator agreement in the incremental mode is higher than in the holistic alternative.
arXiv Detail & Related papers (2023-10-25T03:21:20Z)
- MISMATCH: Fine-grained Evaluation of Machine-generated Text with Mismatch Error Types [68.76742370525234]
We propose a new evaluation scheme to model human judgments in 7 NLP tasks, based on the fine-grained mismatches between a pair of texts.
Inspired by the recent efforts in several NLP tasks for fine-grained evaluation, we introduce a set of 13 mismatch error types.
We show that the mismatch errors between the sentence pairs on the held-out datasets from 7 NLP tasks align well with the human evaluation.
arXiv Detail & Related papers (2023-06-18T01:38:53Z)
- On the Possibilities of AI-Generated Text Detection [76.55825911221434]
We argue that as machine-generated text approximates human-like quality, the sample size needed for detection bounds increases.
We test various state-of-the-art text generators, including GPT-2, GPT-3.5-Turbo, Llama, Llama-2-13B-Chat-HF, and Llama-2-70B-Chat-HF, against detectors including RoBERTa-Large/Base-Detector and GPTZero.
arXiv Detail & Related papers (2023-04-10T17:47:39Z)
- Towards Multimodal Prediction of Spontaneous Humour: A Novel Dataset and First Results [84.37263300062597]
Humor is a substantial element of human social behavior, affect, and cognition.
Current methods of humor detection have been exclusively based on staged data, making them inadequate for "real-world" applications.
We contribute to addressing this deficiency by introducing the novel Passau-Spontaneous Football Coach Humor dataset, comprising about 11 hours of recordings.
arXiv Detail & Related papers (2022-09-28T17:36:47Z)
- Integrating extracted information from BERT and multiple embedding methods with the deep neural network for humour detection [3.612189440297043]
We propose a framework for humour detection in short texts taken from news headlines.
Our proposed framework (IBEN) attempts to extract information from written text via the use of different layers of BERT.
The extracted information is then fed to a Bi-GRU neural network as an embedding matrix.
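The embedding matrix handed to the Bi-GRU can be illustrated at the shape level. This is a hypothetical sketch: the layer count, sequence length, and the choice of stacking the last four BERT layers are assumptions, not details from the paper, and the layer outputs are zero placeholders rather than real BERT activations.

```python
import numpy as np

# Hypothetical shapes: 12 BERT layers, sequence length 20, hidden size 768.
N_LAYERS, SEQ_LEN, HID = 12, 20, 768

# Stand-in for the per-layer hidden states BERT returns for one headline.
layer_outputs = [np.zeros((SEQ_LEN, HID)) for _ in range(N_LAYERS)]

# IBEN-style embedding matrix: stack selected layers along the feature
# axis, giving the Bi-GRU one (SEQ_LEN, n_selected * HID) input per text.
selected = [layer_outputs[i] for i in (-4, -3, -2, -1)]  # last four layers (assumed)
emb_matrix = np.concatenate(selected, axis=-1)
print(emb_matrix.shape)  # (20, 3072)
```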
arXiv Detail & Related papers (2021-05-11T15:09:19Z)
- Explaining Neural Network Predictions on Sentence Pairs via Learning Word-Group Masks [21.16662651409811]
We propose the Group Mask (GMASK) method to implicitly detect word correlations by grouping correlated words from the input text pair together.
The proposed method is evaluated with two different model architectures (decomposable attention model and BERT) across four datasets.
arXiv Detail & Related papers (2021-04-09T17:14:34Z)
- Dutch Humor Detection by Generating Negative Examples [5.888646114353371]
Humor detection is usually modeled as a binary classification task, trained to predict if the given text is a joke or another type of text.
We propose using text generation algorithms for imitating the original joke dataset to increase the difficulty for the learning algorithm.
We compare the humor detection capabilities of classic neural network approaches with the state-of-the-art Dutch language model RobBERT.
arXiv Detail & Related papers (2020-10-26T15:15:10Z)
- Explicit Alignment Objectives for Multilingual Bidirectional Encoders [111.65322283420805]
We present a new method for learning multilingual encoders, AMBER (Aligned Multilingual Bi-directional EncodeR).
AMBER is trained on additional parallel data using two explicit alignment objectives that align the multilingual representations at different granularities.
Experimental results show that AMBER obtains gains of up to 1.1 average F1 score on sequence tagging and up to 27.3 average accuracy on retrieval over the XLMR-large model.
arXiv Detail & Related papers (2020-10-15T18:34:13Z)
- Improving Text Generation with Student-Forcing Optimal Transport [122.11881937642401]
We propose using optimal transport (OT) to match the sequences generated in training and testing modes.
An extension is also proposed to improve the OT learning, based on the structural and contextual information of the text sequences.
The effectiveness of the proposed method is validated on machine translation, text summarization, and text generation tasks.
arXiv Detail & Related papers (2020-10-12T19:42:25Z)
- Predicting the Humorousness of Tweets Using Gaussian Process Preference Learning [56.18809963342249]
We present a probabilistic approach that learns to rank and rate the humorousness of short texts by exploiting human preference judgments and automatically sourced linguistic annotations.
We report system performance for the campaign's two subtasks, humour detection and funniness score prediction, and discuss some issues arising from the conversion between the numeric scores used in the HAHA@IberLEF 2019 data and the pairwise judgment annotations required for our method.
arXiv Detail & Related papers (2020-08-03T13:05:42Z)
- Generating Hierarchical Explanations on Text Classification via Feature Interaction Detection [21.02924712220406]
We build hierarchical explanations by detecting feature interactions.
Such explanations visualize how words and phrases are combined at different levels of the hierarchy.
Experiments show the effectiveness of the proposed method in providing explanations both faithful to models and interpretable to humans.
arXiv Detail & Related papers (2020-04-04T20:56:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.