KdeHumor at SemEval-2020 Task 7: A Neural Network Model for Detecting Funniness in Dataset Humicroedit
- URL: http://arxiv.org/abs/2105.05135v1
- Date: Tue, 11 May 2021 15:44:03 GMT
- Title: KdeHumor at SemEval-2020 Task 7: A Neural Network Model for Detecting Funniness in Dataset Humicroedit
- Authors: Rida Miraj, Masaki Aono
- Abstract summary: Team KdeHumor employs recurrent neural network models, including Bi-Directional LSTMs (BiLSTMs).
We analyze the performance of our method and demonstrate the contribution of each component of our architecture.
- Score: 3.612189440297043
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper describes our contribution to SemEval-2020 Task 7: Assessing
Humor in Edited News Headlines. Here we present a method based on a deep neural
network. In recent years, considerable attention has been devoted to humor
production and perception. Our team KdeHumor employs recurrent neural network
models, including Bi-Directional LSTMs (BiLSTMs). Moreover, we utilize
state-of-the-art pre-trained sentence embedding techniques. We analyze the
performance of our method and demonstrate the contribution of each component of
our architecture.
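As a rough illustration of the kind of model the abstract describes, here is a minimal BiLSTM regressor in PyTorch. All dimensions, the mean pooling, and the regression head are my assumptions for the sketch, not details taken from the paper.

```python
# Minimal sketch of a BiLSTM funniness regressor (assumed architecture).
import torch
import torch.nn as nn

class HumorBiLSTM(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=128):
        super().__init__()
        # Embedding layer; could be initialized from pre-trained vectors.
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                              bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)   # regress a funniness grade

    def forward(self, token_ids):
        h, _ = self.bilstm(self.emb(token_ids))  # (batch, seq, 2*hidden)
        pooled = h.mean(dim=1)                   # simple mean pooling over time
        return self.head(pooled).squeeze(-1)

model = HumorBiLSTM(vocab_size=20000)
scores = model(torch.randint(0, 20000, (4, 12)))  # 4 headlines, 12 tokens each
```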
Related papers
- Simple and Effective Transfer Learning for Neuro-Symbolic Integration [50.592338727912946]
A potential solution to the shortcomings of purely neural approaches is Neuro-Symbolic Integration (NeSy), where neural networks are combined with symbolic reasoning.
Most of these methods exploit a neural network to map perceptions to symbols and a logical reasoner to predict the output of the downstream task.
They suffer from several issues, including slow convergence, learning difficulties with complex perception tasks, and convergence to local minima.
This paper proposes a simple yet effective method to ameliorate these problems.
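For background on the pipeline described above (a network maps perceptions to symbols, a symbolic program computes the output), here is a toy NeSy sketch. The digit-addition task and all shapes are illustrative assumptions, not this paper's setup or method.

```python
# Toy NeSy pipeline: neural perception -> symbol distributions -> symbolic sum.
import torch
import torch.nn as nn

perception = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # image -> digit logits

def nesy_addition(img_a, img_b):
    # Soft symbol grounding: probability distributions over digits 0-9.
    pa = perception(img_a).softmax(-1)
    pb = perception(img_b).softmax(-1)
    # Symbolic reasoner: P(sum = s) = sum over a+b=s of P(a) * P(b).
    sums = torch.zeros(img_a.shape[0], 19)
    for a in range(10):
        for b in range(10):
            sums[:, a + b] += pa[:, a] * pb[:, b]
    return sums  # supervised with the label of the sum only

probs = nesy_addition(torch.randn(2, 1, 28, 28), torch.randn(2, 1, 28, 28))
```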
arXiv Detail & Related papers (2024-02-21T15:51:01Z)
- Sparse Multitask Learning for Efficient Neural Representation of Motor Imagery and Execution [30.186917337606477]
We introduce a sparse multitask learning framework for motor imagery (MI) and motor execution (ME) tasks.
Given a dual-task CNN model for MI-ME classification, we apply a saliency-based sparsification approach to prune superfluous connections.
Our results indicate that this tailored sparsity can mitigate overfitting and improve test performance with a small amount of data.
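A minimal sketch of saliency-based pruning as described above: score each weight, then zero out the lowest-scoring fraction via a fixed mask. Using |w * grad| as the saliency proxy is my assumption; the paper's criterion and the dual-task MI-ME model are not reproduced here.

```python
# Saliency-based magnitude pruning (assumed |w * grad| criterion).
import torch

def prune_by_saliency(weight, grad, sparsity=0.5):
    saliency = (weight * grad).abs().flatten()
    k = int(sparsity * saliency.numel())
    threshold = saliency.kthvalue(k).values            # k-th smallest saliency
    mask = (saliency > threshold).view_as(weight).float()
    return weight * mask, mask                         # re-apply mask after each update

w = torch.randn(64, 32, requires_grad=True)
loss = w.sum() ** 2
loss.backward()
pruned_w, mask = prune_by_saliency(w.detach(), w.grad)
```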
arXiv Detail & Related papers (2023-12-10T09:06:16Z)
- Bio-realistic Neural Network Implementation on Loihi 2 with Izhikevich Neurons [0.10499611180329801]
We present a bio-realistic basal ganglia neural network and its integration into Intel's Loihi neuromorphic processor to perform a simple Go/No-Go task.
We use the Izhikevich neuron model, implemented as microcode, instead of the Leaky Integrate-and-Fire neuron model that has built-in support on Loihi.
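For reference, the Izhikevich model mentioned above in plain Python. These are the standard published equations (Izhikevich, 2003) with regular-spiking parameters; the Loihi 2 microcode implementation itself is not shown here.

```python
# One Euler step of the Izhikevich neuron model.
def izhikevich_step(v, u, I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0):
    v_new = v + dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u_new = u + dt * a * (b * v - u)
    if v_new >= 30.0:             # spike: reset membrane and recovery variables
        return c, u_new + d, True
    return v_new, u_new, False

v, u = -65.0, -13.0               # resting state with u = b * v
for t in range(100):
    v, u, spiked = izhikevich_step(v, u, I=10.0)
```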
arXiv Detail & Related papers (2023-07-21T18:28:17Z)
- Learning Personal Representations from fMRI by Predicting Neurofeedback Performance [52.77024349608834]
We present a deep neural network method for learning a personal representation for individuals performing a self-neuromodulation task, guided by functional MRI (fMRI).
The representation is learned by a self-supervised recurrent neural network that predicts amygdala activity in the next fMRI frame given recent fMRI frames, conditioned on the learned individual representation.
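A schematic version of this self-supervised setup: a recurrent network predicts the next frame of a target signal from recent frames, conditioned on a learned per-subject embedding. The GRU choice and all layer sizes are my assumptions for illustration.

```python
# Next-frame prediction conditioned on a learned subject embedding (sketch).
import torch
import torch.nn as nn

class NextFramePredictor(nn.Module):
    def __init__(self, n_subjects, frame_dim=64, subj_dim=16, hidden=128):
        super().__init__()
        self.subject_emb = nn.Embedding(n_subjects, subj_dim)  # "personal representation"
        self.rnn = nn.GRU(frame_dim + subj_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, frame_dim)

    def forward(self, frames, subject_id):
        # frames: (batch, time, frame_dim); condition every step on the subject code.
        s = self.subject_emb(subject_id).unsqueeze(1).expand(-1, frames.size(1), -1)
        h, _ = self.rnn(torch.cat([frames, s], dim=-1))
        return self.out(h[:, -1])                               # predict the next frame

model = NextFramePredictor(n_subjects=50)
pred = model(torch.randn(8, 10, 64), torch.randint(0, 50, (8,)))
```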
arXiv Detail & Related papers (2021-12-06T10:16:54Z)
- Neuronal Learning Analysis using Cycle-Consistent Adversarial Networks [4.874780144224057]
We use CycleGAN, a variant of deep generative models, to learn the unknown mapping between pre- and post-learning neural activities.
We develop an end-to-end pipeline to preprocess, train and evaluate calcium fluorescence signals, and a procedure to interpret the resulting deep learning models.
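The core cycle-consistency idea behind CycleGAN, in a few lines: two mappings G (pre to post) and F (post to pre) are trained so that F(G(x)) stays close to x and G(F(y)) close to y, alongside the usual adversarial terms (omitted here). The MLP generators over activity vectors are an assumption, not the paper's architecture.

```python
# Cycle-consistency loss between pre- and post-learning activity (sketch).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 100))  # pre -> post
F = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 100))  # post -> pre

def cycle_loss(x_pre, y_post):
    l1 = nn.functional.l1_loss
    return l1(F(G(x_pre)), x_pre) + l1(G(F(y_post)), y_post)

loss = cycle_loss(torch.randn(32, 100), torch.randn(32, 100))
```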
arXiv Detail & Related papers (2021-11-25T13:24:19Z)
- PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
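PredRNN's own spatiotemporal-LSTM cell with its zigzag memory flow is more involved than can be shown here; below is a plain ConvLSTM cell, the generic building block for this kind of recurrent visual-dynamics model, offered only as a hedged reference point and not as the paper's cell.

```python
# Generic ConvLSTM cell for spatiotemporal prediction (not PredRNN's ST-LSTM).
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        # One convolution produces all four gates at once.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, h, c):
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

cell = ConvLSTMCell(1, 16)
h = c = torch.zeros(2, 16, 32, 32)
for frame in torch.randn(10, 2, 1, 32, 32):   # iterate over 10 time steps
    h, c = cell(frame, h, c)
```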
arXiv Detail & Related papers (2021-03-17T08:28:30Z)
- JokeMeter at SemEval-2020 Task 7: Convolutional humor [6.853018135783218]
This paper describes our system designed for humor evaluation within SemEval-2020 Task 7.
The system is based on convolutional neural network architecture.
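A compact text CNN of the kind this description suggests: convolutions over word embeddings, max-pooling, and a linear head. The filter widths and dimensions are illustrative assumptions, not JokeMeter's actual configuration.

```python
# Text CNN over word embeddings for a humor score (assumed configuration).
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, vocab=20000, emb=128, filters=64, widths=(2, 3, 4)):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.convs = nn.ModuleList(nn.Conv1d(emb, filters, w) for w in widths)
        self.head = nn.Linear(filters * len(widths), 1)   # funniness score

    def forward(self, ids):
        x = self.emb(ids).transpose(1, 2)                 # (batch, emb, seq)
        feats = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return self.head(torch.cat(feats, dim=1)).squeeze(-1)

score = TextCNN()(torch.randint(0, 20000, (4, 15)))
```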
arXiv Detail & Related papers (2020-08-25T14:27:58Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
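For intuition only, the standard two-layer mean-field limit, a simpler case than the deep-network framework above and stated here purely as background: as the width $m \to \infty$,

$$
f_m(x) = \frac{1}{m}\sum_{j=1}^{m} a_j\,\sigma\!\left(w_j^\top x\right)
\;\longrightarrow\;
f_\rho(x) = \int a\,\sigma\!\left(w^\top x\right)\,\mathrm{d}\rho(a, w),
$$

so the network is described by a probability measure $\rho$ over parameters, and training can be analyzed as a gradient flow on $\rho$ rather than on individual weights.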
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Recurrent Neural Network Learning of Performance and Intrinsic Population Dynamics from Sparse Neural Data [77.92736596690297]
We introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics.
We test the proposed method by training an RNN to simultaneously reproduce the internal dynamics and output signals of a physiologically inspired neural model.
Remarkably, we show that the reproduction of the internal dynamics is successful even when the training algorithm relies on the activities of a small subset of neurons.
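A sketch of the dual objective this describes: fit the RNN's outputs to target outputs while also fitting its hidden states to recorded activity from a small observed subset of units. The equal weighting of the two terms and all sizes are my assumptions.

```python
# Joint loss: match outputs and the activity of 10 "recorded" hidden units.
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=8, hidden_size=100, batch_first=True)
readout = nn.Linear(100, 2)
observed = torch.randperm(100)[:10]            # indices of the observed units

def loss(inputs, target_out, target_act):
    h, _ = rnn(inputs)                          # (batch, time, hidden)
    out_loss = nn.functional.mse_loss(readout(h), target_out)
    dyn_loss = nn.functional.mse_loss(h[..., observed], target_act)
    return out_loss + dyn_loss

l = loss(torch.randn(4, 50, 8), torch.randn(4, 50, 2), torch.randn(4, 50, 10))
```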
arXiv Detail & Related papers (2020-05-05T14:16:54Z)
- Synaptic Metaplasticity in Binarized Neural Networks [4.243926243206826]
Deep neural networks are prone to catastrophic forgetting upon training a new task.
We propose and demonstrate experimentally, in situations of multitask and stream learning, a training technique that reduces catastrophic forgetting without needing previously presented data.
This work bridges computational neuroscience and deep learning, and presents significant assets for future embedded and neuromorphic systems.
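A sketch of the metaplastic idea: in a binarized network the forward-pass weights are signs of hidden real-valued weights, and updates that would shrink a hidden weight's magnitude are attenuated the larger that magnitude is, so strongly consolidated synapses resist overwriting. The specific attenuation $1 - \tanh^2(m\,w)$ follows the metaplasticity literature but should be treated as an assumption here, not this paper's exact rule.

```python
# Metaplastic update for hidden weights of a binarized network (sketch).
import numpy as np

def metaplastic_update(w_hidden, grad, lr=0.01, m=1.3):
    step = -lr * grad
    shrinking = np.sign(step) != np.sign(w_hidden)   # update opposes current sign
    factor = np.where(shrinking,
                      1.0 - np.tanh(m * np.abs(w_hidden)) ** 2,  # attenuate
                      1.0)                                       # pass through
    return w_hidden + factor * step

w = np.random.randn(5)
w = metaplastic_update(w, grad=np.random.randn(5))
binary_w = np.sign(w)                                # weights used in the forward pass
```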
arXiv Detail & Related papers (2020-03-07T08:09:34Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
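A numeric check of the XOR claim above, with a stand-in non-monotonic activation phi(x) = max(0, x) * exp(1 - x); the paper's exact ADA formula differs, but any bump-shaped activation lets a single neuron separate XOR, since the two "true" inputs land on the bump's peak while (0,0) and (1,1) fall below it.

```python
# One neuron with a bump-shaped activation solves XOR.
import numpy as np

def phi(x):
    return np.maximum(0.0, x) * np.exp(1.0 - x)   # peak at x = 1

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
w, b, threshold = np.array([1.0, 1.0]), 0.0, 0.85
pred = (phi(X @ w + b) > threshold).astype(int)
print(pred)   # [0 1 1 0] -- XOR from a single neuron
```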
arXiv Detail & Related papers (2020-02-02T21:09:39Z)