CrowdTSC: Crowd-based Neural Networks for Text Sentiment Classification
- URL: http://arxiv.org/abs/2004.12389v1
- Date: Sun, 26 Apr 2020 14:08:15 GMT
- Title: CrowdTSC: Crowd-based Neural Networks for Text Sentiment Classification
- Authors: Keyu Yang, Yunjun Gao, Lei Liang, Song Bian, Lu Chen, Baihua Zheng
- Abstract summary: We propose Crowd-based neural networks for Text Sentiment Classification (CrowdTSC)
We design and post the questions on a crowdsourcing platform to collect the keywords in texts.
We present an attention-based neural network and a hybrid neural network, which incorporate the collected keywords as human beings' guidance.
- Score: 26.75362694599748
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sentiment classification is a fundamental task in content analysis. Although
deep learning has demonstrated promising performance in text classification
compared with shallow models, it is still unable to train a satisfactory
classifier for text sentiment. Human beings are more sophisticated than machine
learning models in terms of understanding and capturing the emotional
polarities of texts. In this paper, we bring the power of human intelligence
to text sentiment classification. We propose Crowd-based neural networks for
Text Sentiment Classification (CrowdTSC for short). We design and post the
questions on a crowdsourcing platform to collect the keywords in texts.
Sampling and clustering are utilized to reduce the cost of crowdsourcing. Also,
we present an attention-based neural network and a hybrid neural network, which
incorporate the collected keywords as human beings' guidance into deep neural
networks. Extensive experiments on public datasets confirm that CrowdTSC
outperforms state-of-the-art models, justifying the effectiveness of
crowd-based keyword guidance.
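As a rough illustration of the idea, crowd-collected keywords can be folded into an attention layer by boosting the attention logits of keyword positions before the softmax. The sketch below is a minimal, hypothetical NumPy version of such keyword-guided attention; the query vector, boost scheme, and dimensions are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def keyword_guided_attention(word_vecs, keyword_mask, query, boost=2.0):
    """Attention-weighted sentence vector with crowd keywords boosted.

    word_vecs:    (n_words, d) word embeddings
    keyword_mask: (n_words,) 1.0 where the crowd flagged a keyword, else 0.0
    query:        (d,) attention query vector (learned in a real model)
    boost:        additive bonus applied to keyword logits (assumed scheme)
    """
    scores = word_vecs @ query              # raw relevance score per word
    scores = scores + boost * keyword_mask  # lift crowd-flagged positions
    weights = softmax(scores)               # attention distribution over words
    return weights @ word_vecs, weights     # sentence vector, attention weights

rng = np.random.default_rng(0)
word_vecs = rng.normal(size=(5, 8))             # 5 words, 8-dim embeddings
query = rng.normal(size=8)                      # randomly initialized query
keyword_mask = np.array([0., 1., 0., 0., 1.])   # crowd flagged words 1 and 4

sent_vec, weights = keyword_guided_attention(word_vecs, keyword_mask, query)
print(weights.round(3), weights.sum())
```

Because the boost is added before the softmax, the crowd-flagged words always receive strictly more total attention mass than they would without guidance, while the distribution still sums to one.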
Related papers
- A Simple Attention-Based Mechanism for Bimodal Emotion Classification [0.0]
We present novel bimodal deep learning-based architectures, enhanced with an attention mechanism and trained and tested on text and speech data, for emotion classification.
Our findings suggest that deep learning-based architectures trained on both types of data (text and speech) outperform architectures trained only on text or speech.
arXiv Detail & Related papers (2024-06-28T10:43:02Z)
- Saliency Suppressed, Semantics Surfaced: Visual Transformations in Neural Networks and the Brain [0.0]
We take inspiration from neuroscience to shed light on how neural networks encode information at low (visual saliency) and high (semantic similarity) levels of abstraction.
We find that ResNets are more sensitive to saliency information than ViTs, when trained with object classification objectives.
We show that semantic encoding is a key factor in aligning AI with human visual perception, while saliency suppression is a non-brain-like strategy.
arXiv Detail & Related papers (2024-04-29T15:05:42Z)
- Seeing in Words: Learning to Classify through Language Bottlenecks [59.97827889540685]
Humans can explain their predictions using succinct and intuitive descriptions.
We show that a vision model whose feature representations are text can effectively classify ImageNet images.
arXiv Detail & Related papers (2023-06-29T00:24:42Z)
- TeKo: Text-Rich Graph Neural Networks with External Knowledge [75.91477450060808]
We propose a novel text-rich graph neural network with external knowledge (TeKo)
We first present a flexible heterogeneous semantic network that incorporates high-quality entities.
We then introduce two types of external knowledge, that is, structured triplets and unstructured entity descriptions.
arXiv Detail & Related papers (2022-06-15T02:33:10Z)
- Khmer Text Classification Using Word Embedding and Neural Networks [0.0]
We discuss various classification approaches for Khmer text.
A Khmer word embedding model is trained on a 30-million-Khmer-word corpus to construct word vector representations.
We evaluate the performance of different approaches on a news article dataset for both multi-class and multi-label text classification tasks.
arXiv Detail & Related papers (2021-12-13T15:57:32Z)
- Open Vocabulary Electroencephalography-To-Text Decoding and Zero-shot Sentiment Classification [78.120927891455]
State-of-the-art brain-to-text systems have achieved great success in decoding language directly from brain signals using neural networks.
In this paper, we extend the problem to open-vocabulary Electroencephalography (EEG)-To-Text sequence-to-sequence decoding and zero-shot sentence sentiment classification on natural reading tasks.
Our model achieves a 40.1% BLEU-1 score on EEG-To-Text decoding and a 55.6% F1 score on zero-shot EEG-based ternary sentiment classification, significantly outperforming supervised baselines.
arXiv Detail & Related papers (2021-12-05T21:57:22Z)
- Computing Class Hierarchies from Classifiers [12.631679928202516]
We propose a novel algorithm for automatically acquiring a class hierarchy from a neural network.
Our algorithm produces surprisingly good hierarchies for some well-known deep neural network models.
arXiv Detail & Related papers (2021-12-02T13:01:04Z)
- Sentiment analysis in tweets: an assessment study from classical to modern text representation models [59.107260266206445]
Short texts published on Twitter have earned significant attention as a rich source of information.
Their inherent characteristics, such as an informal and noisy linguistic style, remain challenging for many natural language processing (NLP) tasks.
This study presents an assessment of existing language models in distinguishing the sentiment expressed in tweets, using a rich collection of 22 datasets.
arXiv Detail & Related papers (2021-05-29T21:05:28Z)
- Be More with Less: Hypergraph Attention Networks for Inductive Text Classification [56.98218530073927]
Graph neural networks (GNNs) have received increasing attention in the research community and demonstrated promising results on the canonical task of text classification.
Despite the success, their performance could be largely jeopardized in practice since they are unable to capture high-order interaction between words.
We propose a principled model -- hypergraph attention networks (HyperGAT) which can obtain more expressive power with less computational consumption for text representation learning.
arXiv Detail & Related papers (2020-11-01T00:21:59Z)
- Understanding the Role of Individual Units in a Deep Neural Network [85.23117441162772]
We present an analytic framework to systematically identify hidden units within image classification and image generation networks.
First, we analyze a convolutional neural network (CNN) trained on scene classification and discover units that match a diverse set of object concepts.
Second, we use a similar analytic method to analyze a generative adversarial network (GAN) model trained to generate scenes.
arXiv Detail & Related papers (2020-09-10T17:59:10Z)
- Short Text Classification via Knowledge powered Attention with Similarity Matrix based CNN [6.6723692875904375]
We propose a knowledge powered attention with similarity matrix based convolutional neural network (KASM) model.
We use a knowledge graph (KG) to enrich the semantic representation of short text; specifically, parent-entity information is introduced into our model.
To measure the importance of knowledge, we introduce attention mechanisms to select the important information.
arXiv Detail & Related papers (2020-02-09T12:08:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.