A Data-driven Neural Network Architecture for Sentiment Analysis
- URL: http://arxiv.org/abs/2006.16642v1
- Date: Tue, 30 Jun 2020 10:08:36 GMT
- Title: A Data-driven Neural Network Architecture for Sentiment Analysis
- Authors: Erion Çano and Maurizio Morisio
- Abstract summary: We present the creation steps of two large datasets of song emotions.
We also explore the use of convolutional and max-pooling layers on song lyrics, product review, and movie review text datasets.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The impressive results of convolutional neural networks on image-related
tasks have attracted the attention of researchers in text mining, sentiment
analysis, and other text analysis fields. It is, however, difficult to find enough
data for feeding such networks, to optimize their parameters, and to make the
right design choices when constructing network architectures. In this paper we
present the creation steps of two big datasets of song emotions. We also explore
the use of convolutional and max-pooling layers on song lyrics, product review,
and movie review text datasets. Three variants of a simple and flexible neural
network architecture are also compared. Our intention was to spot any important
patterns that can serve as guidelines for parameter optimization of similar
models. We also wanted to identify architecture design choices that lead to
high-performing sentiment analysis models. To this end, we conducted a series of
experiments with neural architectures of various configurations. Our results
indicate that parallel convolutions with filter lengths up to three are usually
enough for capturing the relevant text features. Also, the max-pooling region
size should be adapted to the length of the text documents to produce the best
feature maps; our top results were obtained with feature maps of lengths 6 to 18.
A possible improvement for future sentiment analysis models could be to generate
the sentiment polarity prediction of a document by aggregating predictions over
smaller excerpts of the entire text.
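The architecture pattern the abstract describes — parallel convolutions with filter lengths up to three, each followed by max-pooling whose region size is adapted to the document length so the feature maps land around length 6 to 18 — can be sketched in plain NumPy. All function names and the toy one-dimensional "document" below are illustrative assumptions, not code from the paper:

```python
import numpy as np

def conv1d_valid(x, w):
    """Valid 1-D convolution (cross-correlation) of sequence x with filter w."""
    k = len(w)
    return np.array([np.dot(x[i:i + k], w) for i in range(len(x) - k + 1)])

def max_pool(x, region):
    """Non-overlapping max-pooling; a trailing remainder shorter than region is dropped."""
    n = len(x) // region
    return x[:n * region].reshape(n, region).max(axis=1)

def parallel_conv_features(x, filter_lengths=(1, 2, 3), target_map_len=12, rng=None):
    """Parallel convolutions of small filter lengths, each pooled down to roughly
    target_map_len values (within the 6-18 range the abstract reports as best)."""
    rng = rng or np.random.default_rng(0)
    maps = []
    for k in filter_lengths:
        w = rng.standard_normal(k)                     # stand-in for a learned filter
        conv = conv1d_valid(x, w)
        region = max(1, len(conv) // target_map_len)   # adapt pooling to document length
        maps.append(max_pool(conv, region))
    return np.concatenate(maps)

# Toy "document": a length-100 sequence (real models would convolve over word embeddings).
doc = np.random.default_rng(1).standard_normal(100)
feats = parallel_conv_features(doc)  # three feature maps of length 12 each
```

The key design point is that `region` is derived from the convolution output length rather than fixed, so short lyrics and long reviews both yield feature maps of comparable size before the classification layers.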
Related papers
- Designing deep neural networks for driver intention recognition (arXiv, 2024-02-07)
  This paper applies neural architecture search to investigate the effects of the deep neural network architecture on a real-world safety-critical application. A set of eight search strategies is evaluated on two driver intention recognition datasets.
- Set-based Neural Network Encoding Without Weight Tying (arXiv, 2023-05-26)
  We propose a neural network weight encoding method for network property prediction. Our approach is capable of encoding neural networks in a model zoo of mixed architectures. We introduce two new tasks for neural network property prediction: cross-dataset and cross-architecture.
- A Local Optima Network Analysis of the Feedforward Neural Architecture Space (arXiv, 2022-06-02)
  Local optima network (LON) analysis is a derivative of the fitness landscape of candidate solutions. LONs may provide a viable paradigm for analysing and optimising neural architectures.
- Dive into Layers: Neural Network Capacity Bounding using Algebraic Geometry (arXiv, 2021-09-03)
  We show that the learnability of a neural network is directly related to its size. We use Betti numbers to measure the topological geometric complexity of the input data and the neural network. We perform experiments on the real-world dataset MNIST, and the results verify our analysis and conclusions.
- Improving Graph Neural Networks with Simple Architecture Design (arXiv, 2021-05-17)
  We introduce several key design strategies for graph neural networks. We present a simple and shallow model, Feature Selection Graph Neural Network (FSGNN). We show that the proposed model outperforms other state-of-the-art GNN models and achieves up to 64% improvement in accuracy on node classification tasks.
- Rethinking Graph Neural Network Search from Message-passing (arXiv, 2021-03-26)
  This paper proposes Graph Neural Architecture Search (GNAS) with a novel search space. We design the Graph Neural Architecture Paradigm (GAP) with a tree-topology computation procedure and two types of fine-grained atomic operations. Experiments show that GNAS can search for better GNNs with multiple message-passing mechanisms and optimal message-passing depth.
- NAS-Navigator: Visual Steering for Explainable One-Shot Deep Neural Network Synthesis (arXiv, 2020-09-28)
  We present a framework that allows analysts to effectively build the solution sub-graph space and guide the network search by injecting their domain knowledge. Applying this technique iteratively allows analysts to converge on the best-performing neural network architecture for a given application.
- A Semi-Supervised Assessor of Neural Architectures (arXiv, 2020-05-14)
  We employ an auto-encoder to discover meaningful representations of neural architectures. A graph convolutional neural network is introduced to predict the performance of architectures.
- Deep Learning Approach for Enhanced Cyber Threat Indicators in Twitter Stream (arXiv, 2020-03-31)
  This work proposes a deep learning based approach for tweet data analysis. Various text representations are employed to convert the tweets into numerical form. For comparative analysis, a classical text representation method with a classical machine learning algorithm is also employed.
- Analyzing Neural Networks Based on Random Graphs (arXiv, 2020-02-19)
  We perform a massive evaluation of neural networks with architectures corresponding to random graphs of various types. We find that no classical numerical graph invariant by itself allows singling out the best networks. We also find that networks with primarily short-range connections perform better than networks that allow for many long-range connections.
- Inferring Convolutional Neural Networks' accuracies from their architectural characterizations (arXiv, 2020-01-07)
  We study the relationships between a CNN's architecture and its performance. We show that the attributes can be predictive of the network's performance in two specific computer vision-based physics problems. We use machine learning models to predict whether a network can perform better than a certain threshold accuracy before training.
This list is automatically generated from the titles and abstracts of the papers on this site.