Structuralist analysis for neural network system diagrams
- URL: http://arxiv.org/abs/2104.14810v1
- Date: Fri, 30 Apr 2021 07:50:19 GMT
- Title: Structuralist analysis for neural network system diagrams
- Authors: Guy Clarke Marshall and Caroline Jay and Andre Freitas
- Abstract summary: We argue that the heterogeneous diagrammatic notations used for neural network systems have implications for signification in this domain.
We use a corpus analysis to quantitatively cluster diagrams according to the author's representational choices.
- Score: 6.233820957059352
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: This short paper examines diagrams describing neural network systems in
academic conference proceedings. Many aspects of scholarly communication are
controlled, particularly in relation to text and formatting, but often
diagrams are not centrally curated beyond peer review. Using a corpus-based
approach, we argue that the heterogeneous diagrammatic notations used for
neural network systems have implications for signification in this domain. We
divide this into (i) what content is being represented and (ii) how relations
are encoded. Using a novel structuralist framework, we use a corpus analysis to
quantitatively cluster diagrams according to the author's representational
choices. This quantitative diagram classification in a heterogeneous domain may
provide a foundation for further analysis.
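The clustering step described above can be pictured concretely. The following is a minimal sketch, not the authors' pipeline: it assumes each diagram has been annotated with a dictionary of representational choices (feature names such as `arrows_labelled` are hypothetical), encodes these as a feature matrix, and groups similar diagrams with k-means.
```python
# Minimal sketch of clustering diagrams by annotated representational choices.
# Not the authors' code: feature names and the use of k-means are assumptions.
from sklearn.cluster import KMeans
from sklearn.feature_extraction import DictVectorizer

# One dict of representational choices per diagram (hypothetical annotations).
diagram_annotations = [
    {"arrows_labelled": 1, "uses_3d_blocks": 0, "shows_tensor_dims": 1},
    {"arrows_labelled": 0, "uses_3d_blocks": 1, "shows_tensor_dims": 0},
    {"arrows_labelled": 1, "uses_3d_blocks": 1, "shows_tensor_dims": 1},
    {"arrows_labelled": 0, "uses_3d_blocks": 0, "shows_tensor_dims": 0},
]

# Encode the categorical annotations as a numeric feature matrix.
vectorizer = DictVectorizer(sparse=False)
X = vectorizer.fit_transform(diagram_annotations)

# Group diagrams whose representational choices are similar.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)
print(labels)  # one cluster label per diagram
```
Any categorical encoding and clustering method could stand in here; the point is only that annotated notation choices become a feature matrix that can be clustered quantitatively.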
Related papers
- Relational Composition in Neural Networks: A Survey and Call to Action [54.47858085003077]
Many neural nets appear to represent data as linear combinations of "feature vectors".
We argue that this success is incomplete without an understanding of relational composition.
arXiv Detail & Related papers (2024-07-19T20:50:57Z) - GNN-LoFI: a Novel Graph Neural Network through Localized Feature-based
Histogram Intersection [51.608147732998994]
Graph neural networks are increasingly becoming the framework of choice for graph-based machine learning.
We propose a new graph neural network architecture that substitutes classical message passing with an analysis of the local distribution of node features.
arXiv Detail & Related papers (2024-01-17T13:04:23Z) - On Discprecncies between Perturbation Evaluations of Graph Neural
Network Attributions [49.8110352174327]
We assess attribution methods from a perspective not previously explored in the graph domain: retraining.
The core idea is to retrain the network on important (or not important) relationships as identified by the attributions.
We run our analysis on four state-of-the-art GNN attribution methods and five synthetic and real-world graph classification datasets.
arXiv Detail & Related papers (2024-01-01T02:03:35Z) - Classification of vertices on social networks by multiple approaches [1.370151489527964]
In the case of social networks, it is crucial to evaluate the labels of discrete communities.
For each of these interaction-based entities, a social graph, a mailing dataset, and two citation sets are selected as the testbench repositories.
This paper not only assessed the most valuable method but also determined how graph neural networks work.
arXiv Detail & Related papers (2023-01-13T09:42:55Z) - VisGraphNet: a complex network interpretation of convolutional neural
features [6.50413414010073]
We propose and investigate the use of visibility graphs to model the feature map of a neural network.
The work is motivated by an alternative viewpoint provided by these graphs over the original data.
arXiv Detail & Related papers (2021-08-27T20:21:04Z) - Learning the Implicit Semantic Representation on Graph-Structured Data [57.670106959061634]
Existing representation learning methods in graph convolutional networks are mainly designed by describing the neighborhood of each node as a perceptual whole.
We propose a Semantic Graph Convolutional Network (SGCN) that explores the implicit semantics by learning latent semantic paths in graphs.
arXiv Detail & Related papers (2021-01-16T16:18:43Z) - Spectral Embedding of Graph Networks [76.27138343125985]
We introduce an unsupervised graph embedding that trades off local node similarity and connectivity, and global structure.
The embedding is based on a generalized graph Laplacian, whose eigenvectors compactly capture both network structure and neighborhood proximity in a single representation (see the Laplacian-eigenvector sketch after this list).
arXiv Detail & Related papers (2020-09-30T04:59:10Z) - How Researchers Use Diagrams in Communicating Neural Network Systems [5.064404027153093]
This paper reports on a study into the use of neural network system diagrams.
We find high diversity of usage, perception and preference in both creation and interpretation of diagrams.
Considering the interview data alongside existing guidance, we propose guidelines aiming to improve the way in which neural network system diagrams are constructed.
arXiv Detail & Related papers (2020-08-28T10:21:03Z) - Semantic Sentiment Analysis Based on Probabilistic Graphical Models and
Recurrent Neural Network [0.0]
The purpose of this study is to investigate the use of semantics to perform sentiment analysis based on probabilistic graphical models and recurrent neural networks.
The datasets used for the experiments were IMDB movie reviews, Amazon Consumer Product reviews, and Twitter Review datasets.
arXiv Detail & Related papers (2020-08-06T11:59:00Z) - Heterogeneous Graph Neural Networks for Extractive Document
Summarization [101.17980994606836]
Capturing cross-sentence relations is a crucial step in extractive document summarization.
We present a graph-based neural network for extractive summarization (HeterSumGraph).
We introduce different types of nodes into graph-based neural networks for extractive document summarization.
arXiv Detail & Related papers (2020-04-26T14:38:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.