Combinatorial Convolutional Neural Networks for Words
- URL: http://arxiv.org/abs/2303.16211v1
- Date: Tue, 28 Mar 2023 07:49:06 GMT
- Title: Combinatorial Convolutional Neural Networks for Words
- Authors: Karen Sargsyan
- Abstract summary: We argue that the identification of such patterns may be important for certain applications.
We present a convolutional neural network for word classification.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The paper discusses the limitations of deep learning models in identifying
and utilizing features that remain invariant under a bijective transformation
on the data entries, which we refer to as combinatorial patterns. We argue that
the identification of such patterns may be important for certain applications
and suggest providing neural networks with information that fully describes the
combinatorial patterns of input entries and allows the network to determine
what is relevant for prediction. To demonstrate the feasibility of this
approach, we present a combinatorial convolutional neural network for word
classification.
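As an illustration of the invariance the abstract describes (not necessarily the paper's exact encoding), the combinatorial pattern of a word can be captured by a canonical relabeling: map each distinct symbol to the index of its first occurrence. Two words then share a pattern exactly when some bijection on symbols maps one onto the other.

```python
def combinatorial_pattern(word):
    """Canonical form of a word under bijective relabeling of its symbols.

    Each distinct symbol is replaced by the order in which it first
    appears, so any bijective renaming of symbols leaves the result
    unchanged: "abab" and "cdcd" both yield (0, 1, 0, 1).
    """
    first_seen = {}
    pattern = []
    for ch in word:
        if ch not in first_seen:
            first_seen[ch] = len(first_seen)  # next unused index
        pattern.append(first_seen[ch])
    return tuple(pattern)

print(combinatorial_pattern("abab"))  # (0, 1, 0, 1)
print(combinatorial_pattern("cdcd"))  # (0, 1, 0, 1)
print(combinatorial_pattern("abba"))  # (0, 1, 1, 0)
```

A network fed such patterns, rather than raw symbols, sees only the structure that survives bijective transformations of the alphabet.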
Related papers
- Relational Composition in Neural Networks: A Survey and Call to Action [54.47858085003077]
Many neural nets appear to represent data as linear combinations of "feature vectors".
We argue that this success is incomplete without an understanding of relational composition.
arXiv Detail & Related papers (2024-07-19T20:50:57Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Towards Rigorous Understanding of Neural Networks via Semantics-preserving Transformations [0.0]
We present an approach to the precise and global verification and explanation of Rectifier Neural Networks.
Key to our approach is the symbolic execution of these networks that allows the construction of semantically equivalent Typed Affine Decision Structures.
arXiv Detail & Related papers (2023-01-19T11:35:07Z)
- Decomposing neural networks as mappings of correlation functions [57.52754806616669]
We study the mapping between probability distributions implemented by a deep feed-forward network.
We identify essential statistics in the data, as well as different information representations that can be used by neural networks.
arXiv Detail & Related papers (2022-02-10T09:30:31Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Inference Graphs for CNN Interpretation [12.765543440576144]
Convolutional neural networks (CNNs) have achieved superior accuracy in many visual related tasks.
We propose to model the network hidden layers activity using probabilistic models.
We show that such graphs are useful for understanding the general inference process of a class, as well as explaining decisions the network makes regarding specific images.
arXiv Detail & Related papers (2021-10-20T13:56:09Z)
- From Common Sense Reasoning to Neural Network Models through Multiple Preferences: an overview [0.0]
We discuss the relationships between conditional and preferential logics and neural network models.
We propose a concept-wise multipreference semantics, recently introduced for defeasible description logics.
The paper describes the general approach, through the cases of Self-Organising Maps and Multilayer Perceptrons.
arXiv Detail & Related papers (2021-07-10T16:25:19Z)
- Interpretable Neural Networks based classifiers for categorical inputs [0.0]
We introduce a simple way to interpret the output function of a neural network classifier that takes categorical variables as input.
We show that in these cases each layer of the network, and the logits layer in particular, can be expanded as a sum of terms that account for the contribution to the classification of each input pattern.
The analysis of the contributions of each pattern, after an appropriate gauge transformation, is presented in two cases where the effectiveness of the method can be appreciated.
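The decomposition described above can be sketched minimally: with one-hot categorical inputs feeding a linear logits layer, each active category contributes exactly one weight row, so the logits split into additive per-variable terms. The setup and names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_categories = [3, 4]   # two categorical variables with 3 and 4 levels
n_classes = 2
# one weight row per (variable, category) pair, as in a linear logits layer
W = [rng.normal(size=(k, n_classes)) for k in n_categories]
b = np.zeros(n_classes)

def logits_with_contributions(x):
    """Logits for categorical input x, plus the additive per-variable terms.

    Because the input is one-hot, the matrix product reduces to picking
    one weight row per variable, so the logits decompose exactly as a
    sum of per-input-pattern contributions.
    """
    contributions = [W[i][xi] for i, xi in enumerate(x)]
    return b + sum(contributions), contributions

logits, parts = logits_with_contributions([2, 1])
assert np.allclose(logits, b + parts[0] + parts[1])
```

Inspecting `parts` directly shows how much each categorical variable pushes each class logit up or down.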
arXiv Detail & Related papers (2021-02-05T14:38:50Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and owns adaptability to larger search spaces and different tasks.
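The idea of learnable edge magnitudes can be sketched as a single forward aggregation step: each directed edge of the complete graph carries a scalar parameter, squashed through a sigmoid so the gate is smooth and gradients can reach the connectivity itself. This is an illustrative sketch, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

n_nodes, dim = 4, 8
features = rng.normal(size=(n_nodes, dim))
# one learnable scalar per directed edge of the complete graph
edge_params = rng.normal(size=(n_nodes, n_nodes))

def aggregate(features, edge_params):
    """One message-passing step with gated edge magnitudes.

    The sigmoid maps each edge parameter into (0, 1); a gate near 0
    effectively prunes the edge, while the smoothness keeps the whole
    connectivity pattern differentiable.
    """
    gates = 1.0 / (1.0 + np.exp(-edge_params))
    np.fill_diagonal(gates, 0.0)   # no self-loops
    return gates @ features        # weighted sum over incoming edges

out = aggregate(features, edge_params)
```

In an actual training loop the `edge_params` would be optimized jointly with the node computations, which is what makes the connectivity searchable over larger spaces.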
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Investigating the Compositional Structure Of Deep Neural Networks [1.8899300124593645]
We introduce a novel theoretical framework based on the compositional structure of piecewise linear activation functions.
It is possible to characterize the instances of the input data with respect to both the predicted label and the specific (linear) transformation used to perform predictions.
Preliminary tests on the MNIST dataset show that our method can group input instances with regard to their similarity in the internal representation of the neural network.
arXiv Detail & Related papers (2020-02-17T14:16:17Z)
- Learn to Predict Sets Using Feed-Forward Neural Networks [63.91494644881925]
This paper addresses the task of set prediction using deep feed-forward neural networks.
We present a novel approach for learning to predict sets with unknown permutation and cardinality.
We demonstrate the validity of our set formulations on relevant vision problems.
arXiv Detail & Related papers (2020-01-30T01:52:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.