A Conceptual Framework For Trie-Augmented Neural Networks (TANNS)
- URL: http://arxiv.org/abs/2406.10270v1
- Date: Tue, 11 Jun 2024 17:08:16 GMT
- Title: A Conceptual Framework For Trie-Augmented Neural Networks (TANNS)
- Authors: Temitayo Adefemi
- Abstract summary: Trie-Augmented Neural Networks (TANNs) combine trie structures with neural networks, forming a hierarchical design that enhances decision-making transparency and efficiency in machine learning.
This paper investigates the use of TANNs for text and document classification, applying Recurrent Neural Networks (RNNs) and Feedforward Neural Networks (FFNs).
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Trie-Augmented Neural Networks (TANNs) combine trie structures with neural networks, forming a hierarchical design that enhances decision-making transparency and efficiency in machine learning. This paper investigates the use of TANNs for text and document classification, applying Recurrent Neural Networks (RNNs) and Feedforward Neural Networks (FFNs). We evaluated TANNs on the 20 Newsgroups and SMS Spam Collection datasets, comparing their performance with traditional RNNs and FFNs with and without dropout regularization. The results show that TANNs achieve similar or slightly better performance in text classification. The primary advantage of TANNs is their structured decision-making process, which improves interpretability. We discuss implementation challenges and practical limitations. Future work will aim to refine the TANN architecture for more complex classification tasks.
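The abstract describes the architecture only at a high level, so the following is a minimal sketch of the general idea rather than the authors' implementation: a binary trie whose internal nodes learn to route each input toward small leaf classifiers. The class names, the soft (softmax) routing, and the linear leaf classifiers are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class TrieNode(nn.Module):
    """One node of a binary trie. Internal nodes hold a small router that
    softly sends each input left or right; leaves hold a classifier. The
    per-node decomposition is what makes the decision path inspectable."""
    def __init__(self, input_dim: int, num_classes: int, depth: int):
        super().__init__()
        self.is_leaf = depth == 0
        if self.is_leaf:
            self.classifier = nn.Linear(input_dim, num_classes)
        else:
            self.router = nn.Linear(input_dim, 2)  # left/right gate
            self.left = TrieNode(input_dim, num_classes, depth - 1)
            self.right = TrieNode(input_dim, num_classes, depth - 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.is_leaf:
            return self.classifier(x)
        gate = torch.softmax(self.router(x), dim=-1)  # soft routing keeps the trie differentiable
        return gate[:, :1] * self.left(x) + gate[:, 1:] * self.right(x)

class TANN(nn.Module):
    """A trie of small networks over fixed-size document features."""
    def __init__(self, input_dim: int, num_classes: int, depth: int = 2):
        super().__init__()
        self.trie = TrieNode(input_dim, num_classes, depth)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.trie(x)

# Usage: a batch of 16 documents as 300-dim feature vectors,
# 20 target classes (as in 20 Newsgroups).
model = TANN(input_dim=300, num_classes=20, depth=2)
logits = model(torch.randn(16, 300))
print(logits.shape)  # torch.Size([16, 20])
```

Reading off the routing probabilities at each internal node gives the structured, inspectable decision path the paper attributes to TANNs.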
Related papers
- Unveiling the Power of Sparse Neural Networks for Feature Selection [60.50319755984697]
Sparse Neural Networks (SNNs) have emerged as powerful tools for efficient feature selection.
We show that SNNs trained with dynamic sparse training (DST) algorithms can achieve, on average, more than 50% memory and 55% FLOPs reduction (a generic DST prune-and-regrow step is sketched below).
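As context for the DST result above: dynamic sparse training methods typically alternate a prune step with a regrow step so that a weight matrix stays sparse throughout training. The function below is a generic SET-style prune-and-regrow sketch under that assumption; it is not taken from the paper.

```python
import torch

def prune_and_regrow(weight: torch.Tensor, mask: torch.Tensor, drop_frac: float = 0.3) -> torch.Tensor:
    """One generic DST step (SET-style sketch, not the paper's algorithm).
    `mask` is a boolean tensor marking which connections are active."""
    n_drop = int(drop_frac * int(mask.sum()))
    if n_drop == 0:
        return mask
    # Prune: deactivate the n_drop active weights with the smallest magnitude.
    scores = weight.abs().masked_fill(~mask, float("inf"))
    drop_idx = scores.view(-1).topk(n_drop, largest=False).indices
    mask.view(-1)[drop_idx] = False
    # Regrow: reactivate n_drop random inactive positions (initialised to zero),
    # so the overall sparsity level stays constant across training.
    inactive = (~mask).view(-1).nonzero().squeeze(1)
    grow_idx = inactive[torch.randperm(len(inactive))[:n_drop]]
    mask.view(-1)[grow_idx] = True
    weight.data.view(-1)[grow_idx] = 0.0
    return mask

# Usage: a layer kept at roughly 90% sparsity while training.
w = torch.randn(128, 300)
m = torch.rand_like(w) < 0.1
m = prune_and_regrow(w, m)
```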
arXiv Detail & Related papers (2024-08-08T16:48:33Z)
- Investigating Sparsity in Recurrent Neural Networks [0.0]
This thesis investigates how pruning and Sparse Recurrent Neural Networks affect the performance of RNNs.
We first describe the pruning of RNNs, its impact on the performance of RNNs, and the number of training epochs required to regain accuracy after the pruning is performed.
Next, we continue with the creation and training of Sparse Recurrent Neural Networks and identify the relation between the performance and the graph properties of its underlying arbitrary structure.
arXiv Detail & Related papers (2024-07-30T07:24:58Z)
- LC-TTFS: Towards Lossless Network Conversion for Spiking Neural Networks with TTFS Coding [55.64533786293656]
We show that our algorithm can achieve a near-perfect mapping between the activation values of an ANN and the spike times of an SNN on a number of challenging AI tasks (the underlying time-to-first-spike coding is sketched below).
The study paves the way for deploying ultra-low-power TTFS-based SNNs on power-constrained edge computing platforms.
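For readers unfamiliar with TTFS (time-to-first-spike) coding: each analog activation is represented by the time of a single spike, with larger activations firing earlier. The function below is a conceptual sketch of that encoding only; it is not the LC-TTFS conversion algorithm, and the normalisation choice is an assumption.

```python
import torch

def ttfs_encode(activations: torch.Tensor, t_max: float = 1.0) -> torch.Tensor:
    """Map non-negative ANN activations to spike times under
    time-to-first-spike coding: larger activation -> earlier spike.
    A conceptual sketch, not the LC-TTFS conversion algorithm itself."""
    a = activations.clamp(min=0.0)
    a = a / a.max().clamp(min=1e-8)  # normalise to [0, 1]
    return t_max * (1.0 - a)         # activation 1 fires at t=0; activation 0 fires at t_max

acts = torch.tensor([0.0, 0.5, 1.0, 2.0])
print(ttfs_encode(acts))  # tensor([1.0000, 0.7500, 0.5000, 0.0000])
```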
arXiv Detail & Related papers (2023-10-23T14:26:16Z)
- Transforming to Yoked Neural Networks to Improve ANN Structure [0.0]
Most existing artificial neural networks (ANNs) are designed as tree structures to imitate biological neural networks.
We propose a model YNN to efficiently eliminate such structural bias.
In our model, nodes also carry out aggregation and transformation of features, and edges determine the flow of information.
arXiv Detail & Related papers (2023-06-03T16:56:18Z)
- SPENSER: Towards a NeuroEvolutionary Approach for Convolutional Spiking Neural Networks [0.0]
Spiking Neural Networks (SNNs) have attracted recent interest due to their energy efficiency and biological plausibility.
There is no consensus on the best learning algorithm for SNNs.
In this paper, we propose SPENSER, a framework for SNN generation based on DENSER.
arXiv Detail & Related papers (2023-05-18T14:06:37Z)
- Knowledge Enhanced Neural Networks for relational domains [83.9217787335878]
We focus on a specific method, KENN, a Neural-Symbolic architecture that injects prior logical knowledge into a neural network.
In this paper, we propose an extension of KENN for relational data.
arXiv Detail & Related papers (2022-05-31T13:00:34Z)
- Beyond Classification: Directly Training Spiking Neural Networks for Semantic Segmentation [5.800785186389827]
Spiking Neural Networks (SNNs) have emerged as the low-power alternative to Artificial Neural Networks (ANNs).
In this paper, we explore the SNN applications beyond classification and present semantic segmentation networks configured with spiking neurons.
arXiv Detail & Related papers (2021-10-14T21:53:03Z)
- Explore the Knowledge contained in Network Weights to Obtain Sparse Neural Networks [2.649890751459017]
This paper proposes a novel learning approach to obtain sparse fully connected layers in neural networks (NNs) automatically.
We design a switcher neural network (SNN) to optimize the structure of the task neural network (TNN).
arXiv Detail & Related papers (2021-03-26T11:29:40Z)
- Neural Networks Enhancement with Logical Knowledge [83.9217787335878]
We propose an extension of KENN for relational data.
The results show that KENN is capable of increasing the performance of the underlying neural network even in the presence of relational data.
arXiv Detail & Related papers (2020-09-13T21:12:20Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Neuroevolutionary Transfer Learning of Deep Recurrent Neural Networks through Network-Aware Adaptation [57.46377517266827]
This work introduces network-aware adaptive structure transfer learning (N-ASTL).
N-ASTL utilizes statistical information related to the source network's topology and weight distribution to inform how new input and output neurons are to be integrated into the existing structure.
Results show improvements over the prior state of the art, including the ability to transfer to challenging real-world datasets where transfer was not previously possible.
arXiv Detail & Related papers (2020-06-04T06:07:30Z)