Sequential three-way decisions with a single hidden layer feedforward
neural network
- URL: http://arxiv.org/abs/2303.07589v1
- Date: Tue, 14 Mar 2023 02:26:34 GMT
- Title: Sequential three-way decisions with a single hidden layer feedforward
neural network
- Authors: Youxi Wu, Shuhui Cheng, Yan Li, Rongjie Lv, Fan Min
- Abstract summary: The three-way decisions strategy has been employed to construct network topology in a single hidden layer feedforward neural network (SFNN).
This paper proposes STWD with an SFNN (STWD-SFNN) to enhance the performance of networks on structured datasets.
- Score: 5.943305068876161
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The three-way decisions strategy has been employed to construct network
topology in a single hidden layer feedforward neural network (SFNN). However,
this model delivers only average performance and does not consider the process costs,
since its threshold parameters are fixed. Inspired by the sequential three-way
decisions (STWD), this paper proposes STWD with an SFNN (STWD-SFNN) to enhance
the performance of networks on structured datasets. STWD-SFNN adopts
multi-granularity levels to dynamically learn the number of hidden layer nodes
from coarse to fine, and sets the sequential threshold parameters. Specifically,
at the coarse granular level, STWD-SFNN handles easy-to-classify instances by
applying strict threshold conditions, and with the increasing number of hidden
layer nodes at the fine granular level, STWD-SFNN focuses more on disposing of
the difficult-to-classify instances by applying loose threshold conditions,
thereby realizing the classification of instances. Moreover, STWD-SFNN
considers and reports the process cost produced from each granular level. The
experimental results verify that STWD-SFNN has a more compact network on
structured datasets than other SFNN models, and has better generalization
performance than the competitive models. All models and datasets can be
downloaded from https://github.com/wuc567/Machine-learning/tree/main/STWD-SFNN.
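The coarse-to-fine routing described in the abstract can be sketched as a sequential three-way decision loop: at each granular level an instance is accepted, rejected, or deferred to the next (finer) level, where the network has more hidden nodes and looser thresholds. The sketch below is a minimal illustration of that control flow only; all names (`stwd_classify`, `alpha`, `beta`, the per-level probability array) are illustrative assumptions, not the authors' code or API.

```python
import numpy as np

def stwd_classify(probs, levels):
    """Route instances through granular levels with progressively looser thresholds.

    probs  : (n_levels, n_samples) array of P(positive) produced by an SFNN
             whose hidden layer grows at each level (assumed precomputed).
    levels : list of (alpha, beta) threshold pairs, strict first
             (alpha high, beta low), looser at finer levels.
    """
    n = probs.shape[1]
    decision = np.full(n, -1)          # -1 = still deferred (boundary region)
    pending = np.arange(n)             # indices not yet decided
    for level, (alpha, beta) in enumerate(levels):
        p = probs[level, pending]
        accept = p >= alpha            # confident positive -> accept now
        reject = p <= beta             # confident negative -> reject now
        decision[pending[accept]] = 1
        decision[pending[reject]] = 0
        pending = pending[~(accept | reject)]   # boundary cases go one level finer
        if pending.size == 0:
            break
    # final level: force a two-way decision on whatever is still deferred
    if pending.size:
        decision[pending] = (probs[-1, pending] >= 0.5).astype(int)
    return decision

# Easy instances (0.95, 0.05) are resolved at the strict coarse level;
# harder ones (0.6, 0.4) are deferred and resolved at the looser fine level.
probs = np.array([[0.95, 0.05, 0.6, 0.4],
                  [0.90, 0.10, 0.8, 0.2]])
levels = [(0.9, 0.1), (0.7, 0.3)]
print(stwd_classify(probs, levels))
```

The process cost the paper reports would accumulate per level in such a loop (each deferral incurs the cost of evaluating the next, larger network), which is why resolving easy instances early at the coarse level is cheaper.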
Related papers
- Exploiting Label Skewness for Spiking Neural Networks in Federated Learning [18.846475069353364]
FedLEC incorporates intra-client label weight calibration to balance the learning intensity across local labels and inter-client knowledge distillation to mitigate local SNN model bias.
Compared to eight state-of-the-art FL algorithms, FedLEC achieves an average accuracy improvement of approximately 11.59% for the global SNN model.
arXiv Detail & Related papers (2024-12-23T05:52:32Z)
- Entanglement Classification of Arbitrary Three-Qubit States via Artificial Neural Networks [2.715284063484557]
We design and implement artificial neural networks (ANNs) to detect and classify entanglement for three-qubit systems.
The models are trained and validated on a simulated dataset of randomly generated states.
Remarkably, we find that feeding only 7 diagonal elements of the density matrix into the ANN results in an accuracy greater than 94% for both tasks.
arXiv Detail & Related papers (2024-11-18T06:50:10Z)
- Unveiling the Power of Sparse Neural Networks for Feature Selection [60.50319755984697]
Sparse Neural Networks (SNNs) have emerged as powerful tools for efficient feature selection.
We show that feature selection with SNNs trained with dynamic sparse training (DST) algorithms can achieve, on average, more than 50% memory and 55% FLOPs reduction.
arXiv Detail & Related papers (2024-08-08T16:48:33Z)
- Multicoated and Folded Graph Neural Networks with Strong Lottery Tickets [3.0894823679470087]
This paper introduces the Multi-Stage Folding and Unshared Masks methods to expand the search space in terms of both architecture and parameters.
By achieving high sparsity, competitive performance, and high memory efficiency with up to 98.7% reduction, the method demonstrates its suitability for energy-efficient graph processing.
arXiv Detail & Related papers (2023-12-06T02:16:44Z)
- Interpretable Neural Networks with Random Constructive Algorithm [3.1200894334384954]
This paper introduces an Interpretable Neural Network (INN) incorporating spatial information to tackle the opaque parameterization process of random weighted neural networks.
It devises a geometric relationship strategy using a pool of candidate nodes and established relationships to select node parameters conducive to network convergence.
arXiv Detail & Related papers (2023-07-01T01:07:20Z)
- High-performance deep spiking neural networks with 0.3 spikes per neuron [9.01407445068455]
It is harder to train biologically-inspired spiking neural networks (SNNs) than artificial neural networks (ANNs).
We show that trained deep SNN models can achieve the same performance as ANNs.
Our SNN accomplishes high-performance classification with fewer than 0.3 spikes per neuron, lending itself to an energy-efficient implementation.
arXiv Detail & Related papers (2023-06-14T21:01:35Z)
- Hyperspectral Classification Based on Lightweight 3-D-CNN With Transfer Learning [67.40866334083941]
We propose an end-to-end 3-D lightweight convolutional neural network (CNN) for limited samples-based HSI classification.
Compared with conventional 3-D-CNN models, the proposed 3-D-LWNet has a deeper network structure, fewer parameters, and lower computation cost.
Our model achieves competitive performance for HSI classification compared to several state-of-the-art methods.
arXiv Detail & Related papers (2020-12-07T03:44:35Z)
- Exploiting Heterogeneity in Operational Neural Networks by Synaptic Plasticity [87.32169414230822]
The recently proposed Operational Neural Networks (ONNs) generalize conventional Convolutional Neural Networks (CNNs).
This study focuses on searching for the best-possible operator set(s) for the hidden neurons of the network, based on the Synaptic Plasticity paradigm that constitutes the essential learning theory in biological neurons.
Experimental results over highly challenging problems demonstrate that the elite ONNs even with few neurons and layers can achieve a superior learning performance than GIS-based ONNs.
arXiv Detail & Related papers (2020-08-21T19:03:23Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Policy-GNN: Aggregation Optimization for Graph Neural Networks [60.50932472042379]
Graph neural networks (GNNs) aim to model the local graph structures and capture the hierarchical patterns by aggregating the information from neighbors.
It is a challenging task to develop an effective aggregation strategy for each node, given complex graphs and sparse features.
We propose Policy-GNN, a meta-policy framework that models the sampling procedure and message passing of GNNs into a combined learning process.
arXiv Detail & Related papers (2020-06-26T17:03:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.