Intuitionistic Fuzzy Broad Learning System: Enhancing Robustness Against Noise and Outliers
- URL: http://arxiv.org/abs/2307.08713v2
- Date: Sat, 11 May 2024 18:59:20 GMT
- Title: Intuitionistic Fuzzy Broad Learning System: Enhancing Robustness Against Noise and Outliers
- Authors: M. Sajid, A. K. Malik, M. Tanveer
- Abstract summary: We propose fuzzy broad learning system (F-BLS) and intuitionistic fuzzy broad learning system (IF-BLS) models.
We implement the proposed F-BLS and IF-BLS models to diagnose Alzheimer's disease (AD).
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the realm of data classification, broad learning system (BLS) has proven to be a potent tool that utilizes a layer-by-layer feed-forward neural network. However, the traditional BLS treats all samples as equally significant, which makes it less robust and less effective for real-world datasets with noise and outliers. To address this issue, we propose fuzzy broad learning system (F-BLS) and intuitionistic fuzzy broad learning system (IF-BLS) models that confront challenges posed by the noise and outliers present in the dataset and enhance overall robustness. Employing a fuzzy membership technique, the proposed F-BLS model embeds sample neighborhood information based on proximity to each class center within the inherent feature space of the BLS framework. Furthermore, the proposed IF-BLS model introduces intuitionistic fuzzy concepts encompassing membership, non-membership, and score value functions. IF-BLS strategically considers homogeneity and heterogeneity in sample neighborhoods in the kernel space. We evaluate the performance of the proposed F-BLS and IF-BLS models on UCI benchmark datasets with and without Gaussian noise. As an application, we implement the proposed F-BLS and IF-BLS models to diagnose Alzheimer's disease (AD). Experimental findings and statistical analyses consistently highlight the superior generalization capabilities of the proposed F-BLS and IF-BLS models over baseline models across all scenarios. The proposed models offer a promising solution to enhance the BLS framework's ability to handle noise and outliers. The source code of the proposed model is available at https://github.com/mtanveer1/IF-BLS.
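The membership–non-membership–score construction described in the abstract can be illustrated with a small sketch. The Python below is a minimal, hedged illustration: it works in plain Euclidean feature space (the paper's IF-BLS operates in the kernel space of the BLS feature layer), and the score function shown is a common intuitionistic fuzzy formulation rather than the paper's exact definition; the function name, `k`, and `delta` are illustrative.

```python
import numpy as np

def intuitionistic_fuzzy_weights(X, y, k=5, delta=1e-4):
    """Assign each sample a score in [0, 1] combining membership
    (closeness to its own class center) and non-membership
    (fraction of k nearest neighbors belonging to other classes)."""
    n = len(X)
    # Membership: 1 minus normalized distance to the sample's class center.
    centers = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
    d = np.array([np.linalg.norm(X[i] - centers[y[i]]) for i in range(n)])
    mu = 1.0 - d / (d.max() + delta)
    # Non-membership: neighborhood heterogeneity, scaled by (1 - mu).
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    nu = np.empty(n)
    for i in range(n):
        nbrs = np.argsort(D[i])[1:k + 1]      # k nearest neighbors, excluding self
        rho = np.mean(y[nbrs] != y[i])        # fraction from other classes
        nu[i] = (1.0 - mu[i]) * rho
    # Score function resolving the hesitation between mu and nu.
    denom = np.maximum(2.0 - mu - nu, delta)  # guard against division by zero
    s = np.where(nu == 0.0, mu, np.where(mu <= nu, 0.0, (1.0 - nu) / denom))
    return s
```

Scores like these can then serve as per-sample weights in the BLS output-layer least-squares solve, so that noisy or mislabeled points contribute less to the solution.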
Related papers
- Adaptive Dual-Weighting Framework for Federated Learning via Out-of-Distribution Detection [53.45696787935487]
Federated Learning (FL) enables collaborative model training across large-scale distributed service nodes.
In real-world service-oriented deployments, data generated by heterogeneous users, devices, and application scenarios are inherently non-IID.
We propose FLood, a novel FL framework inspired by out-of-distribution (OOD) detection.
arXiv Detail & Related papers (2026-02-01T05:54:59Z) - High-Performance Few-Shot Segmentation with Foundation Models: An Empirical Study [64.06777376676513]
We develop a few-shot segmentation (FSS) framework based on foundation models.
To be specific, we propose a simple approach to extract implicit knowledge from foundation models to construct coarse correspondence.
Experiments on two widely used datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-09-10T08:04:11Z) - R-SFLLM: Jamming Resilient Framework for Split Federated Learning with Large Language Models [83.77114091471822]
Split federated learning (SFL) is a compute-efficient paradigm in distributed machine learning (ML).
A challenge in SFL, particularly when deployed over wireless channels, is the susceptibility of transmitted model parameters to adversarial jamming.
This is particularly pronounced for word embedding parameters in large language models (LLMs), which are crucial for language understanding.
A physical layer framework is developed for resilient SFL with LLMs (R-SFLLM) over wireless networks.
arXiv Detail & Related papers (2024-07-16T12:21:29Z) - Ensemble Deep Random Vector Functional Link Neural Network Based on Fuzzy Inference System [0.6437284704257459]
The ensemble deep random vector functional link (edRVFL) neural network has demonstrated the ability to address the limitations of conventional artificial neural networks.
We propose a novel edRVFL based on fuzzy inference system (edRVFL-FIS) to enhance the feature learning capabilities of edRVFL.
arXiv Detail & Related papers (2024-06-02T17:01:44Z) - FedSI: Federated Subnetwork Inference for Efficient Uncertainty Quantification [31.11796234526461]
FedSI is a novel subnetwork inference framework for personalized federated learning (PFL) based on Bayesian deep neural networks.
It implements a client-specific subnetwork inference mechanism, selects network parameters with large variance, and fixes the rest as deterministic ones.
It achieves fast and scalable inference while preserving the systematic uncertainties to the fullest extent.
arXiv Detail & Related papers (2024-04-24T05:24:08Z) - Stragglers-Aware Low-Latency Synchronous Federated Learning via Layer-Wise Model Updates [71.81037644563217]
Synchronous federated learning (FL) is a popular paradigm for collaborative edge learning.
As some of the devices may have limited computational resources and varying availability, FL latency is highly sensitive to stragglers.
We propose straggler-aware layer-wise federated learning (SALF) that leverages the optimization procedure of NNs via backpropagation to update the global model in a layer-wise fashion.
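A layer-wise aggregation of partial client updates might look like the sketch below. Because backpropagation proceeds from the last layer backward, a straggler cut off mid-computation can still report gradients for the deeper layers it reached. The learning rate, the dict-of-gradients format, and simple averaging are assumptions for illustration, not SALF's actual update rule.

```python
import numpy as np

def layerwise_aggregate(global_model, client_updates, lr=0.1):
    """Update each layer of the global model using only the clients whose
    backward pass reached that layer; layers no client reached stay stale.
    `client_updates` maps layer index -> gradient for layers each client computed."""
    new_model = []
    for l, w in enumerate(global_model):
        grads = [u[l] for u in client_updates if l in u]
        if grads:                                  # at least one client reached layer l
            new_model.append(w - lr * np.mean(grads, axis=0))
        else:                                      # no update received for this layer
            new_model.append(w)
    return new_model
```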
arXiv Detail & Related papers (2024-03-27T09:14:36Z) - Latent Semantic Consensus For Deterministic Geometric Model Fitting [109.44565542031384]
We propose an effective method called Latent Semantic Consensus (LSC).
LSC formulates the model fitting problem into two latent semantic spaces based on data points and model hypotheses.
LSC is able to provide consistent and reliable solutions within only a few milliseconds for general multi-structural model fitting.
arXiv Detail & Related papers (2024-03-11T05:35:38Z) - Learning under Singularity: An Information Criterion improving WBIC and sBIC [1.0878040851637998]
We introduce a novel Information Criterion (IC), termed Learning under Singularity (LS).
LS is effective without regularity constraints and demonstrates stability.
This approach offers a flexible and robust method for model selection, free from regularity constraints.
arXiv Detail & Related papers (2024-02-20T07:09:39Z) - Physics Inspired Hybrid Attention for SAR Target Recognition [61.01086031364307]
We propose a physics-inspired hybrid attention (PIHA) mechanism and the once-for-all (OFA) evaluation protocol to address these issues.
PIHA leverages the high-level semantics of physical information to activate and guide feature groups that are aware of the target's local semantics.
Our method outperforms other state-of-the-art approaches in 12 test scenarios with the same ASC parameters.
arXiv Detail & Related papers (2023-09-27T14:39:41Z) - FaFCNN: A General Disease Classification Framework Based on Feature Fusion Neural Networks [4.097623533226476]
We propose the Feature-aware Fusion Correlation Neural Network (FaFCNN), which introduces a feature-aware interaction module and a feature alignment module based on domain adversarial learning.
The experimental results show that training using augmented features obtained by pre-training gradient boosting decision tree yields more performance gains than random-forest based methods.
arXiv Detail & Related papers (2023-07-24T04:23:08Z) - Bridging the Gap Between Clean Data Training and Real-World Inference for Spoken Language Understanding [76.89426311082927]
Existing models are trained on clean data, which causes a gap between clean data training and real-world inference.
We propose a method from the perspective of domain adaptation, by which both high- and low-quality samples are embedding into similar vector space.
Experiments on the widely used Snips dataset and a large-scale in-house dataset (10 million training examples) demonstrate that this method not only outperforms the baseline models on a real-world (noisy) corpus but also enhances robustness, producing high-quality results in a noisy environment.
arXiv Detail & Related papers (2021-04-13T17:54:33Z) - Adaptive Name Entity Recognition under Highly Unbalanced Data [5.575448433529451]
We present our experiments on a neural architecture composed of a Conditional Random Field (CRF) layer stacked on top of a Bi-directional LSTM (BI-LSTM) layer for solving NER tasks.
We introduce an add-on classification model that splits sentences into two sets, Weak and Strong, and then design a pair of Bi-LSTM-CRF models to optimize performance on each set.
arXiv Detail & Related papers (2020-03-10T06:56:52Z)
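The split-then-specialize pattern from the NER entry above reduces to a simple routing step at inference time. The sketch below is illustrative only: the splitter and the two taggers stand in for the add-on classifier and the Weak-set/Strong-set Bi-LSTM-CRF models, and all names are hypothetical.

```python
def route_and_tag(sentence, splitter, weak_tagger, strong_tagger):
    """Route a sentence to the tagger trained for its difficulty class.
    `splitter` plays the role of the add-on classifier; the two taggers
    stand in for the specialized Bi-LSTM-CRF models."""
    tagger = strong_tagger if splitter(sentence) == "strong" else weak_tagger
    return tagger(sentence)
```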
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.