Rule-Based Modeling of Low-Dimensional Data with PCA and Binary Particle Swarm Optimization (BPSO) in ANFIS
- URL: http://arxiv.org/abs/2502.03895v1
- Date: Thu, 06 Feb 2025 09:13:55 GMT
- Title: Rule-Based Modeling of Low-Dimensional Data with PCA and Binary Particle Swarm Optimization (BPSO) in ANFIS
- Authors: Afnan Al-Ali, Uvais Qidwai
- Abstract summary: Fuzzy rule-based systems interpret data in low-dimensional domains, providing transparency and interpretability.
Deep learning excels in complex tasks but is prone to overfitting in sparse, unstructured, or low-dimensional data.
This interpretability is crucial in fields like healthcare and finance.
- Score: 0.29465623430708915
- License:
- Abstract: Fuzzy rule-based systems interpret data in low-dimensional domains, providing transparency and interpretability. In contrast, deep learning excels in complex tasks like image and speech recognition but is prone to overfitting in sparse, unstructured, or low-dimensional data. This interpretability is crucial in fields like healthcare and finance. Traditional rule-based systems, especially ANFIS with grid partitioning, suffer from exponential rule growth as dimensionality increases. We propose a strategic rule-reduction model that applies Principal Component Analysis (PCA) to the normalized firing strengths to obtain linearly uncorrelated components. Binary Particle Swarm Optimization (BPSO) then selectively refines these components, significantly reducing the number of rules while preserving decision-making precision. A custom parameter-update mechanism fine-tunes specific ANFIS layers by dynamically adjusting the BPSO parameters, helping the search avoid local minima. We validated our approach on standard classification and regression datasets from the UCI and KEEL repositories, as well as a real-world ischemic stroke dataset, demonstrating adaptability and practicality. Results indicate fewer rules, shorter training times, and high accuracy, underscoring the method's effectiveness for low-dimensional interpretability and complex data scenarios. This synergy of fuzzy logic and optimization fosters robust solutions. Our method contributes a powerful framework for interpretable AI in multiple domains, addressing dimensionality while keeping the rule base compact.
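A minimal sketch of the rule-reduction idea described in the abstract, assuming a matrix of normalized ANFIS firing strengths is already available: PCA is applied to the firing strengths, and a sigmoid-variant BPSO searches for a binary mask over the resulting components, scoring each mask by the error of a least-squares consequent. The random data, shapes, sparsity penalty, and inertia schedule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# F stands in for the (n_samples, n_rules) matrix of normalized ANFIS firing
# strengths; y is the regression target (both synthetic here, for illustration).
rng = np.random.default_rng(0)
F = rng.random((200, 32))
F /= F.sum(axis=1, keepdims=True)              # normalized firing strengths
y = rng.random(200)

# PCA on the centered firing strengths (via SVD) -> linearly uncorrelated components.
Fc = F - F.mean(axis=0)
_, _, Vt = np.linalg.svd(Fc, full_matrices=False)
scores = Fc @ Vt.T

def fitness(mask):
    """RMSE of a least-squares consequent on the selected components,
    plus a small penalty on how many components are kept (assumed weight)."""
    if mask.sum() == 0:
        return np.inf
    Xs = np.c_[scores[:, mask.astype(bool)], np.ones(len(scores))]
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rmse = np.sqrt(np.mean((Xs @ coef - y) ** 2))
    return rmse + 0.01 * mask.mean()

# Sigmoid-variant Binary PSO over component-selection masks.
n_particles, n_dims, iters = 20, scores.shape[1], 50
X = rng.integers(0, 2, (n_particles, n_dims)).astype(float)
V = rng.normal(0, 1, (n_particles, n_dims))
pbest, pbest_f = X.copy(), np.array([fitness(p) for p in X])
gbest = pbest[pbest_f.argmin()].copy()

c1, c2 = 2.0, 2.0
for t in range(iters):
    w = 0.9 - 0.5 * t / iters                  # dynamically adjusted inertia (illustrative)
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = np.clip(w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X), -6, 6)
    X = (rng.random(X.shape) < 1.0 / (1.0 + np.exp(-V))).astype(float)
    f = np.array([fitness(p) for p in X])
    better = f < pbest_f
    pbest[better], pbest_f[better] = X[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("components kept:", int(gbest.sum()), "of", n_dims,
      "| best fitness:", round(float(pbest_f.min()), 4))
```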
Related papers
- Robust PCA Based on Adaptive Weighted Least Squares and Low-Rank Matrix Factorization [2.983818075226378]
We propose a novel RPCA model that integrates an adaptive weight factor update during initial component instability.
Our method outperforms existing non-inspired regularization approaches, offering superior performance and efficiency.
arXiv Detail & Related papers (2024-12-19T08:31:42Z)
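The entry above is terse, so here is a generic sketch of the underlying idea: a low-rank factorization fitted by alternating weighted least squares, with weights adapted to down-weight outlier residuals. This is not the authors' algorithm; the data, rank, and weight rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 60, 40, 3
X = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))   # low-rank ground truth
outliers = rng.random((m, n)) < 0.1
X[outliers] += rng.normal(0, 10, size=outliers.sum())   # sparse gross corruption

U = rng.normal(size=(m, r))
V = rng.normal(size=(n, r))
W = np.ones((m, n))                                      # entry-wise weights

for _ in range(30):
    # Alternating weighted least squares for the factors of L = U @ V.T.
    for i in range(m):
        G = V.T * W[i]                                   # = V.T @ diag(W[i])
        U[i] = np.linalg.solve(G @ V + 1e-6 * np.eye(r), G @ X[i])
    for j in range(n):
        G = U.T * W[:, j]
        V[j] = np.linalg.solve(G @ U + 1e-6 * np.eye(r), G @ X[:, j])
    # Adaptive weights: large residuals (likely outliers) get down-weighted.
    R = X - U @ V.T
    sigma = np.median(np.abs(R)) + 1e-12
    W = 1.0 / (1.0 + (R / (3.0 * sigma)) ** 2)

S = X - U @ V.T                                          # residual absorbs the outliers
print("median |residual| on clean entries:", round(float(np.median(np.abs(S[~outliers]))), 4))
```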
- Inferring Dynamic Networks from Marginals with Iterative Proportional Fitting [57.487936697747024]
A common network inference problem, arising from real-world data constraints, is how to infer a dynamic network from its time-aggregated adjacency matrix.
We introduce a principled algorithm that guarantees IPF converges under minimal changes to the network structure.
arXiv Detail & Related papers (2024-02-28T20:24:56Z)
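For reference, a minimal Iterative Proportional Fitting (Sinkhorn-style) loop that rescales a nonnegative matrix until its row and column sums match prescribed marginals. The matrix and marginals are made-up numbers, and this is the textbook iteration rather than the paper's convergence-guaranteed variant.

```python
import numpy as np

def ipf(A, row_marginals, col_marginals, iters=100, tol=1e-9):
    """Alternately rescale rows and columns of a nonnegative matrix A so that
    its marginals match the targets (Iterative Proportional Fitting)."""
    X = A.astype(float).copy()
    for _ in range(iters):
        X *= (row_marginals / np.maximum(X.sum(axis=1), 1e-300))[:, None]
        X *= (col_marginals / np.maximum(X.sum(axis=0), 1e-300))[None, :]
        if (np.abs(X.sum(axis=1) - row_marginals).max() < tol and
                np.abs(X.sum(axis=0) - col_marginals).max() < tol):
            break
    return X

# Illustrative example: start from a time-aggregated adjacency matrix and
# recover a matrix consistent with observed snapshot marginals.
A = np.array([[1.0, 2.0, 1.0],
              [2.0, 1.0, 3.0],
              [1.0, 1.0, 1.0]])
rows = np.array([2.0, 3.0, 1.0])
cols = np.array([1.5, 2.5, 2.0])        # same total mass as the row marginals
X = ipf(A, rows, cols)
print(np.round(X.sum(axis=1), 3), np.round(X.sum(axis=0), 3))
```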
- Minimally Supervised Learning using Topological Projections in Self-Organizing Maps [55.31182147885694]
We introduce a semi-supervised learning approach based on topological projections in self-organizing maps (SOMs).
The method first trains SOMs on unlabeled data and then assigns a minimal number of available labeled data points to key best matching units (BMUs).
Our results indicate that the proposed minimally supervised model significantly outperforms traditional regression techniques.
arXiv Detail & Related papers (2024-01-12T22:51:48Z)
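A from-scratch sketch of the idea in this entry, under assumed toy data: a small SOM is trained on unlabeled points, the few labeled points are attached to their best matching units, and new points are classified via the nearest labeled unit on the map. The grid size, decay schedules, and nearest-unit rule are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: two Gaussian blobs; only four points carry labels.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
labels = np.full(len(X), -1)
labels[[0, 1, 100, 101]] = [0, 0, 1, 1]

# --- Train a small SOM on the (unlabeled) data ---
grid_h, grid_w = 5, 5
W = rng.normal(size=(grid_h, grid_w, 2))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1)

for t in range(2000):
    x = X[rng.integers(len(X))]
    d = np.linalg.norm(W - x, axis=-1)
    bmu = np.unravel_index(d.argmin(), d.shape)           # best matching unit
    lr = 0.5 * np.exp(-t / 1000)                          # decaying learning rate
    sigma = 2.0 * np.exp(-t / 1000)                       # shrinking neighborhood
    h = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1) / (2 * sigma ** 2))
    W += lr * h[..., None] * (x - W)

# --- Attach the few labels to their BMUs, then classify by nearest labeled unit ---
unit_labels = {}
for i in np.where(labels >= 0)[0]:
    d = np.linalg.norm(W - X[i], axis=-1)
    unit_labels[np.unravel_index(d.argmin(), d.shape)] = labels[i]

def predict(x):
    d = np.linalg.norm(W - x, axis=-1)
    bmu = np.array(np.unravel_index(d.argmin(), d.shape))
    # nearest labeled unit on the map grid (topological projection)
    best = min(unit_labels, key=lambda u: np.sum((np.array(u) - bmu) ** 2))
    return unit_labels[best]

print(predict(np.array([5.2, 4.8])), predict(np.array([-0.3, 0.1])))
```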
- Variable Importance in High-Dimensional Settings Requires Grouping [19.095605415846187]
Conditional Permutation Importance (CPI) bypasses the limitations of standard Permutation Importance (PI) in such high-dimensional, correlated settings.
Grouping variables statistically, via clustering or prior knowledge, gains some of that power back.
We show that the approach extended with stacking controls the type-I error even with highly-correlated groups.
arXiv Detail & Related papers (2023-12-18T00:21:47Z)
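A minimal scikit-learn sketch of the grouping idea: features within a group are permuted jointly and the drop in held-out R² is recorded as the group's importance. The toy data, group definitions, and scoring are assumptions; the paper's conditional (CPI) and stacking machinery is more involved.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)

# Toy data: group A holds three near-copies of signal s1, group B two of s2.
n = 500
s1, s2 = rng.normal(size=n), rng.normal(size=n)
X = np.column_stack([
    s1 + 0.05 * rng.normal(size=n),
    s1 + 0.05 * rng.normal(size=n),
    s1 + 0.05 * rng.normal(size=n),
    s2 + 0.05 * rng.normal(size=n),
    s2 + 0.05 * rng.normal(size=n),
])
y = 2.0 * s1 + 1.0 * s2 + 0.1 * rng.normal(size=n)
groups = {"A": [0, 1, 2], "B": [3, 4]}

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:300], y[:300])
base = model.score(X[300:], y[300:])                 # held-out R^2

def group_importance(cols, n_repeats=20):
    drops = []
    for _ in range(n_repeats):
        Xp = X[300:].copy()
        perm = rng.permutation(len(Xp))
        Xp[:, cols] = Xp[perm][:, cols]              # permute the whole group jointly
        drops.append(base - model.score(Xp, y[300:]))
    return float(np.mean(drops))

for name, cols in groups.items():
    print(name, round(group_importance(cols), 3))
```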
- Dynamically configured physics-informed neural network in topology optimization applications [4.403140515138818]
The physics-informed neural network (PINN) can avoid generating enormous amounts of data when solving forward problems.
A dynamically configured PINN-based topology optimization (DCPINN-TO) method is proposed.
The accuracy of the displacement prediction and optimization results indicate that the DCPINN-TO method is effective and efficient.
arXiv Detail & Related papers (2023-12-12T05:35:30Z)
- Physics-Informed Neural Networks for Material Model Calibration from Full-Field Displacement Data [0.0]
We propose PINNs for the calibration of models from full-field displacement and global force data in a realistic regime.
We demonstrate that the enhanced PINNs are capable of identifying material parameters from both experimental one-dimensional data and synthetic full-field displacement data.
arXiv Detail & Related papers (2022-12-15T11:01:32Z)
- Optimal Transport Based Refinement of Physics-Informed Neural Networks [0.0]
We propose a refinement strategy for the well-known Physics-Informed Neural Networks (PINNs) for solving partial differential equations (PDEs), based on the concept of Optimal Transport (OT).
PINN solvers have been found to suffer from a host of issues: spectral bias in fully-connected architectures, unstable gradient pathologies, and difficulties with convergence and accuracy.
We present a novel training strategy for solving the Fokker-Planck-Kolmogorov Equation (FPKE) using OT-based sampling to supplement the existing PINNs framework.
arXiv Detail & Related papers (2021-05-26T02:51:20Z)
- Rank-R FNN: A Tensor-Based Learning Model for High-Order Data Classification [69.26747803963907]
The Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes a Canonical Polyadic (CP) decomposition on its parameters.
It handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
arXiv Detail & Related papers (2021-04-11T16:37:32Z)
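A tiny sketch of a CP-constrained (rank-R) layer acting directly on third-order inputs: the full weight tensor of each hidden unit is never materialized, only its rank-R factors, so the input is never vectorized. The shapes and the tanh nonlinearity are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical 3rd-order inputs, e.g. small spatial-spectral patches (5 x 5 x 20).
d1, d2, d3, R, hidden, n = 5, 5, 20, 3, 8, 100
X = rng.normal(size=(n, d1, d2, d3))

# Rank-R CP factors per hidden unit: W[h] = sum_r a[h,r] (x) b[h,r] (x) c[h,r].
# Parameters per unit: R*(d1+d2+d3) instead of d1*d2*d3 for a dense weight tensor.
A = rng.normal(scale=0.1, size=(hidden, R, d1))
B = rng.normal(scale=0.1, size=(hidden, R, d2))
C = rng.normal(scale=0.1, size=(hidden, R, d3))
bias = np.zeros(hidden)

def forward(X):
    # <W[h], X[n]> computed factor-by-factor, summing over ranks and modes.
    z = np.einsum('nijk,hri,hrj,hrk->nh', X, A, B, C) + bias
    return np.tanh(z)

H = forward(X)
print(H.shape)        # (100, 8) hidden activations
```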
- NSL: Hybrid Interpretable Learning From Noisy Raw Data [66.15862011405882]
This paper introduces a hybrid neural-symbolic learning framework, called NSL, that learns interpretable rules from labelled unstructured data.
NSL combines pre-trained neural networks for feature extraction with FastLAS, a state-of-the-art inductive logic programming (ILP) system for rule learning under the answer set semantics.
We demonstrate that NSL is able to learn robust rules from MNIST data and achieve comparable or superior accuracy when compared to neural network and random forest baselines.
arXiv Detail & Related papers (2020-12-09T13:02:44Z)
- Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, truncated max-product belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
arXiv Detail & Related papers (2020-03-13T13:11:35Z)
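To make the inference step concrete, here is a minimal min-sum (max-product in the negative-log domain) belief propagation on a toy chain labeling problem. The costs and problem size are made up; the paper's learned, CNN-embedded BP-Layer is considerably richer.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy labeling problem on a chain: T nodes, L labels, unary data costs and a
# truncated-linear pairwise cost encouraging neighbouring nodes to agree.
T, L = 12, 4
unary = rng.random((T, L))
pairwise = 0.3 * np.minimum(np.abs(np.arange(L)[:, None] - np.arange(L)[None, :]), 2)

# Min-sum messages. On a chain one forward+backward sweep is exact; on loopy
# graphs a BP-Layer runs only a fixed, small ("truncated") number of sweeps.
msg_fwd = np.zeros((T, L))        # message arriving at node t from node t-1
msg_bwd = np.zeros((T, L))        # message arriving at node t from node t+1

for t in range(1, T):
    b = unary[t - 1] + msg_fwd[t - 1]
    msg_fwd[t] = np.min(b[:, None] + pairwise, axis=0)
    msg_fwd[t] -= msg_fwd[t].min()            # normalize for numerical stability
for t in range(T - 2, -1, -1):
    b = unary[t + 1] + msg_bwd[t + 1]
    msg_bwd[t] = np.min(b[:, None] + pairwise, axis=0)
    msg_bwd[t] -= msg_bwd[t].min()

labels = np.argmin(unary + msg_fwd + msg_bwd, axis=1)   # per-node MAP readout
print(labels)
```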
- Learning Likelihoods with Conditional Normalizing Flows [54.60456010771409]
Conditional normalizing flows (CNFs) are efficient in sampling and inference.
We present a study of CNFs where the mapping from the base density to the output space is conditioned on an input x, to model conditional densities p(y|x).
arXiv Detail & Related papers (2019-11-29T19:17:58Z)
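A minimal sketch of the idea: a single conditional affine flow maps y to a standard-normal base variable, with shift and log-scale produced by a conditioner on x, trained by maximizing the exact change-of-variables likelihood. The toy data, feature map, and plain gradient descent are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy conditional density: y | x ~ N(sin(x), (0.1 + 0.05 x^2)^2).
n = 2000
x = rng.uniform(-2, 2, n)
y = np.sin(x) + (0.1 + 0.05 * x**2) * rng.normal(size=n)

# One conditional affine flow: z = (y - mu(x)) * exp(-s(x)), with z ~ N(0, 1).
# mu and s are tiny feature-linear conditioners (assumed for the demo).
def features(x):
    return np.column_stack([np.ones_like(x), x, x**2, np.sin(x)])

theta_mu = np.zeros(4)
theta_s = np.zeros(4)

def nll(x, y):
    Phi = features(x)
    mu, s = Phi @ theta_mu, Phi @ theta_s
    z = (y - mu) * np.exp(-s)
    # -log p(y|x) = 0.5 z^2 + s + const (change-of-variables term is -s)
    return np.mean(0.5 * z**2 + s)

# Plain gradient descent on the exact conditional negative log-likelihood.
lr = 0.05
for _ in range(2000):
    Phi = features(x)
    mu, s = Phi @ theta_mu, Phi @ theta_s
    z = (y - mu) * np.exp(-s)
    theta_mu -= lr * (Phi.T @ (-z * np.exp(-s)) / n)
    theta_s -= lr * (Phi.T @ (1.0 - z**2) / n)

print("NLL:", round(float(nll(x, y)), 3))
# Sampling from p(y|x): draw z ~ N(0,1) and invert the flow, y = mu + exp(s) * z.
x0 = np.full(5, 1.0)
samples = features(x0) @ theta_mu + np.exp(features(x0) @ theta_s) * rng.normal(size=5)
print(np.round(samples, 2))
```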
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.