A Self-Constructing Multi-Expert Fuzzy System for High-dimensional Data Classification
- URL: http://arxiv.org/abs/2410.13390v1
- Date: Thu, 17 Oct 2024 09:41:54 GMT
- Title: A Self-Constructing Multi-Expert Fuzzy System for High-dimensional Data Classification
- Authors: Yingtao Ren, Yu-Cheng Chang, Thomas Do, Zehong Cao, Chin-Teng Lin
- Abstract summary: Fuzzy Neural Networks (FNNs) are effective machine learning models for classification tasks.
We propose a novel fuzzy system, the Self-Constructing Multi-Expert Fuzzy System (SOME-FS).
It combines two learning strategies: mixed structure learning and multi-expert advanced learning.
- Score: 33.926721742862156
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fuzzy Neural Networks (FNNs) are effective machine learning models for classification tasks, commonly based on the Takagi-Sugeno-Kang (TSK) fuzzy system. However, when faced with high-dimensional data, especially with noise, FNNs encounter challenges such as vanishing gradients, excessive fuzzy rules, and limited access to prior knowledge. To address these challenges, we propose a novel fuzzy system, the Self-Constructing Multi-Expert Fuzzy System (SOME-FS). It combines two learning strategies: mixed structure learning and multi-expert advanced learning. The former enables each base classifier to effectively determine its structure without requiring prior knowledge, while the latter tackles the issue of vanishing gradients by enabling each rule to focus on its local region, thereby enhancing the robustness of the fuzzy classifiers. The overall ensemble architecture enhances the stability and prediction performance of the fuzzy system. Our experimental results demonstrate that the proposed SOME-FS is effective in high-dimensional tabular data, especially in dealing with uncertainty. Moreover, our stable rule mining process can identify concise and core rules learned by the SOME-FS.
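Since SOME-FS builds on the TSK fuzzy system, a minimal zero-order TSK inference sketch may help fix ideas. All parameters below are toy values, not anything learned by the paper's method: each rule has Gaussian membership functions per input dimension, firing strengths are normalized, and the output is the firing-strength-weighted sum of per-rule constant consequents.

```python
import numpy as np

def tsk_predict(x, centers, sigmas, consequents):
    """Zero-order TSK inference for a single sample x.

    centers, sigmas: (n_rules, n_features) Gaussian membership parameters.
    consequents:     (n_rules,) constant consequent per rule.
    """
    # Rule firing strength: product of per-dimension Gaussian memberships.
    memberships = np.exp(-((x - centers) ** 2) / (2.0 * sigmas ** 2))
    firing = memberships.prod(axis=1)          # (n_rules,)
    weights = firing / firing.sum()            # normalized firing strengths
    return float(weights @ consequents)        # weighted consequent average

# Two toy rules over a 2-D input, one "negative" region and one "positive".
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
sigmas = np.ones((2, 2))
consequents = np.array([-1.0, 1.0])

y = tsk_predict(np.array([0.0, 0.0]), centers, sigmas, consequents)
```

A sample at a rule's center fires that rule most strongly, so the output leans toward that rule's consequent; this locality is what the paper's multi-expert learning exploits per rule.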
Related papers
- DUKAE: DUal-level Knowledge Accumulation and Ensemble for Pre-Trained Model-Based Continual Learning [19.684132921720945]
Pre-trained model-based continual learning (PTMCL) has garnered growing attention, as it enables more rapid acquisition of new knowledge.
We propose a method named DUal-level Knowledge Accumulation and Ensemble (DUKAE) that leverages both feature-level and decision-level knowledge accumulation.
Experiments on CIFAR-100, ImageNet-R, CUB-200, and Cars-196 datasets demonstrate the superior performance of our approach.
arXiv Detail & Related papers (2025-04-09T01:40:38Z)
- Towards Foundational Models for Dynamical System Reconstruction: Hierarchical Meta-Learning via Mixture of Experts [0.7373617024876724]
We introduce MixER: Mixture of Expert Reconstructors, a novel sparse top-1 MoE layer employing a custom gating update algorithm based on $K$-means and least squares.
Experiments validate MixER's capabilities, demonstrating efficient training and scalability to systems of up to ten ordinary parametric differential equations.
Our layer underperforms state-of-the-art meta-learners in high-data regimes, particularly when each expert is constrained to process only a fraction of a dataset composed of highly related data points.
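The sparse top-1 routing idea in MixER can be sketched as follows; the centroids, data, and linear experts are illustrative stand-ins, and the paper's actual gating update is more involved than this K-means-style assignment:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two families of 1-D linear "systems": y = 2x near x = -2, y = -3x near x = +2.
x = np.concatenate([rng.normal(-2, 0.3, 50), rng.normal(2, 0.3, 50)])
y = np.where(x < 0, 2 * x, -3 * x)

# K-means-style centroids act as the top-1 gate: each point is routed
# to the single expert whose centroid is nearest.
centroids = np.array([-2.0, 2.0])
assign = np.argmin(np.abs(x[:, None] - centroids[None, :]), axis=1)

# Each expert is a least-squares readout fit only on the points routed to it.
experts = []
for k in range(2):
    xk, yk = x[assign == k], y[assign == k]
    experts.append((xk @ yk) / (xk @ xk))  # 1-D least squares through the origin

def predict(xq):
    k = int(np.argmin(np.abs(xq - centroids)))  # top-1 routing
    return experts[k] * xq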
arXiv Detail & Related papers (2025-02-07T21:16:43Z)
- Continuous Knowledge-Preserving Decomposition for Few-Shot Continual Learning [80.31842748505895]
Few-shot class-incremental learning (FSCIL) involves learning new classes from limited data while retaining prior knowledge.
We propose Continuous Knowledge-Preserving Decomposition for FSCIL (CKPD-FSCIL), a framework that decomposes a model's weights into two parts.
Experiments on multiple benchmarks show that CKPD-FSCIL outperforms state-of-the-art methods.
arXiv Detail & Related papers (2025-01-09T07:18:48Z)
- High-Performance Few-Shot Segmentation with Foundation Models: An Empirical Study [64.06777376676513]
We develop a few-shot segmentation (FSS) framework based on foundation models.
To be specific, we propose a simple approach to extract implicit knowledge from foundation models to construct coarse correspondence.
Experiments on two widely used datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-09-10T08:04:11Z)
- Efficient Prompt Tuning of Large Vision-Language Model for Fine-Grained Ship Classification [62.425462136772666]
Fine-grained ship classification in remote sensing (RS-FGSC) poses a significant challenge due to the high similarity between classes and the limited availability of labeled data.
Recent advancements in large pre-trained Vision-Language Models (VLMs) have demonstrated impressive capabilities in few-shot or zero-shot learning.
This study delves into harnessing the potential of VLMs to enhance classification accuracy for unseen ship categories.
arXiv Detail & Related papers (2024-03-13T05:48:58Z)
- Neuro-mimetic Task-free Unsupervised Online Learning with Continual Self-Organizing Maps [56.827895559823126]
Self-organizing map (SOM) is a neural model often used in clustering and dimensionality reduction.
We propose a generalization of the SOM, the continual SOM, which is capable of online unsupervised learning under a low memory budget.
Our results, on benchmarks including MNIST, Kuzushiji-MNIST, and Fashion-MNIST, show an almost two-fold increase in accuracy.
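A minimal sketch of the online (streaming) SOM update that the continual SOM generalizes; the grid size, learning rate, and toy data here are illustrative, and the continual variant's decay and memory mechanisms are not shown:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny 4x4 SOM trained online, one sample at a time; the only "memory"
# is the weight grid itself, which suits a low memory budget.
grid = rng.normal(size=(4, 4, 2))
coords = np.stack(np.meshgrid(np.arange(4), np.arange(4), indexing="ij"), axis=-1)

def som_step(grid, x, lr=0.2, radius=1.0):
    # Best matching unit (BMU): the unit whose weight is closest to the input.
    d = np.linalg.norm(grid - x, axis=-1)
    bmu = np.unravel_index(np.argmin(d), d.shape)
    # Gaussian neighborhood around the BMU on the 2-D grid of units.
    gdist = np.linalg.norm(coords - np.array(bmu), axis=-1)
    h = np.exp(-(gdist ** 2) / (2 * radius ** 2))[..., None]
    grid += lr * h * (x - grid)  # pull BMU and its neighbors toward x
    return grid

# Stream two well-separated clusters through the map, sample by sample.
data = np.concatenate([rng.normal(-3, 0.2, (200, 2)), rng.normal(3, 0.2, (200, 2))])
rng.shuffle(data)
for x in data:
    grid = som_step(grid, x)
```

After streaming, some units settle near each cluster, so the map serves both clustering and a coarse 2-D embedding of the data.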
arXiv Detail & Related papers (2024-02-19T19:11:22Z)
- Unsupervised Spatial-Temporal Feature Enrichment and Fidelity Preservation Network for Skeleton based Action Recognition [20.07820929037547]
Unsupervised skeleton based action recognition has achieved remarkable progress recently.
Existing unsupervised learning methods suffer from a severe overfitting problem.
This paper presents an Unsupervised spatial-temporal Feature Enrichment and Fidelity Preservation framework to generate rich distributed features.
arXiv Detail & Related papers (2024-01-25T09:24:07Z)
- Towards Understanding Mixture of Experts in Deep Learning [95.27215939891511]
We study how the MoE layer improves the performance of neural network learning.
Our results suggest that the cluster structure of the underlying problem and the non-linearity of the expert are pivotal to the success of MoE.
arXiv Detail & Related papers (2022-08-04T17:59:10Z)
- Rank-R FNN: A Tensor-Based Learning Model for High-Order Data Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes Canonical/Polyadic decomposition on its parameters.
First, it handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
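The key parameterization idea can be sketched with toy dimensions (this is a sketch of CP-factored weights acting on a matrix input, not the paper's full architecture): instead of flattening a (d1, d2) input and learning d1*d2 weights, keep R factor vectors per mode, so the weight matrix is a sum of R rank-1 outer products.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-R (CP) weights for a (d1, d2) matrix input: factor vectors per mode
# replace a dense d1*d2 weight matrix, so structure along each data
# dimension is kept and the parameter count drops to R*(d1 + d2).
d1, d2, R = 5, 6, 2
A = rng.normal(size=(R, d1))
B = rng.normal(size=(R, d2))

def rank_r_response(X):
    # Response of the rank-R weight to a matrix input, no vectorization:
    # sum_r a_r^T X b_r
    return sum(A[r] @ X @ B[r] for r in range(R))

X = rng.normal(size=(d1, d2))
# The equivalent dense weight is the sum of rank-1 outer products.
full_W = sum(np.outer(A[r], B[r]) for r in range(R))
```

The factored response matches a dense inner product with the reconstructed weight, while storing only R*(d1 + d2) parameters instead of d1*d2.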
arXiv Detail & Related papers (2021-04-11T16:37:32Z)
- Hyperspherical embedding for novel class classification [1.5952956981784217]
We present a constraint-based approach applied to representations in the latent space under the normalized softmax loss.
We experimentally validate the proposed approach for the classification of unseen classes on different datasets using both metric learning and the normalized softmax loss.
Our results show that our proposed strategy not only can be trained efficiently on a larger set of classes, since it does not require pairwise learning, but also achieves better classification results than the metric learning strategies.
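The normalized softmax loss can be sketched as a cosine classifier (the scale value and names here are illustrative): both the features and the class weight vectors are L2-normalized, so every class prototype lies on the unit hypersphere and logits depend only on angle, not magnitude.

```python
import numpy as np

def normalized_softmax_logits(features, class_weights, scale=16.0):
    """Logits under the normalized softmax: scaled cosine similarities
    between L2-normalized features and L2-normalized class prototypes."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = class_weights / np.linalg.norm(class_weights, axis=1, keepdims=True)
    return scale * f @ w.T

# A feature aligned with class 0's prototype wins regardless of magnitude.
weights = np.array([[1.0, 0.0], [0.0, 1.0]])
logits = normalized_softmax_logits(np.array([[5.0, 0.5]]), weights)
```

Because only directions matter, no pairwise comparisons are needed during training, which is what makes the approach scale to larger class sets.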
arXiv Detail & Related papers (2021-02-05T15:42:13Z)
- Feature Selection Based on Sparse Neural Network Layer with Normalizing Constraints [0.0]
We propose a new neural-network-based feature selection approach that introduces two constraints whose satisfaction leads to a sparse FS layer.
The results confirm that proposed Feature Selection Based on Sparse Neural Network Layer with Normalizing Constraints (SNEL-FS) is able to select the important features and yields superior performance compared to other conventional FS methods.
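The general shape of such an FS layer can be sketched as an elementwise gate in front of a linear readout; the paper's two specific normalizing constraints are not reproduced here, and a plain L1 proximal (soft-threshold) step on the gates is used as a stand-in sparsifier on toy data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: only feature 0 matters; features 1 and 2 are pure noise.
X = rng.normal(size=(200, 3))
y = 3.0 * X[:, 0]

# FS layer: gates g scale each feature before the linear readout w.
# A soft-threshold step after each gradient update drives the gates of
# irrelevant features to zero, performing feature selection.
g = np.ones(3)
w = np.zeros(3)
lr, lam = 0.05, 0.2
n = len(y)

for _ in range(500):
    err = (X * g) @ w - y
    grad_w = (X * g).T @ err / n
    grad_g = (X.T @ err) * w / n
    w -= lr * grad_w
    g -= lr * grad_g
    g = np.sign(g) * np.maximum(np.abs(g) - lr * lam, 0.0)  # soft threshold
```

After training, the surviving gates mark the selected features, and the effective coefficient g[0] * w[0] approximates the true one.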
arXiv Detail & Related papers (2020-12-11T14:14:33Z)
- Capturing scattered discriminative information using a deep architecture in acoustic scene classification [49.86640645460706]
In this study, we investigate various methods to capture discriminative information and simultaneously mitigate the overfitting problem.
We adopt a max feature map method to replace conventional non-linear activations in a deep neural network.
Two data augmentation methods and two deep architecture modules are further explored to reduce overfitting and sustain the system's discriminative power.
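The max feature map replaces a conventional activation by competition between paired channels; a minimal sketch (shapes and values are illustrative):

```python
import numpy as np

def max_feature_map(x):
    """Max-Feature-Map activation: split the channel (last) dimension in
    half and take the elementwise maximum of the two halves, so the layer
    outputs half as many channels and acts as a competitive selector
    rather than a thresholding non-linearity like ReLU."""
    a, b = np.split(x, 2, axis=-1)
    return np.maximum(a, b)

# Halves (1.0, -2.0) and (0.5, 3.0) compete elementwise.
out = max_feature_map(np.array([[1.0, -2.0, 0.5, 3.0]]))
```

Unlike ReLU, no unit is ever fully suppressed to a constant zero region; one of each channel pair always wins, which helps retain scattered discriminative information.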
arXiv Detail & Related papers (2020-07-09T08:32:06Z)
- Fuzzy c-Means Clustering for Persistence Diagrams [42.1666496315913]
We extend the ubiquitous Fuzzy c-Means (FCM) clustering algorithm to the space of persistence diagrams.
We show that our algorithm captures the topological structure of data without requiring prior topological knowledge.
In materials science, we classify transformed lattice structure datasets for the first time.
arXiv Detail & Related papers (2020-06-04T11:45:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.