Attribute Fusion-based Classifier on Framework of Belief Structure
- URL: http://arxiv.org/abs/2509.00754v2
- Date: Mon, 06 Oct 2025 18:02:55 GMT
- Title: Attribute Fusion-based Classifier on Framework of Belief Structure
- Authors: Qiying Hu, Yingying Liang, Qianli Zhou, Witold Pedrycz,
- Abstract summary: Dempster-Shafer Theory (DST) provides a powerful framework for modeling uncertainty and has been widely applied to multi-attribute classification tasks. Traditional DST-based attribute fusion classifiers suffer from oversimplified membership function modeling and limited exploitation of the belief structure brought by basic probability assignment (BPA). This paper presents an enhanced attribute fusion-based classifier that addresses these limitations through two key innovations.
- Score: 46.24928730489845
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dempster-Shafer Theory (DST) provides a powerful framework for modeling uncertainty and has been widely applied to multi-attribute classification tasks. However, traditional DST-based attribute fusion-based classifiers suffer from oversimplified membership function modeling and limited exploitation of the belief structure brought by basic probability assignment (BPA), reducing their effectiveness in complex real-world scenarios. This paper presents an enhanced attribute fusion-based classifier that addresses these limitations through two key innovations. First, we adopt a selective modeling strategy that utilizes both single Gaussian and Gaussian Mixture Models (GMMs) for membership function construction, with model selection guided by cross-validation and a tailored evaluation metric. Second, we introduce a novel method to transform the possibility distribution into a BPA by combining simple BPAs derived from normalized possibility distributions, enabling a much richer and more flexible representation of uncertain information. Furthermore, we apply the belief structure-based BPA generation method to the evidential K-Nearest Neighbors (EKNN) classifier, enhancing its ability to incorporate uncertainty information into decision-making. Comprehensive experiments on benchmark datasets are conducted to evaluate the performance of the proposed attribute fusion-based classifier and the enhanced evidential K-Nearest Neighbors classifier in comparison with both evidential classifiers and conventional machine learning classifiers. The results demonstrate that the proposed classifier outperforms the best existing evidential classifier, achieving an average accuracy improvement of 4.86%, while maintaining low variance, thus confirming its superior effectiveness and robustness.
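The core pipeline the abstract describes (per-attribute possibility distributions turned into simple BPAs, then fused) can be sketched as follows. This is a minimal illustration of the general DST machinery, not the paper's exact construction: the two-class frame, the class labels, and the rule "put mass π(c) on the singleton and the remainder on the whole frame" are all assumptions made here for demonstration.

```python
from itertools import product

FRAME = frozenset({"A", "B"})  # hypothetical two-class frame of discernment


def simple_bpa(cls, possibility):
    """Build one simple BPA per class: mass `possibility` on the
    singleton {cls}, and the remaining mass on the whole frame
    (representing total ignorance)."""
    return {frozenset({cls}): possibility, FRAME: 1.0 - possibility}


def dempster_combine(m1, m2):
    """Dempster's rule of combination for two BPAs, each given as a
    dict mapping frozenset focal elements to masses summing to 1."""
    fused, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    # Normalize by the non-conflicting mass.
    return {s: w / (1.0 - conflict) for s, w in fused.items()}


# Combine evidence from a possibility distribution over two classes:
m = dempster_combine(simple_bpa("A", 0.9), simple_bpa("B", 0.4))
# Most mass ends up on {A}, with some residual mass on the full frame.
```

The residual mass on the frame is what distinguishes this belief-structure representation from a plain probability vector: it keeps an explicit account of how much evidence is simply undecided.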
Related papers
- CA-AFP: Cluster-Aware Adaptive Federated Pruning [1.345821655503426]
Federated Learning (FL) faces major challenges in real-world deployments due to statistical heterogeneity across clients. We propose CA-AFP, a unified framework that jointly addresses both challenges by performing cluster-specific model pruning. We evaluate CA-AFP on two widely used human activity recognition benchmarks, UCI HAR and WISDM, under natural user-based federated partitions.
arXiv Detail & Related papers (2026-03-02T11:04:25Z) - BOFA: Bridge-Layer Orthogonal Low-Rank Fusion for CLIP-Based Class-Incremental Learning [84.56022893225422]
Class-Incremental Learning (CIL) aims to continually learn new categories without forgetting previously acquired knowledge. Applying vision-language models such as CLIP to CIL poses two major challenges: (1) adapting to downstream tasks often requires additional learnable modules, increasing model complexity and susceptibility to forgetting; and (2) while multi-modal representations offer complementary strengths, existing methods have yet to fully realize their potential in effectively integrating visual and textual modalities.
arXiv Detail & Related papers (2025-11-14T15:51:40Z) - Your VAR Model is Secretly an Efficient and Explainable Generative Classifier [19.629406299980463]
We propose a novel generative model built on recent advances in visual autoregressive (VAR) modeling. We show that the VAR-based method has fundamentally different properties from diffusion-based methods. In particular, due to its tractable likelihood, the VAR-based classifier enables visual explainability via tokenwise mutual information.
arXiv Detail & Related papers (2025-10-14T01:59:01Z) - Bayesian Test-time Adaptation for Object Recognition and Detection with Vision-language Models [86.53246292425699]
We present BCA+, a training-free framework for test-time adaptation (TTA) in both object recognition and detection. We formulate adaptation as a Bayesian inference problem, where final predictions are generated by fusing the initial VLM output with a cache-based prediction. BCA+ achieves state-of-the-art performance on both recognition and detection benchmarks.
arXiv Detail & Related papers (2025-10-03T06:27:33Z) - Discrete Markov Bridge [93.64996843697278]
We propose a novel framework specifically designed for discrete representation learning, called Discrete Markov Bridge. Our approach is built upon two key components: Matrix Learning and Score Learning.
arXiv Detail & Related papers (2025-05-26T09:32:12Z) - Self-Boost via Optimal Retraining: An Analysis via Approximate Message Passing [58.52119063742121]
Retraining a model using its own predictions together with the original, potentially noisy labels is a well-known strategy for improving model performance. This paper addresses the question of how to optimally combine the model's predictions and the provided labels. Our main contribution is the derivation of the Bayes-optimal aggregator function to combine the current model's predictions and the given labels.
arXiv Detail & Related papers (2025-05-21T07:16:44Z) - Confidence-aware Contrastive Learning for Selective Classification [20.573658672018066]
This work provides a generalization bound for selective classification, showing that optimizing feature layers helps improve selective classification performance.
Inspired by this theory, we propose to explicitly improve the selective classification model at the feature level for the first time, leading to a novel Confidence-aware Contrastive Learning method for Selective Classification, CCL-SC.
arXiv Detail & Related papers (2024-06-07T08:43:53Z) - GCC: Generative Calibration Clustering [55.44944397168619]
We propose a novel Generative Calibration Clustering (GCC) method to incorporate feature learning and augmentation into the clustering procedure.
First, we develop a discriminative feature alignment mechanism to discover the intrinsic relationship across real and generated samples.
Second, we design a self-supervised metric learning scheme to generate more reliable cluster assignments.
arXiv Detail & Related papers (2024-04-14T01:51:11Z) - Bayesian Exploration of Pre-trained Models for Low-shot Image Classification [14.211305168954594]
This work proposes a simple and effective probabilistic model ensemble framework based on Gaussian processes.
We achieve the integration of prior knowledge by specifying the mean function with CLIP and the kernel function.
We demonstrate that our method consistently outperforms competitive ensemble baselines regarding predictive performance.
arXiv Detail & Related papers (2024-03-30T10:25:28Z) - Variational Classification [51.2541371924591]
We derive a variational objective to train the model, analogous to the evidence lower bound (ELBO) used to train variational auto-encoders.
Treating inputs to the softmax layer as samples of a latent variable, our abstracted perspective reveals a potential inconsistency.
We induce a chosen latent distribution, instead of the implicit assumption found in a standard softmax layer.
arXiv Detail & Related papers (2023-05-17T17:47:19Z) - Self-Certifying Classification by Linearized Deep Assignment [65.0100925582087]
We propose a novel class of deep predictors for classifying metric data on graphs within the PAC-Bayes risk certification paradigm.
Building on the recent PAC-Bayes literature and data-dependent priors, this approach enables learning posterior distributions on the hypothesis space.
arXiv Detail & Related papers (2022-01-26T19:59:14Z) - MCDAL: Maximum Classifier Discrepancy for Active Learning [74.73133545019877]
Recent state-of-the-art active learning methods have mostly leveraged Generative Adversarial Networks (GAN) for sample acquisition.
We propose in this paper a novel active learning framework that we call Maximum Classifier Discrepancy for Active Learning (MCDAL).
In particular, we utilize two auxiliary classification layers that learn tighter decision boundaries by maximizing the discrepancies among them.
arXiv Detail & Related papers (2021-07-23T06:57:08Z) - A Normative Model of Classifier Fusion [4.111899441919164]
We present a hierarchical Bayesian model of probabilistic classification fusion based on a new correlated Dirichlet distribution.
The proposed model naturally accommodates the classic Independent Opinion Pool and other independent fusion algorithms as special cases.
It is evaluated by uncertainty reduction and correctness of fusion on synthetic and real-world data sets.
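The classic Independent Opinion Pool, which this model accommodates as a special case, can be sketched in a few lines: each classifier's posterior is treated as independent evidence, so fusion is an elementwise product followed by renormalization. The class labels and posterior values below are hypothetical examples, not taken from the paper.

```python
def independent_opinion_pool(posteriors):
    """Independent Opinion Pool: fuse per-classifier posterior
    distributions (dicts class -> probability) by elementwise
    product, then renormalize so the result sums to 1."""
    classes = list(posteriors[0])
    fused = {c: 1.0 for c in classes}
    for p in posteriors:
        for c in classes:
            fused[c] *= p[c]
    z = sum(fused.values())
    return {c: v / z for c, v in fused.items()}


# Two classifiers that both lean toward "cat" reinforce each other:
fused = independent_opinion_pool([
    {"cat": 0.7, "dog": 0.3},
    {"cat": 0.6, "dog": 0.4},
])
```

Because the product sharpens agreement and suppresses disagreement, the pooled posterior is more confident than either input when the classifiers concur, which is exactly the behavior the independence assumption buys (and what correlated-classifier models like the one above generalize away from).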
arXiv Detail & Related papers (2021-06-03T11:52:13Z) - Explaining a Series of Models by Propagating Local Feature Attributions [9.66840768820136]
Pipelines involving several machine learning models improve performance in many domains but are difficult to understand.
We introduce a framework to propagate local feature attributions through complex pipelines of models based on a connection to the Shapley value.
Our framework enables us to draw higher-level conclusions based on groups of gene expression features for Alzheimer's and breast cancer histologic grade prediction.
arXiv Detail & Related papers (2021-04-30T22:20:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.