Isopignistic Canonical Decomposition via Belief Evolution Network
- URL: http://arxiv.org/abs/2405.02653v2
- Date: Fri, 30 Aug 2024 12:52:31 GMT
- Title: Isopignistic Canonical Decomposition via Belief Evolution Network
- Authors: Qianli Zhou, Tianxiang Zhan, Yong Deng
- Abstract summary: We propose an isopignistic transformation based on the belief evolution network.
This decomposition offers a reverse path between the possibility distribution and its isopignistic mass functions.
This paper establishes a theoretical basis for building general models of artificial intelligence based on probability theory, Dempster-Shafer theory, and possibility theory.
- Score: 12.459136964317942
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Developing a general information processing model in uncertain environments is fundamental for the advancement of explainable artificial intelligence. Dempster-Shafer theory of evidence is a well-known and effective reasoning method for representing epistemic uncertainty, and it is closely related to subjective probability theory and possibility theory. Although these theories can be transformed into each other under certain belief structures, there remains a lack of a clear and interpretable transformation process, as well as a unified approach for information processing. In this paper, we address these issues from the perspectives of isopignistic belief functions and the hyper-cautious transferable belief model. First, we propose an isopignistic transformation based on the belief evolution network. This transformation allows the information granule to be adjusted while retaining the potential decision outcome. The isopignistic transformation is integrated with the hyper-cautious transferable belief model to establish a new canonical decomposition. This decomposition offers a reverse path between a possibility distribution and its isopignistic mass functions. The result of the canonical decomposition, called the isopignistic function, is a distribution with identical information content that reflects the propensity and relative commitment degree of the basic probability assignment (BPA). Furthermore, this paper introduces a method to reconstruct the basic belief assignment by adjusting the isopignistic function, and explores the advantages of this approach for modeling and handling uncertainty within the hyper-cautious transferable belief model. More generally, this paper establishes a theoretical basis for building general models of artificial intelligence based on probability theory, Dempster-Shafer theory, and possibility theory.
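To make the terminology concrete, the following is a minimal Python sketch of the standard pignistic transformation BetP from Dempster-Shafer theory; two basic probability assignments are isopignistic when they induce the same BetP. This is background illustration only, not the paper's isopignistic transformation or belief evolution network, and the function names and toy example are hypothetical.

```python
# Minimal background sketch (not the paper's algorithm): the standard pignistic
# transformation BetP from Dempster-Shafer theory. Two basic probability
# assignments (BPAs) are isopignistic when they induce the same BetP.

def betp(mass):
    """Pignistic probability of a BPA given as {frozenset focal element: mass}."""
    m_empty = mass.get(frozenset(), 0.0)  # mass on the empty set, if any
    bet = {}
    for focal, m in mass.items():
        if not focal:
            continue
        share = m / (len(focal) * (1.0 - m_empty))  # split mass evenly over elements
        for x in focal:
            bet[x] = bet.get(x, 0.0) + share
    return bet

def isopignistic(m1, m2, tol=1e-9):
    """True if the two BPAs share the same pignistic probability distribution."""
    b1, b2 = betp(m1), betp(m2)
    return all(abs(b1.get(x, 0.0) - b2.get(x, 0.0)) < tol for x in set(b1) | set(b2))

# A non-Bayesian BPA and a Bayesian BPA on {a, b} with the same BetP.
m1 = {frozenset({"a", "b"}): 0.4, frozenset({"a"}): 0.3, frozenset({"b"}): 0.3}
m2 = {frozenset({"a"}): 0.5, frozenset({"b"}): 0.5}
print(betp(m1))              # {'a': 0.5, 'b': 0.5}
print(isopignistic(m1, m2))  # True
```

The paper's contribution concerns the reverse direction, recovering isopignistic mass functions from the possibility/pignistic level via the canonical decomposition; the sketch above only fixes the vocabulary.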
Related papers
- Transferable Belief Model on Quantum Circuits [18.733294090807995]
The transferable belief model is a semantic interpretation of Dempster-Shafer theory.
This paper introduces a new perspective on basic information representation for quantum AI models.
arXiv Detail & Related papers (2024-10-11T16:17:20Z)
- Learning Discrete Concepts in Latent Hierarchical Models [73.01229236386148]
Learning concepts from natural high-dimensional data holds potential in building human-aligned and interpretable machine learning models.
We formalize concepts as discrete latent causal variables that are related via a hierarchical causal model.
We substantiate our theoretical claims with synthetic data experiments.
arXiv Detail & Related papers (2024-06-01T18:01:03Z)
- Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results conducted on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z)
- A Belief Model for Conflicting and Uncertain Evidence -- Connecting Dempster-Shafer Theory and the Topology of Evidence [8.295493796476766]
We propose a new model for measuring degrees of belief based on possibly inconsistent, incomplete, and uncertain evidence.
We show that computing degrees of belief with this model is #P-complete in general.
arXiv Detail & Related papers (2023-06-06T09:30:48Z)
- Causal Discovery in Heterogeneous Environments Under the Sparse Mechanism Shift Hypothesis [7.895866278697778]
Machine learning approaches commonly rely on the assumption of independent and identically distributed (i.i.d.) data.
In reality, this assumption is almost always violated due to distribution shifts between environments.
We propose the Mechanism Shift Score (MSS), a score-based approach amenable to various empirical estimators.
arXiv Detail & Related papers (2022-06-04T15:39:30Z)
- Principled Knowledge Extrapolation with GANs [92.62635018136476]
We study counterfactual synthesis from a new perspective of knowledge extrapolation.
We show that an adversarial game with a closed-form discriminator can be used to address the knowledge extrapolation problem.
Our method enjoys both elegant theoretical guarantees and superior performance in many scenarios.
arXiv Detail & Related papers (2022-05-21T08:39:42Z)
- Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should regard several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z)
- The intersection probability: betting with probability intervals [7.655239948659381]
We propose the use of the intersection probability, a transform derived originally for belief functions in the framework of the geometric approach to uncertainty.
We outline a possible decision making framework for probability intervals, analogous to the Transferable Belief Model for belief functions.
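For a rough sense of how such a betting-style transform can act on probability intervals, here is a minimal sketch using one common form of the intersection probability, p(x) = l(x) + beta * (u(x) - l(x)) with a single beta chosen so that p sums to 1. Treat the exact formula as an assumption drawn from the broader geometric-approach literature, not as that paper's definition, and the function names as hypothetical.

```python
# Rough sketch with hypothetical names: intersection probability for
# probability intervals [l(x), u(x)]:
#   p(x) = l(x) + beta * (u(x) - l(x)),
#   beta = (1 - sum_x l(x)) / sum_x (u(x) - l(x)),
# i.e. every interval is "used" in the same proportion beta.

def intersection_probability(lower, upper):
    """Map interval bounds {x: l(x)} and {x: u(x)} to a single probability distribution."""
    slack = sum(upper[x] - lower[x] for x in lower)
    beta = (1.0 - sum(lower.values())) / slack if slack > 0 else 0.0
    return {x: lower[x] + beta * (upper[x] - lower[x]) for x in lower}

lo = {"a": 0.2, "b": 0.1, "c": 0.3}
hi = {"a": 0.5, "b": 0.4, "c": 0.6}
p = intersection_probability(lo, hi)
print(p, sum(p.values()))  # the values sum to 1 by construction
```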
arXiv Detail & Related papers (2022-01-05T17:35:06Z)
- Properties from Mechanisms: An Equivariance Perspective on Identifiable Representation Learning [79.4957965474334]
A key goal of unsupervised representation learning is "inverting" a data-generating process to recover its latent properties.
This paper asks, "Can we instead identify latent properties by leveraging knowledge of the mechanisms that govern their evolution?"
We provide a complete characterization of the sources of non-identifiability as we vary knowledge about a set of possible mechanisms.
arXiv Detail & Related papers (2021-10-29T14:04:08Z)
- Discovering Latent Causal Variables via Mechanism Sparsity: A New Principle for Nonlinear ICA [81.4991350761909]
Independent component analysis (ICA) refers to an ensemble of methods which formalize this goal and provide estimation procedures for practical application.
We show that the latent variables can be recovered up to a permutation if one regularizes the latent mechanisms to be sparse.
arXiv Detail & Related papers (2021-07-21T14:22:14Z)
- No Substitute for Functionalism -- A Reply to 'Falsification & Consciousness' [0.0]
This reply identifies avenues of expansion for the model proposed in [1], allowing us to distinguish between different types of variation.
Motivated by examples from neural networks, state machines and Turing machines, we will prove that substitutions do not exist for a very broad class of Level-1 functionalist theories.
arXiv Detail & Related papers (2020-05-28T08:12:07Z)