A Negation Quantum Decision Model to Predict the Interference Effect in
Categorization
- URL: http://arxiv.org/abs/2104.09058v1
- Date: Mon, 19 Apr 2021 05:30:00 GMT
- Title: A Negation Quantum Decision Model to Predict the Interference Effect in
Categorization
- Authors: Qinyuan Wu and Yong Deng
- Abstract summary: In some cases, categorization causes an interference effect that violates the law of total probability.
A negation quantum model (NQ model) is developed in this article to predict this interference.
- Score: 3.997680012976965
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Categorization is a significant task in decision-making, which is a key
part of human behavior. In some cases, categorization causes an interference
effect that violates the law of total probability. A negation quantum model
(NQ model) is developed in this article to predict this interference. Taking
advantage of negation to bring more information into the distribution from a
different perspective, the proposed model combines the negation of a
probability distribution with a quantum decision model. The phase information
contained in quantum probability, together with its special calculation method,
readily represents the interference effect. The results of the proposed NQ
model are close to the real experimental data and show less error than
existing models.
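The two ingredients named in the abstract can be sketched concretely. The following is a minimal, hypothetical Python sketch — not the authors' implementation — combining Yager's negation of a probability distribution, p̄_i = (1 − p_i)/(n − 1), with the quantum-style law of total probability, P(B) = P(A)P(B|A) + P(¬A)P(B|¬A) + 2√(P(A)P(B|A)P(¬A)P(B|¬A))·cos θ. Function names and example numbers are illustrative assumptions, not values from the paper.

```python
import math

def negation(p):
    """Yager's negation of a discrete probability distribution:
    each mass p_i is replaced by (1 - p_i) / (n - 1), which
    redistributes belief and still sums to 1 for n >= 2 outcomes."""
    n = len(p)
    return [(1.0 - pi) / (n - 1) for pi in p]

def quantum_total_probability(p_a, p_b_given_a, p_b_given_not_a, theta):
    """Quantum-style law of total probability with an interference term.
    theta = pi/2 makes cos(theta) = 0 and recovers the classical law;
    other phases add constructive or destructive interference."""
    p_not_a = 1.0 - p_a
    classical = p_a * p_b_given_a + p_not_a * p_b_given_not_a
    interference = 2.0 * math.sqrt(
        p_a * p_b_given_a * p_not_a * p_b_given_not_a
    ) * math.cos(theta)
    return classical + interference

# Illustrative numbers only:
p = [0.7, 0.2, 0.1]
print(negation(p))  # negated distribution; still sums to 1
print(quantum_total_probability(0.6, 0.5, 0.4, math.pi / 2))  # classical case
print(quantum_total_probability(0.6, 0.5, 0.4, 2.0))          # with interference
```

The sketch shows only the structure: the negation step broadens the distribution before the quantum step, whose phase θ is what lets the model fit data that violate the classical law of total probability.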
Related papers
- What is \textit{Quantum} in Probabilistic Explanations of the Sure Thing
Principle Violation? [0.0]
The Prisoner's Dilemma game (PDG) is one of the simplest test-beds for the probabilistic nature of the human decision-making process.
Quantum probabilistic models can explain this violation as a second-order interference effect.
We discuss the role of other quantum information-theoretical quantities, such as quantum entanglement, in the decision-making process.
arXiv Detail & Related papers (2023-06-21T00:01:01Z) - Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results conducted on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z) - Trainability barriers and opportunities in quantum generative modeling [0.0]
We investigate the barriers to the trainability of quantum generative models.
We show that using implicit generative models with explicit losses leads to a new flavour of barren plateau.
We propose a new local quantum fidelity-type loss which, by leveraging quantum circuits, is both faithful and enjoys trainability guarantees.
arXiv Detail & Related papers (2023-05-04T14:45:02Z) - Linking a predictive model to causal effect estimation [21.869233469885856]
This paper first tackles the challenge of estimating the causal effect of any feature (as the treatment) on the outcome w.r.t. a given instance.
The theoretical results naturally link a predictive model to causal effect estimations and imply that a predictive model is causally interpretable.
We use experiments to demonstrate that various types of predictive models, when satisfying the conditions identified in this paper, can estimate the causal effects of features as accurately as state-of-the-art causal effect estimation methods.
arXiv Detail & Related papers (2023-04-10T13:08:16Z) - Probabilistic Variational Causal Effect as A new Theory for Causal
Reasoning [0.0]
We introduce a new causal framework capable of dealing with probabilistic and non-probabilistic problems.
Our formula of causal effect uses the idea of total variation of a function integrated with probability theory.
arXiv Detail & Related papers (2022-08-12T13:34:17Z) - Variance Minimization in the Wasserstein Space for Invariant Causal
Prediction [72.13445677280792]
In this work, we show that the approach taken in ICP may be reformulated as a series of nonparametric tests that scale linearly in the number of predictors.
Each of these tests relies on the minimization of a novel loss function that is derived from tools in optimal transport theory.
We prove under mild assumptions that our method is able to recover the set of identifiable direct causes, and we demonstrate in our experiments that it is competitive with other benchmark causal discovery algorithms.
arXiv Detail & Related papers (2021-10-13T22:30:47Z) - Estimation of Bivariate Structural Causal Models by Variational Gaussian
Process Regression Under Likelihoods Parametrised by Normalising Flows [74.85071867225533]
Causal mechanisms can be described by structural causal models.
One major drawback of state-of-the-art artificial intelligence is its lack of explainability.
arXiv Detail & Related papers (2021-09-06T14:52:58Z) - More Causes Less Effect: Destructive Interference in Decision Making [0.0]
We present a new experiment demonstrating destructive interference in customers' estimates of conditional probabilities of product failure.
We show that when combined, the two causes produce the opposite effect.
Such negative interference of two or more reasons may be exploited for better modeling the cognitive processes taking place in the customers' mind.
arXiv Detail & Related papers (2021-06-20T13:34:19Z) - BayesIMP: Uncertainty Quantification for Causal Data Fusion [52.184885680729224]
We study the causal data fusion problem, where datasets pertaining to multiple causal graphs are combined to estimate the average treatment effect of a target variable.
We introduce a framework which combines ideas from probabilistic integration and kernel mean embeddings to represent interventional distributions in the reproducing kernel Hilbert space.
arXiv Detail & Related papers (2021-06-07T10:14:18Z) - Efficient Causal Inference from Combined Observational and
Interventional Data through Causal Reductions [68.6505592770171]
Unobserved confounding is one of the main challenges when estimating causal effects.
We propose a novel causal reduction method that replaces an arbitrary number of possibly high-dimensional latent confounders.
We propose a learning algorithm to estimate the parameterized reduced model jointly from observational and interventional data.
arXiv Detail & Related papers (2021-03-08T14:29:07Z) - Latent Causal Invariant Model [128.7508609492542]
Current supervised learning can learn spurious correlation during the data-fitting process.
We propose a Latent Causal Invariance Model (LaCIM) which pursues causal prediction.
arXiv Detail & Related papers (2020-11-04T10:00:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.