Contracting and Involutive Negations of Probability Distributions
- URL: http://arxiv.org/abs/2103.16176v1
- Date: Tue, 30 Mar 2021 08:58:08 GMT
- Title: Contracting and Involutive Negations of Probability Distributions
- Authors: Ildar Batyrshin
- Abstract summary: We show that the Yager negator plays a crucial role in the definition of pd-independent linear negators.
We introduce an involutive negator in the class of pd-dependent negators.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A dozen papers have considered the concept of negation of probability
distributions (pd) introduced by Yager. Usually, such negations are generated
point-by-point by functions defined on a set of probability values and called
here negators. Recently it was shown that the Yager negator plays a crucial role
in the definition of pd-independent linear negators: any linear negator is a
function of the Yager negator. Here, we prove that the sequence of multiple
negations of pd generated by a linear negator converges to the uniform
distribution with maximal entropy. We show that any pd-independent negator is
non-involutive, and any non-trivial linear negator is strictly contracting.
Finally, we introduce an involutive negator in the class of pd-dependent
negators that generates an involutive negation of probability distributions.
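The convergence result stated in the abstract can be illustrated with a short sketch. The formula below, N(p_i) = (1 - p_i)/(n - 1), is the standard form of the Yager negator from the literature the abstract builds on (an assumption here, since the abstract itself gives no formula); iterating it on any pd over n >= 3 outcomes contracts toward the uniform distribution, its fixed point.

```python
def yager_negation(p):
    """Point-by-point Yager negation of a probability distribution p.

    Each p_i is mapped to (1 - p_i) / (n - 1); the result again sums to 1,
    since sum_i (1 - p_i) / (n - 1) = (n - 1) / (n - 1) = 1.
    """
    n = len(p)
    return [(1.0 - pi) / (n - 1) for pi in p]

# Repeated negation converges to the uniform distribution 1/n, the unique
# fixed point of this (strictly contracting, for n >= 3) map.
p = [0.7, 0.2, 0.1]
for _ in range(50):
    p = yager_negation(p)
# p is now approximately [1/3, 1/3, 1/3]
```

Note that for n = 2 the map is p -> 1 - p, which is involutive rather than contracting; the contraction behavior described in the abstract concerns n >= 3.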
Related papers
- Positive operator-valued kernels and non-commutative probability [0.0]
We prove new factorization and dilation results for general positive operator-valued kernels.
We present their implications for associated Hilbert space-valued Gaussian processes.
arXiv Detail & Related papers (2024-05-15T13:16:11Z) - Negative Probability [0.6906005491572398]
Negative probabilities arise primarily in quantum theory and computing.
Negative probabilities arise as mixing distributions of unobserved latent variables in Bayesian modeling.
Examples of dual densities with negative mixing measures are provided.
arXiv Detail & Related papers (2024-05-05T20:09:49Z) - Contrastive Learning with Negative Sampling Correction [52.990001829393506]
We propose a novel contrastive learning method named Positive-Unlabeled Contrastive Learning (PUCL)
PUCL treats the generated negative samples as unlabeled samples and uses information from positive samples to correct bias in contrastive loss.
PUCL can be applied to general contrastive learning problems and outperforms state-of-the-art methods on various image and graph classification tasks.
arXiv Detail & Related papers (2024-01-13T11:18:18Z) - Your Negative May not Be True Negative: Boosting Image-Text Matching
with False Negative Elimination [62.18768931714238]
We propose a novel False Negative Elimination (FNE) strategy to select negatives via sampling.
The results demonstrate the superiority of our proposed false negative elimination strategy.
arXiv Detail & Related papers (2023-08-08T16:31:43Z) - SimANS: Simple Ambiguous Negatives Sampling for Dense Text Retrieval [126.22182758461244]
We show that according to the measured relevance scores, the negatives ranked around the positives are generally more informative and less likely to be false negatives.
We propose a simple ambiguous negatives sampling method, SimANS, which incorporates a new sampling probability distribution to sample more ambiguous negatives.
arXiv Detail & Related papers (2022-10-21T07:18:05Z) - Debiased Contrastive Learning of Unsupervised Sentence Representations [88.58117410398759]
Contrastive learning is effective in improving pre-trained language models (PLM) to derive high-quality sentence representations.
Previous works mostly adopt in-batch negatives or sample from training data at random.
We present a new framework, DCLR, to alleviate the influence of these improper negatives.
arXiv Detail & Related papers (2022-05-02T05:07:43Z) - Investigating the Role of Negatives in Contrastive Representation
Learning [59.30700308648194]
Noise contrastive learning is a popular technique for unsupervised representation learning.
We focus on disambiguating the role of one of these parameters: the number of negative examples.
We find that the results broadly agree with our theory, while our vision experiments are murkier with performance sometimes even being insensitive to the number of negatives.
arXiv Detail & Related papers (2021-06-18T06:44:16Z) - Generating Negations of Probability Distributions [0.0]
We consider negations of probability distributions as point-by-point transformations of pd.
We give a characterization of linear negators as a convex combination of Yager and uniform negators.
arXiv Detail & Related papers (2021-03-27T20:24:10Z) - Positive-Congruent Training: Towards Regression-Free Model Updates [87.25247195148187]
In image classification, sample-wise inconsistencies appear as "negative flips":
the new model incorrectly predicts the output for a test sample that was correctly classified by the old (reference) model.
We propose a simple approach for PC training, Focal Distillation, which enforces congruence with the reference model.
arXiv Detail & Related papers (2020-11-18T09:00:44Z) - Exponential Negation of a Probability Distribution [7.895866278697778]
The proposed negation can be seen as a kind of geometric negation.
The number of iterations of convergence is inversely proportional to the number of elements in the distribution.
arXiv Detail & Related papers (2020-10-22T08:46:51Z) - On Negative Transfer and Structure of Latent Functions in Multi-output
Gaussian Processes [2.538209532048867]
In this article, we first define negative transfer in the context of a multi-output Gaussian process (MGP) and then derive necessary conditions for an MGP model to avoid negative transfer.
We show that avoiding negative transfer is mainly dependent on having a sufficient number of latent functions $Q$.
We propose two latent structures that scale to arbitrarily large datasets, can avoid negative transfer and allow any kernel or sparse approximations to be used within.
arXiv Detail & Related papers (2020-04-06T02:47:30Z)
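Two of the negations in the list above admit compact closed forms. Per the "Generating Negations of Probability Distributions" entry, every linear negator is a convex combination of the Yager and uniform negators; the exponential form below is an assumption inferred from the title of "Exponential Negation of a Probability Distribution" (a softmax of -p), not a formula given in the summaries. A hedged Python sketch:

```python
import math

def linear_negation(p, alpha):
    """Linear negator as a convex combination (weight alpha in [0, 1]) of the
    Yager negator (1 - p_i)/(n - 1) and the uniform negator 1/n.
    alpha = 1 recovers the Yager negation; alpha = 0 maps any pd to uniform.
    """
    n = len(p)
    return [alpha * (1.0 - pi) / (n - 1) + (1.0 - alpha) / n for pi in p]

def exponential_negation(p):
    """Assumed exponential ('geometric') negation: p_i -> exp(-p_i), renormalized."""
    w = [math.exp(-pi) for pi in p]
    z = sum(w)
    return [wi / z for wi in w]

p = [0.5, 0.3, 0.2]
q = linear_negation(p, 0.5)   # still a pd: entries sum to 1
r = exponential_negation(p)   # also a pd, with the order of the values reversed
```

In both cases the output remains a probability distribution, and larger input probabilities map to smaller output probabilities, which is the defining behavior of a negation of a pd.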
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.