Surprise Minimization Revision Operators
- URL: http://arxiv.org/abs/2111.10896v1
- Date: Sun, 21 Nov 2021 20:38:50 GMT
- Title: Surprise Minimization Revision Operators
- Authors: Adrian Haret
- Abstract summary: We propose a measure of surprise, dubbed relative surprise, in which surprise is computed with respect not only to the prior belief but also to the new information.
We characterize the surprise minimization revision operator thus defined using a set of intuitive postulates in the AGM mould.
- Score: 7.99536002595393
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Prominent approaches to belief revision prescribe the adoption of a new
belief that is as close as possible to the prior belief, in a process that,
even in the standard case, can be described as attempting to minimize surprise.
Here we extend the existing model by proposing a measure of surprise, dubbed
relative surprise, in which surprise is computed with respect not just to the
prior belief, but also to the broader context provided by the new information,
using a measure derived from familiar distance notions between truth-value
assignments. We characterize the surprise minimization revision operator thus
defined using a set of intuitive rationality postulates in the AGM mould, along
the way obtaining representation results for other existing revision operators
in the literature, such as the Dalal operator and a recently introduced
distance-based min-max operator.
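Of the operators mentioned, Dalal's is standard enough to illustrate concretely: it revises a prior belief by keeping exactly those models of the new information at minimum Hamming distance from the models of the prior. A minimal Python sketch follows (the relative-surprise operator defined in the paper is not reproduced here; the helper names and example formulas are illustrative):

```python
from itertools import product

def models(formula, atoms):
    """Enumerate truth-value assignments (as dicts) satisfying `formula`,
    where `formula` is a predicate over an assignment."""
    return [dict(zip(atoms, vals))
            for vals in product([False, True], repeat=len(atoms))
            if formula(dict(zip(atoms, vals)))]

def hamming(v, w):
    """Number of atoms on which two assignments disagree."""
    return sum(v[a] != w[a] for a in v)

def dalal_revise(prior, new_info, atoms):
    """Dalal revision: the models of the new information closest, in
    minimum Hamming distance, to some model of the prior belief.
    Assumes both formulas are satisfiable."""
    k_models = models(prior, atoms)
    dist = {tuple(m.items()): min(hamming(m, k) for k in k_models)
            for m in models(new_info, atoms)}
    d_min = min(dist.values())
    return [dict(m) for m, d in dist.items() if d == d_min]

# Example: the prior believes p and q; the new information says not p.
atoms = ["p", "q"]
prior = lambda v: v["p"] and v["q"]
new_info = lambda v: not v["p"]
print(dalal_revise(prior, new_info, atoms))
# -> [{'p': False, 'q': True}]  (belief in q survives the revision)
```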
Related papers
- Reasoning about unpredicted change and explicit time [10.220888127527152]
Reasoning about unpredicted change consists in explaining observations by events.
Here we propose an approach for explaining time-stamped observations by surprises: simple events consisting of a change in the truth value of a fluent.
arXiv Detail & Related papers (2024-07-09T07:49:57Z)
- Plausible Extractive Rationalization through Semi-Supervised Entailment Signal [33.35604728012685]
We take a semi-supervised approach to optimize for the plausibility of extracted rationales.
We adopt a pre-trained natural language inference (NLI) model and further fine-tune it on a small set of supervised rationales.
We show that, by enforcing the alignment agreement between the explanation and answer in a question-answering task, the performance can be improved without access to ground truth labels.
arXiv Detail & Related papers (2024-02-13T14:12:32Z)
- Rethinking Missing Data: Aleatoric Uncertainty-Aware Recommendation [59.500347564280204]
We propose a new Aleatoric Uncertainty-aware Recommendation (AUR) framework.
AUR consists of a new uncertainty estimator along with a normal recommender model.
Since the chance of mislabeling reflects the potential of a user-item pair, AUR makes recommendations according to the estimated uncertainty.
arXiv Detail & Related papers (2022-09-22T04:32:51Z)
- Marginalized Operators for Off-policy Reinforcement Learning [53.37381513736073]
Marginalized operators strictly generalize generic multi-step operators, recovering operators such as Retrace as special cases.
We show that the estimates for marginalized operators can be computed in a scalable way, which also generalizes prior results on marginalized importance sampling as special cases.
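The paper's marginalized operators are not spelled out in this summary, but the standard Retrace(λ) correction they recover as a special case can be sketched. A minimal numpy illustration, with all array names assumed for the example:

```python
import numpy as np

def retrace_target(q_sa, v_next, rewards, rho, gamma=0.99, lam=1.0):
    """Retrace(lambda) target for the first state-action pair of a
    trajectory.

    q_sa    : Q(s_t, a_t) along the trajectory, shape (T,)
    v_next  : E_{a ~ pi} Q(s_{t+1}, a) along the trajectory, shape (T,)
    rewards : immediate rewards r_t, shape (T,)
    rho     : importance ratios pi(a_t | s_t) / mu(a_t | s_t), shape (T,)
    """
    T = len(rewards)
    # TD errors bootstrapped with the target policy's expected value.
    delta = rewards + gamma * v_next - q_sa
    # Truncated importance weights c_t = lam * min(1, rho_t); the
    # correction at t = 0 carries no trace weight by convention.
    c = lam * np.minimum(1.0, rho)
    target = q_sa[0]
    trace = 1.0
    for t in range(T):
        if t > 0:
            trace *= c[t]
        target += (gamma ** t) * trace * delta[t]
    return target

# Hypothetical 3-step off-policy trajectory.
target = retrace_target(
    q_sa=np.array([1.0, 0.8, 0.5]),
    v_next=np.array([0.9, 0.6, 0.0]),
    rewards=np.array([0.0, 1.0, 0.0]),
    rho=np.array([1.0, 0.5, 2.0]),
)
print(round(float(target), 4))
```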
arXiv Detail & Related papers (2022-03-30T09:59:59Z)
- Approximability and Generalisation [0.0]
We study the role of approximability in learning, both in the full precision and the approximated settings of the predictor.
We show that under mild conditions, approximable target concepts are learnable from a smaller labelled sample.
We give algorithms that guarantee a good predictor whose approximation also enjoys the same generalisation guarantees.
arXiv Detail & Related papers (2022-03-15T15:21:48Z)
- Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness [61.827054365139645]
The Variational Autoencoder (VAE) approximates the posterior over latent variables via amortized variational inference.
We propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space.
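DU-VAE's specific regularizers are not detailed in this summary, but the amortized-inference step it builds on is standard: the encoder outputs a Gaussian posterior q(z|x) = N(mu, sigma^2), sampled via the reparameterization trick, with a closed-form KL term against the N(0, I) prior. A minimal numpy sketch of that background machinery:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, logvar):
    """Sample z ~ N(mu, sigma^2) as z = mu + sigma * eps, eps ~ N(0, I),
    keeping the sample differentiable in (mu, logvar)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    """Closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over latent dims."""
    return -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar), axis=-1)

# Hypothetical encoder outputs: batch of 2 inputs, latent dimension 4.
mu = np.zeros((2, 4))
logvar = np.full((2, 4), -1.0)   # sigma^2 = e^-1, a fairly confident posterior
z = reparameterize(mu, logvar)
print(z.shape, kl_to_standard_normal(mu, logvar))
```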
arXiv Detail & Related papers (2021-10-24T07:58:13Z)
- A Low Rank Promoting Prior for Unsupervised Contrastive Learning [108.91406719395417]
We construct a novel probabilistic graphical model that effectively incorporates the low rank promoting prior into the framework of contrastive learning.
Our hypothesis explicitly requires that all samples belonging to the same instance class lie on the same low-dimensional subspace.
Empirical evidence shows that the proposed algorithm clearly surpasses state-of-the-art approaches on multiple benchmarks.
arXiv Detail & Related papers (2021-08-05T15:58:25Z)
- On the use of evidence theory in belief base revision [0.0]
We propose the idea of credible belief base revision, leading to two new formula-based revision operators.
These operators stem from consistent subbases that are maximal with respect to credibility rather than set inclusion or cardinality.
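The two operators themselves are only named here, but the selection mechanism the summary describes, keeping consistent subbases maximal in credibility, can be sketched. Formulas are represented as predicates over truth-value assignments, each paired with a credibility weight; the names and the aggregation of credibility by summation are assumptions made for illustration:

```python
from itertools import combinations, product

def consistent(formulas, atoms):
    """True if some truth-value assignment satisfies every formula."""
    return any(all(f(dict(zip(atoms, vals))) for f in formulas)
               for vals in product([False, True], repeat=len(atoms)))

def credible_subbases(base, atoms):
    """Consistent subbases maximizing total credibility.

    `base` is a list of (formula, credibility) pairs; credibility is
    aggregated by summation here, one possible choice among several."""
    best, best_cred = [], float("-inf")
    for r in range(len(base), 0, -1):
        for subset in combinations(base, r):
            fs = [f for f, _ in subset]
            cred = sum(c for _, c in subset)
            if consistent(fs, atoms):
                if cred > best_cred:
                    best, best_cred = [subset], cred
                elif cred == best_cred:
                    best.append(subset)
    return best

# Hypothetical base: p (credibility 0.9), not p (0.4), q (0.7).
atoms = ["p", "q"]
base = [(lambda v: v["p"], 0.9),
        (lambda v: not v["p"], 0.4),
        (lambda v: v["q"], 0.7)]
for sb in credible_subbases(base, atoms):
    print([c for _, c in sb])   # -> [0.9, 0.7]: keep p and q, drop not p
```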
arXiv Detail & Related papers (2020-09-24T12:45:32Z)
- Latent Unexpected Recommendations [89.2011481379093]
We propose to model unexpectedness in the latent space of user and item embeddings, which allows us to capture hidden and complex relations between new recommendations and historic purchases.
In addition, we develop a novel Latent Closure (LC) method to construct a hybrid utility function and provide unexpected recommendations based on the proposed model.
arXiv Detail & Related papers (2020-07-27T02:39:30Z)
- Evaluations and Methods for Explanation through Robustness Analysis [117.7235152610957]
We establish a novel set of evaluation criteria for such feature-based explanations via robustness analysis.
We obtain new explanations that are loosely necessary and sufficient for a prediction.
We extend the explanation to extract the set of features that would move the current prediction to a target class.
arXiv Detail & Related papers (2020-05-31T05:52:05Z)