Reasoning about unpredicted change and explicit time
- URL: http://arxiv.org/abs/2407.06622v1
- Date: Tue, 9 Jul 2024 07:49:57 GMT
- Title: Reasoning about unpredicted change and explicit time
- Authors: Florence Dupin de Saint-Cyr, Jérôme Lang
- Abstract summary: Reasoning about unpredicted change consists of explaining observations by events.
We propose an approach for explaining time-stamped observations by surprises, which are simple events in which the truth value of a fluent changes.
- Score: 10.220888127527152
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reasoning about unpredicted change consists of explaining observations by events; we propose an approach for explaining time-stamped observations by surprises, which are simple events in which the truth value of a fluent changes. A framework for dealing with surprises is defined. Minimal sets of surprises are provided, together with the time intervals in which each surprise occurred, and they are characterized from a model-based diagnosis point of view. Finally, a probabilistic approach to surprise minimisation is proposed.
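The core idea of the abstract, localizing a surprise (a flip in a fluent's truth value) to a time interval between observations, can be sketched as follows. `surprise_intervals` is a hypothetical helper illustrating the concept, not code from the paper:

```python
def surprise_intervals(observations):
    """Given a time-ordered list of (timestamp, truth_value) observations of
    a single fluent, return the intervals (t_i, t_j) between consecutive
    observations whose truth values differ. Each returned interval must
    contain at least one surprise: the fluent flipped somewhere inside it."""
    intervals = []
    for (t1, v1), (t2, v2) in zip(observations, observations[1:]):
        if v1 != v2:
            # The fluent was v1 at t1 and v2 at t2, so a change of truth
            # value occurred strictly between the two timestamps.
            intervals.append((t1, t2))
    return intervals
```

With observations of a fluent at times 0, 3, 5, and 9, a flip between the observations at 3 and 5 is reported as the interval (3, 5); the paper's framework goes further by selecting minimal sets of such surprises across many fluents.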
Related papers
- Classical Statistical (In-Sample) Intuitions Don't Generalize Well: A Note on Bias-Variance Tradeoffs, Overfitting and Moving from Fixed to Random Designs [11.893324664457548]
We show that there is another reason why we observe behaviors today that appear at odds with intuitions taught in classical statistics textbooks.
We highlight that this simple move from fixed to random designs has far-reaching consequences on textbook intuitions.
arXiv Detail & Related papers (2024-09-27T15:36:24Z) - Combination of Weak Learners eXplanations to Improve Random Forest eXplicability Robustness [0.0]
The notion of robustness in XAI refers to the observed variations in the explanation of the prediction of a learned model.
We argue that combining weak learners' explanations through discriminative averaging can improve the robustness of explanations in ensemble methods.
arXiv Detail & Related papers (2024-02-29T10:37:40Z) - Tracking Changing Probabilities via Dynamic Learners [0.18648070031379424]
We develop sparse multiclass moving average techniques to respond to non-stationarities in a timely manner.
One technique is based on the exponentiated moving average (EMA) and another is based on queuing a few count snapshots.
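The general EMA idea mentioned in this entry can be sketched as a tracker of a drifting multiclass distribution; this is a minimal illustration of an exponentiated moving average, not the paper's exact algorithm, and `EMATracker` is a hypothetical name:

```python
class EMATracker:
    """Track a multiclass probability distribution that may change over time
    using an exponentiated moving average: each observation pulls the
    estimate a fraction alpha toward the observed class."""

    def __init__(self, classes, alpha=0.05):
        self.alpha = alpha
        # Start from a uniform distribution over the known classes.
        self.p = {c: 1.0 / len(classes) for c in classes}

    def update(self, observed):
        # Decay every class estimate and add mass to the observed class;
        # the estimates remain a probability distribution (they sum to 1).
        for c in self.p:
            target = 1.0 if c == observed else 0.0
            self.p[c] = (1 - self.alpha) * self.p[c] + self.alpha * target

    def estimate(self):
        return dict(self.p)
```

A larger alpha responds faster to non-stationarity at the cost of noisier estimates, which is the timeliness trade-off the entry alludes to.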
arXiv Detail & Related papers (2024-02-15T17:48:58Z) - Performative Time-Series Forecasting [71.18553214204978]
We formalize performative time-series forecasting (PeTS) from a machine-learning perspective.
We propose a novel approach, Feature Performative-Shifting (FPS), which leverages the concept of delayed response to anticipate distribution shifts.
We conduct comprehensive experiments using multiple time-series models on COVID-19 and traffic forecasting tasks.
arXiv Detail & Related papers (2023-10-09T18:34:29Z) - Ensembled Prediction Intervals for Causal Outcomes Under Hidden Confounding [49.1865229301561]
We present a simple approach to partial identification using existing causal sensitivity models and show empirically that Caus-Modens gives tighter outcome intervals.
The last of our three diverse benchmarks is a novel usage of GPT-4 for observational experiments with unknown but probeable ground truth.
arXiv Detail & Related papers (2023-06-15T21:42:40Z) - Hybrid Predictive Coding: Inferring, Fast and Slow [62.997667081978825]
We propose a hybrid predictive coding network that combines both iterative and amortized inference in a principled manner.
We demonstrate that our model is inherently sensitive to its uncertainty and adaptively balances iterative and amortized inference to obtain accurate beliefs at minimum computational expense.
arXiv Detail & Related papers (2022-04-05T12:52:45Z) - Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z) - Surprise Minimization Revision Operators [7.99536002595393]
We propose a measure of surprise, dubbed relative surprise, in which surprise is computed with respect to the prior belief.
We characterize the surprise minimization revision operator thus defined using a set of intuitive postulates in the AGM mould.
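The general notion of quantifying surprise against a prior can be illustrated with Shannon surprisal; note this is a generic information-theoretic sketch, not the paper's definition of relative surprise:

```python
import math

def surprisal(p):
    """Shannon surprisal -log2(p): how unexpected an event of probability
    p is, in bits. Lower-probability events are more surprising."""
    return -math.log2(p)

def relative_surprise(p_prior, p_posterior):
    """An illustrative relative measure: how much less surprising an
    observation becomes once the prior belief is revised. This is an
    assumption for illustration, not the operator defined in the paper."""
    return surprisal(p_prior) - surprisal(p_posterior)
```

For instance, an observation assigned probability 0.25 under the prior and 0.5 after revision yields a relative surprise of one bit.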
arXiv Detail & Related papers (2021-11-21T20:38:50Z) - The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal Sufficient Subsets [61.66584140190247]
We show that feature-based explanations pose problems even for explaining trivial models.
We show that two popular classes of explainers, Shapley explainers and minimal sufficient subsets explainers, target fundamentally different types of ground-truth explanations.
arXiv Detail & Related papers (2020-09-23T09:45:23Z) - Learning Disentangled Representations with Latent Variation Predictability [102.4163768995288]
This paper defines the variation predictability of latent disentangled representations.
Within an adversarial generation process, we encourage variation predictability by maximizing the mutual information between latent variations and corresponding image pairs.
We develop an evaluation metric that does not rely on the ground-truth generative factors to measure the disentanglement of latent representations.
arXiv Detail & Related papers (2020-07-25T08:54:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.