Representations of epistemic uncertainty and awareness in data-driven
strategies
- URL: http://arxiv.org/abs/2110.11482v7
- Date: Sun, 19 Nov 2023 15:00:11 GMT
- Title: Representations of epistemic uncertainty and awareness in data-driven
strategies
- Authors: Mario Angelelli, Massimiliano Gervasi
- Abstract summary: We present a theoretical model for uncertainty in knowledge representation and its transfer mediated by agents.
We look at inequivalent knowledge representations in terms of inferences, preference relations, and information measures.
We discuss some implications of the proposed model for data-driven strategies.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The diffusion of AI and big data is reshaping decision-making processes by
increasing the amount of information that supports decisions while reducing
direct interaction with data and empirical evidence. This paradigm shift
introduces new sources of uncertainty, as limited data observability results in
ambiguity and a lack of interpretability. The need for the proper analysis of
data-driven strategies motivates the search for new models that can describe
this type of bounded access to knowledge. This contribution presents a novel
theoretical model for uncertainty in knowledge representation and its transfer
mediated by agents. We provide a dynamical description of knowledge states by
endowing our model with a structure to compare and combine them. Specifically,
an update is represented through combinations, and its explainability is based
on its consistency in different dimensional representations. We look at
inequivalent knowledge representations in terms of multiplicity of inferences,
preference relations, and information measures. Furthermore, we define a formal
analogy with two scenarios that illustrate non-classical uncertainty in terms
of ambiguity (Ellsberg's model) and reasoning about knowledge mediated by other
agents observing data (Wigner's friend). Finally, we discuss some implications
of the proposed model for data-driven strategies, with special attention to
reasoning under uncertainty about business value dimensions and the design of
measurement tools for their assessment.
Related papers
- A Review of Bayesian Uncertainty Quantification in Deep Probabilistic Image Segmentation [0.0]
Advancements in image segmentation play an integral role in deep learning-based computer vision.
Uncertainty quantification has been extensively studied in this context, enabling the expression of model ignorance (epistemic uncertainty) or data ambiguity (aleatoric uncertainty) to prevent uninformed decision-making.
This work provides a comprehensive overview of probabilistic segmentation by discussing fundamental concepts in uncertainty that govern advancements in the field and the application to various tasks.
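The epistemic/aleatoric split mentioned above can be made concrete with the standard entropy decomposition over an ensemble of predictive distributions; the following is an illustrative sketch (not taken from the reviewed paper), where the toy two-member ensembles are assumptions:

```python
import math

def entropy(p):
    """Shannon entropy (in nats) of a discrete distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def decompose_uncertainty(member_preds):
    """Split total predictive uncertainty into aleatoric and epistemic
    parts via the entropy decomposition over ensemble members."""
    n = len(member_preds)
    k = len(member_preds[0])
    mean_pred = [sum(p[c] for p in member_preds) / n for c in range(k)]
    total = entropy(mean_pred)                             # predictive entropy
    aleatoric = sum(entropy(p) for p in member_preds) / n  # expected entropy
    epistemic = total - aleatoric                          # mutual information
    return total, aleatoric, epistemic

# Members that agree but are unsure -> uncertainty is mostly aleatoric;
# members that disagree confidently -> uncertainty is mostly epistemic.
agree = [[0.5, 0.5], [0.5, 0.5]]
disagree = [[0.99, 0.01], [0.01, 0.99]]
```

Both toy ensembles have the same total predictive entropy, but the decomposition attributes it to data ambiguity in the first case and to model ignorance in the second.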
arXiv Detail & Related papers (2024-11-25T13:26:09Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Incorporating Domain Knowledge in Deep Neural Networks for Discrete Choice Models [0.5801044612920815]
This paper proposes a framework that expands the potential of data-driven approaches for DCM.
It includes pseudo data samples that represent required relationships and a loss function that measures their fulfillment.
A case study demonstrates the potential of this framework for discrete choice analysis.
arXiv Detail & Related papers (2023-05-30T12:53:55Z)
- Variational Distillation for Multi-View Learning [104.17551354374821]
We design several variational information bottlenecks to exploit two key characteristics for multi-view representation learning.
Under rigorous theoretical guarantees, our approach enables the IB to capture the intrinsic correlation between observations and semantic labels.
arXiv Detail & Related papers (2022-06-20T03:09:46Z)
- Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should regard several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z)
- Are Metrics Enough? Guidelines for Communicating and Visualizing Predictive Models to Subject Matter Experts [7.768301998812552]
We describe an iterative study conducted with both subject matter experts and data scientists to understand the gaps in communication.
We derive a set of communication guidelines that use visualization as a common medium for communicating the strengths and weaknesses of a model.
arXiv Detail & Related papers (2022-05-11T19:40:24Z)
- Towards Robust and Adaptive Motion Forecasting: A Causal Representation Perspective [72.55093886515824]
We introduce a causal formalism of motion forecasting, which casts the problem as a dynamic process with three groups of latent variables.
We devise a modular architecture that factorizes the representations of invariant mechanisms and style confounders to approximate a causal graph.
Experiment results on synthetic and real datasets show that our three proposed components significantly improve the robustness and reusability of the learned motion representations.
arXiv Detail & Related papers (2021-11-29T18:59:09Z)
- Discriminative Multimodal Learning via Conditional Priors in Generative Models [21.166519800652047]
This research studies the realistic scenario in which all modalities and class labels are available for model training.
We show, in this scenario, that the variational lower bound limits mutual information between joint representations and missing modalities.
arXiv Detail & Related papers (2021-10-09T17:22:24Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.