Smart Choices and the Selection Monad
- URL: http://arxiv.org/abs/2007.08926v9
- Date: Wed, 19 Apr 2023 07:36:39 GMT
- Title: Smart Choices and the Selection Monad
- Authors: Martin Abadi, Gordon Plotkin
- Abstract summary: Describing systems in terms of choices and their resulting costs and rewards offers the promise of freeing algorithm designers and programmers from specifying how those choices should be made.
We define two small languages that support decision-making abstractions: one with choices and rewards, and the other additionally with probabilities.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Describing systems in terms of choices and their resulting costs and rewards
offers the promise of freeing algorithm designers and programmers from
specifying how those choices should be made; in implementations, the choices
can be realized by optimization techniques and, increasingly, by
machine-learning methods. We study this approach from a programming-language
perspective. We define two small languages that support decision-making
abstractions: one with choices and rewards, and the other additionally with
probabilities. We give both operational and denotational semantics.
In the case of the second language we consider three denotational semantics,
with varying degrees of correlation between possible program values and
expected rewards. The operational semantics combine the usual semantics of
standard constructs with optimization over spaces of possible execution
strategies. The denotational semantics, which are compositional, rely on the
selection monad, to handle choice, augmented with an auxiliary monad to handle
other effects, such as rewards or probability.
We establish adequacy theorems showing that the two semantics coincide in all cases.
We also prove full abstraction at base types, with varying notions of
observation in the probabilistic case corresponding to the various degrees of
correlation. We present axioms for choice combined with rewards and
probability, establishing completeness at base types for the case of rewards
without probability.
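
To make the construction concrete, here is a minimal Haskell sketch of the standard selection monad, with a `choose` construct realized by exhaustive optimization over the alternatives. This is an illustration only, not the paper's formal development: the auxiliary monad for rewards and probability is omitted, and all names (`Sel`, `choose`, `example`) are ours.

```haskell
import Control.Monad (ap, liftM)
import Data.List (maximumBy)
import Data.Ord (comparing)

-- The selection monad over rewards of type r: a computation is given the
-- reward function on final values and uses it to select its own result.
newtype Sel r a = Sel { runSel :: (a -> r) -> a }

instance Functor (Sel r) where
  fmap = liftM

instance Applicative (Sel r) where
  pure x = Sel (const x)
  (<*>)  = ap

instance Monad (Sel r) where
  Sel s >>= f = Sel $ \k ->
    let h x = runSel (f x) k   -- the final value each local choice leads to
    in  h (s (k . h))          -- select by the reward of that final value

-- A choice construct, realized by optimization: pick the alternative whose
-- eventual reward is greatest (assumes a non-empty list of alternatives).
choose :: Ord r => [a] -> Sel r a
choose xs = Sel $ \k -> maximumBy (comparing k) xs

-- Two nested choices; each sees the reward of the whole program's final
-- value, so the optimization is global rather than greedy.
example :: (Int, Int)
example = runSel (do x <- choose [0 .. 3]
                     y <- choose [0 .. 3]
                     pure (x, y))
                 (\(x, y) -> x * y - x)   -- reward; maximized at (3, 3)
```

Roughly, `choose` plays the role of a choice construct and the function passed to `runSel` the role of the program's observable reward; the paper's auxiliary monad for other effects refines this carrier, but that refinement is beyond this sketch.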
Related papers
- Operator-based semantics for choice programs: is choosing losing? (full version) [6.983702226751596]
Only two-valued semantics have been studied so far.
An operator-based framework allows for the definition and comparison of different semantics in a principled way.
arXiv Detail & Related papers (2024-07-31T12:25:57Z)
- dPASP: A Comprehensive Differentiable Probabilistic Answer Set Programming Environment For Neurosymbolic Learning and Reasoning [0.0]
We present dPASP, a novel declarative logic programming framework for differentiable neuro-symbolic reasoning.
We discuss the several semantics for probabilistic logic programs that can express nondeterministic, contradictory, incomplete and/or statistical knowledge.
We then describe an implemented package that supports inference and learning in the language, along with several example programs.
arXiv Detail & Related papers (2023-08-05T19:36:58Z)
- $\omega$PAP Spaces: Reasoning Denotationally About Higher-Order, Recursive Probabilistic and Differentiable Programs [64.25762042361839]
$\omega$PAP spaces are spaces for reasoning denotationally about expressive differentiable and probabilistic programming languages.
Our semantics is general enough to assign meanings to most practical probabilistic and differentiable programs.
We establish the almost-everywhere differentiability of probabilistic programs' trace density functions.
arXiv Detail & Related papers (2023-02-21T12:50:05Z)
- Equivariance with Learned Canonicalization Functions [77.32483958400282]
We show that learning a small neural network to perform canonicalization is better than using predefined heuristics.
Our experiments show that learning the canonicalization function is competitive with existing techniques for learning equivariant functions across many tasks.
arXiv Detail & Related papers (2022-11-11T21:58:15Z)
- An Additive Instance-Wise Approach to Multi-class Model Interpretation [53.87578024052922]
Interpretable machine learning offers insights into what factors drive a certain prediction of a black-box system.
Existing methods mainly focus on selecting explanatory input features, which follow either locally additive or instance-wise approaches.
This work exploits the strengths of both methods and proposes a global framework for learning local explanations simultaneously for multiple target classes.
arXiv Detail & Related papers (2022-07-07T06:50:27Z)
- The Many-Worlds Calculus [0.0]
We propose a colored PROP to model computation in this framework.
The model can support regular tests, probabilistic and non-deterministic branching, as well as quantum branching.
We prove the language to be universal, and the equational theory to be complete with respect to this semantics.
arXiv Detail & Related papers (2022-06-21T10:10:26Z)
- On Reinforcement Learning, Effect Handlers, and the State Monad [0.0]
We study effects and handlers as a way to support decision-making abstractions in functional programs.
We express the underlying intelligence as a reinforcement learning algorithm implemented as a set of handlers for some of these operations (a minimal sketch in this style appears after this list).
We conclude by hinting at how types and effect handlers could ensure safety properties.
arXiv Detail & Related papers (2022-03-29T10:46:58Z)
- Probabilistic Warp Consistency for Weakly-Supervised Semantic Correspondences [118.6018141306409]
We propose Probabilistic Warp Consistency, a weakly-supervised learning objective for semantic matching.
We first construct an image triplet by applying a known warp to one of the images in a pair depicting different instances of the same object class.
Our objective also brings substantial improvements in the strongly-supervised regime, when combined with keypoint annotations.
arXiv Detail & Related papers (2022-03-08T18:55:11Z)
- Infusing Finetuning with Semantic Dependencies [62.37697048781823]
We show that, unlike syntax, semantics is not brought to the surface by today's pretrained models.
We then use convolutional graph encoders to explicitly incorporate semantic parses into task-specific finetuning.
arXiv Detail & Related papers (2020-12-10T01:27:24Z)
- Learning with Differentiable Perturbed Optimizers [54.351317101356614]
We propose a systematic method to transform optimizers into operations that are differentiable and never locally constant.
Our approach relies on perturbations and can be used readily together with existing solvers (a minimal sketch of the idea appears after this list).
We show how this framework can be connected to a family of losses developed in structured prediction, and give theoretical guarantees for their use in learning tasks.
arXiv Detail & Related papers (2020-02-20T11:11:32Z)
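
As promised in the entry on reinforcement learning with effect handlers, here is a minimal free-monad sketch of that style, ours rather than the paper's code: programs issue a `Choose` operation, and a handler supplies the decision-making strategy, here a greedy policy over a value function `q` standing in for whatever an RL algorithm has learned. The names `Prog`, `chooseOp`, and `runGreedy` are all illustrative.

```haskell
import Data.List (maximumBy)
import Data.Ord (comparing)

-- A free monad with a single effect operation, Choose: the program asks
-- for a decision among options and continues with the answer; what makes
-- the decision is deferred entirely to a handler.
data Prog a = Done a
            | Choose [Int] (Int -> Prog a)

instance Functor Prog where
  fmap f (Done a)      = Done (f a)
  fmap f (Choose os k) = Choose os (fmap f . k)

instance Applicative Prog where
  pure = Done
  Done f      <*> p = fmap f p
  Choose os k <*> p = Choose os ((<*> p) . k)

instance Monad Prog where
  Done a      >>= f = f a
  Choose os k >>= f = Choose os ((>>= f) . k)

chooseOp :: [Int] -> Prog Int
chooseOp os = Choose os Done

-- A handler supplying the "intelligence": a greedy policy over a value
-- function q (a stand-in for a learned action-value estimate).
runGreedy :: (Int -> Double) -> Prog a -> a
runGreedy _ (Done a)      = a
runGreedy q (Choose os k) = runGreedy q (k (maximumBy (comparing q) os))

-- runGreedy fromIntegral
--   (do x <- chooseOp [1,2,3]; y <- chooseOp [4,5]; pure (x + y))  ==>  8
```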
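
And for the entry on differentiable perturbed optimizers, the core trick is that a piecewise-constant argmax becomes smooth in expectation once its input is perturbed by noise. Below is a minimal Monte Carlo sketch of that expectation, again ours and not the paper's implementation; `perturbedArgmax` is a hypothetical name, and the noise samples are assumed to be drawn elsewhere.

```haskell
-- One-hot encoding of index i in dimension n.
oneHot :: Int -> Int -> [Double]
oneHot n i = [if j == i then 1 else 0 | j <- [0 .. n - 1]]

-- Index of the largest entry (ties broken towards later indices).
argmaxIx :: [Double] -> Int
argmaxIx xs = snd (maximum (zip xs [0 ..]))

-- Monte Carlo estimate of E[ oneHot (argmax (theta + eps * z)) ] over the
-- given noise samples z (assumes at least one sample): unlike the raw
-- argmax, this average moves smoothly as theta moves, which is what makes
-- the perturbed operation usable as a differentiable layer.
perturbedArgmax :: Double -> [[Double]] -> [Double] -> [Double]
perturbedArgmax eps noise theta = map (/ m) (foldr1 (zipWith (+)) picks)
  where
    n      = length theta
    pick z = oneHot n (argmaxIx (zipWith (\t e -> t + eps * e) theta z))
    picks  = map pick noise
    m      = fromIntegral (length picks)
```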