Unifying AI Algorithms with Probabilistic Programming using Implicitly
Defined Representations
- URL: http://arxiv.org/abs/2110.02325v1
- Date: Tue, 5 Oct 2021 19:49:30 GMT
- Authors: Avi Pfeffer, Michael Harradon, Joseph Campolongo, Sanja Cvijic
- Abstract summary: Scruff is a new framework for developing AI systems using probabilistic programming.
It enables a variety of representations to be included, such as code with stochastic choices, neural networks, differential equations, and constraint systems.
We show how a relatively small set of operations can serve to unify a variety of AI algorithms.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce Scruff, a new framework for developing AI systems using
probabilistic programming. Scruff enables a variety of representations to be
included, such as code with stochastic choices, neural networks, differential
equations, and constraint systems. These representations are defined implicitly
using a set of standardized operations that can be performed on them.
General-purpose algorithms are then implemented using these operations,
enabling generalization across different representations. Zero, one, or more
operation implementations can be provided for any given representation, giving
algorithms the flexibility to use the most appropriate available
implementations for their purposes and enabling representations to be used in
ways that suit their capabilities. In this paper, we explain the general
approach of implicitly defined representations and provide a variety of
examples of representations at varying degrees of abstraction. We also show how
a relatively small set of operations can serve to unify a variety of AI
algorithms. Finally, we discuss how algorithms can use policies to choose which
operation implementations to use during execution.
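The idea of implicitly defined representations can be made concrete with a small sketch. The names below are hypothetical and are not the actual Scruff API (Scruff itself is implemented in Julia): each representation is a dictionary of the standardized operations it chooses to implement, a general-purpose algorithm is written purely against operations, and a simple policy selects the most appropriate available implementation.

```python
import random

# Illustrative sketch only, not the Scruff API: a representation is the
# set of standardized operations it implements.

def bernoulli(p):
    """A distribution providing both exact expectation and sampling."""
    return {
        "sample": lambda rng: 1 if rng.random() < p else 0,
        "expectation": lambda: p,
    }

def black_box(f):
    """A simulator providing sampling only (no closed-form expectation)."""
    return {"sample": f}

def mean(rep, n=100_000, seed=0):
    """General-purpose algorithm with a simple policy: use the exact
    'expectation' operation when the representation provides it,
    otherwise fall back to Monte Carlo sampling."""
    if "expectation" in rep:
        return rep["expectation"]()
    rng = random.Random(seed)
    return sum(rep["sample"](rng) for _ in range(n)) / n

exact = mean(bernoulli(0.3))                              # exact: 0.3
approx = mean(black_box(lambda rng: rng.random() < 0.3))  # Monte Carlo estimate
```

Because `mean` touches representations only through operations, the same algorithm works for both representations, and a representation that implements zero operations simply cannot be used by it.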
Related papers
- Representation Synthesis by Probabilistic Many-Valued Logic Operation in Self-Supervised Learning [9.339914898177186]
We propose a new self-supervised learning (SSL) method for representations that enable logic operations.
Our method can generate a representation that has the features of both inputs, or only those features common to both.
Experiments on image retrieval using MNIST and PascalVOC showed that the representations of our method can be operated by OR and AND operations.
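A hypothetical illustration of logic-operable representations (not the paper's actual construction): treat a representation as a vector of feature activations and apply the common many-valued-logic operators max (OR) and min (AND) elementwise, so OR keeps the features of either input while AND keeps only the shared features.

```python
# Made-up feature vectors for illustration; the operators are the usual
# max/min fuzzy-logic OR/AND applied elementwise.

def rep_or(a, b):
    return [max(x, y) for x, y in zip(a, b)]

def rep_and(a, b):
    return [min(x, y) for x, y in zip(a, b)]

img1 = [1.0, 0.0, 0.8, 0.0]  # hypothetical activations of one image
img2 = [1.0, 0.6, 0.0, 0.0]  # hypothetical activations of another

either = rep_or(img1, img2)   # features present in either image
shared = rep_and(img1, img2)  # features common to both images
```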
arXiv Detail & Related papers (2023-09-08T06:24:44Z)
- Multivariate Systemic Risk Measures and Computation by Deep Learning Algorithms [63.03966552670014]
We discuss the key related theoretical aspects, with a particular focus on the fairness properties of primal optima and associated risk allocations.
The algorithms we provide allow for learning primals, optima for the dual representation and corresponding fair risk allocations.
arXiv Detail & Related papers (2023-02-02T22:16:49Z)
- Making Linear MDPs Practical via Contrastive Representation Learning [101.75885788118131]
It is common to address the curse of dimensionality in Markov decision processes (MDPs) by exploiting low-rank representations.
We consider an alternative definition of linear MDPs that automatically ensures normalization while allowing efficient representation learning.
We demonstrate superior performance over existing state-of-the-art model-based and model-free algorithms on several benchmarks.
arXiv Detail & Related papers (2022-07-14T18:18:02Z)
- On Reinforcement Learning, Effect Handlers, and the State Monad [0.0]
We study effects and handlers as a way to support decision-making abstractions in functional programs.
We express the underlying intelligence as a reinforcement learning algorithm implemented as a set of handlers for some of these operations.
We conclude by hinting at how type and effect handlers could ensure safety properties.
arXiv Detail & Related papers (2022-03-29T10:46:58Z)
- Non-Stationary Representation Learning in Sequential Linear Bandits [22.16801879707937]
We study representation learning for multi-task decision-making in non-stationary environments.
We propose an online algorithm that facilitates efficient decision-making by learning and transferring non-stationary representations in an adaptive fashion.
arXiv Detail & Related papers (2022-01-13T06:13:03Z)
- Dynamic programming by polymorphic semiring algebraic shortcut fusion [1.9405875431318445]
Dynamic programming (DP) is an algorithmic design paradigm for the efficient, exact solution of otherwise intractable problems.
This paper presents a rigorous algebraic formalism, based on semirings, for systematically deriving DP algorithms.
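A hedged sketch of the semiring idea (not the paper's derivation framework): a single DP recurrence over a DAG, parameterized by a semiring (plus, times, zero, one). Swapping the semiring changes what the same recurrence computes, from shortest-path cost to path counting.

```python
# One generic DP over a DAG whose nodes 0..n-1 are topologically ordered;
# the semiring is passed in as (plus, times, zero, one).

def dag_dp(edges, n, plus, times, zero, one):
    value = [zero] * n
    value[0] = one                      # start node
    for u in range(n):
        for v, w in edges.get(u, []):
            value[v] = plus(value[v], times(value[u], w))
    return value[n - 1]

# Example DAG: 0 -> 1 -> 3 (costs 2, 4) and 0 -> 2 -> 3 (costs 5, 2)
edges = {0: [(1, 2.0), (2, 5.0)], 1: [(3, 4.0)], 2: [(3, 2.0)]}

# Tropical (min, +) semiring: cost of the cheapest 0 -> 3 path
shortest = dag_dp(edges, 4, min, lambda a, b: a + b, float("inf"), 0.0)

# Counting semiring (+, *) with unit weights: number of distinct paths
unit = {u: [(v, 1) for v, _ in es] for u, es in edges.items()}
n_paths = dag_dp(unit, 4, lambda a, b: a + b, lambda a, b: a * b, 0, 1)
```

The recurrence is written once; only the algebra varies, which is the essence of deriving DP algorithms polymorphically over semirings.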
arXiv Detail & Related papers (2021-07-05T00:51:02Z)
- How Fine-Tuning Allows for Effective Meta-Learning [50.17896588738377]
We present a theoretical framework for analyzing representations derived from a MAML-like algorithm.
We provide risk bounds on the best predictor found by fine-tuning via gradient descent, demonstrating that the algorithm can provably leverage the shared structure.
This result underscores the benefit of fine-tuning-based methods, such as MAML, over methods with "frozen representation" objectives in few-shot learning.
arXiv Detail & Related papers (2021-05-05T17:56:00Z)
- Conditional Meta-Learning of Linear Representations [57.90025697492041]
Standard meta-learning for representation learning aims to find a common representation to be shared across multiple tasks, which can be limiting when the tasks are heterogeneous.
In this work we address this issue by inferring a conditioning function, mapping the tasks' side information into a representation tailored to the task at hand.
We propose a meta-algorithm capable of leveraging this advantage in practice.
arXiv Detail & Related papers (2021-03-30T12:02:14Z)
- Can We Learn Heuristics For Graphical Model Inference Using Reinforcement Learning? [114.24881214319048]
We show that we can learn programs, i.e., policies, for solving inference in higher order Conditional Random Fields (CRFs) using reinforcement learning.
Our method solves inference tasks efficiently without imposing any constraints on the form of the potentials.
arXiv Detail & Related papers (2020-04-27T19:24:04Z)
- Extreme Algorithm Selection With Dyadic Feature Representation [78.13985819417974]
We propose the setting of extreme algorithm selection (XAS) where we consider fixed sets of thousands of candidate algorithms.
We assess the applicability of state-of-the-art AS techniques to the XAS setting and propose approaches leveraging a dyadic feature representation.
arXiv Detail & Related papers (2020-01-29T09:40:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.