Bayesian Learning for Dynamic Inference
- URL: http://arxiv.org/abs/2301.00032v1
- Date: Fri, 30 Dec 2022 19:16:23 GMT
- Title: Bayesian Learning for Dynamic Inference
- Authors: Aolin Xu, Peng Guan
- Abstract summary: In some sequential estimation problems, the future values of the quantity to be estimated depend on the estimate of its current value.
We formulate the Bayesian learning problem for dynamic inference, where the unknown quantity-generation model is assumed to be randomly drawn.
We derive the optimal Bayesian learning rules, both offline and online, to minimize the inference loss.
- Score: 2.2843885788439793
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The traditional statistical inference is static, in the sense that the
estimate of the quantity of interest does not affect the future evolution of
the quantity. In some sequential estimation problems, however, the future values
of the quantity to be estimated depend on the estimate of its current value.
This type of estimation problem has been formulated as the dynamic inference
problem. In this work, we formulate the Bayesian learning problem for dynamic
inference, where the unknown quantity-generation model is assumed to be
randomly drawn according to a random model parameter. We derive the optimal
Bayesian learning rules, both offline and online, to minimize the inference
loss. Moreover, learning for dynamic inference can serve as a meta problem,
such that all familiar machine learning problems, including supervised
learning, imitation learning and reinforcement learning, can be cast as its
special cases or variants. Gaining a good understanding of this unifying meta
problem thus sheds light on a broad spectrum of machine learning problems as
well.
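The feedback structure described in the abstract can be illustrated with a small simulation: a quantity whose next value depends on the learner's current Bayesian estimate. The sketch below is a hypothetical linear-Gaussian instance with illustrative coefficients (`a`, `b`) and noise variances (`q`, `r`), not the paper's general formulation; in this special case a Kalman-style update stands in for the optimal Bayesian learning rule.

```python
import numpy as np

# Hypothetical linear-Gaussian sketch of dynamic inference: the next value
# of the quantity depends on the current *estimate*, so the learner's
# output feeds back into the dynamics. All coefficients are illustrative
# assumptions, not taken from the paper.
rng = np.random.default_rng(0)
a, b = 0.9, 0.3      # state coefficient and estimate-feedback coefficient
q, r = 0.1, 0.5      # process and observation noise variances

x = 0.0              # true quantity
mu, var = 0.0, 1.0   # Gaussian posterior over x (mean, variance)

losses = []
for t in range(200):
    # Observe the quantity through Gaussian noise
    y = x + rng.normal(scale=np.sqrt(r))
    # Bayesian (Kalman-style) update of the posterior given y
    gain = var / (var + r)
    mu = mu + gain * (y - mu)
    var = (1.0 - gain) * var
    losses.append((x - mu) ** 2)  # squared-error inference loss
    # The estimate mu enters the dynamics of the quantity itself
    x = a * x + b * mu + rng.normal(scale=np.sqrt(q))
    # Predict step: the learner knows its own estimate, so the feedback
    # term can be propagated forward exactly
    mu = (a + b) * mu
    var = a ** 2 * var + q

mean_loss = float(np.mean(losses))
print(f"average inference loss over 200 steps: {mean_loss:.3f}")
```

Because the learner knows its own past estimates, the feedback term `b * mu` can be propagated exactly in the predict step; this is what distinguishes dynamic inference from static filtering, where the dynamics are independent of the estimates.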
Related papers
- Unified Explanations in Machine Learning Models: A Perturbation Approach [0.0]
Inconsistencies between XAI and modeling techniques can have the undesirable effect of casting doubt upon the efficacy of these explainability approaches.
We propose a systematic, perturbation-based analysis of a popular, model-agnostic method in XAI, SHapley Additive exPlanations (SHAP).
We devise algorithms to generate relative feature importance in settings of dynamic inference across a suite of popular machine learning and deep learning methods, along with metrics that quantify how well explanations generated under the static case hold.
arXiv Detail & Related papers (2024-05-30T16:04:35Z)
- A Mathematical Model of the Hidden Feedback Loop Effect in Machine Learning Systems [44.99833362998488]
We introduce a repeated learning process to jointly describe several phenomena attributed to unintended hidden feedback loops.
A distinctive feature of such repeated learning setting is that the state of the environment becomes causally dependent on the learner itself over time.
We present a novel dynamical systems model of the repeated learning process and prove the limiting set of probability distributions for positive and negative feedback loop modes.
arXiv Detail & Related papers (2024-05-04T17:57:24Z) - Loss Dynamics of Temporal Difference Reinforcement Learning [36.772501199987076]
We study the learning curves for temporal difference learning of a value function with linear function approximators.
We study how learning dynamics and plateaus depend on feature structure, learning rate, discount factor, and reward function.
arXiv Detail & Related papers (2023-07-10T18:17:50Z) - Resilient Constrained Learning [94.27081585149836]
This paper presents a constrained learning approach that adapts the requirements while simultaneously solving the learning task.
We call this approach resilient constrained learning after the term used to describe ecological systems that adapt to disruptions by modifying their operation.
arXiv Detail & Related papers (2023-06-04T18:14:18Z) - Reinforcement Learning in System Identification [0.0]
System identification, also known as learning forward models, transfer functions, system dynamics, etc., has a long tradition both in science and engineering.
Here we explore the use of Reinforcement Learning in this problem.
We elaborate on why and how this problem fits naturally and soundly as a Reinforcement Learning problem, and present experimental results that demonstrate RL is a promising technique for solving this kind of problem.
arXiv Detail & Related papers (2022-12-14T09:20:42Z) - Dynamic Inference [4.568777157687959]
In some sequential estimation problems, the future values of the quantity to be estimated depend on the estimate of its current value.
Examples include stock price prediction by big investors, interactive product recommendation, and behavior prediction in multi-agent systems.
In this work, a formulation of this problem under a Bayesian probabilistic framework is given, and the optimal estimation strategy is derived as the solution to minimize the overall inference loss.
arXiv Detail & Related papers (2021-11-29T17:50:22Z) - Stateful Offline Contextual Policy Evaluation and Learning [88.9134799076718]
We study off-policy evaluation and learning from sequential data.
We formalize the relevant causal structure of problems such as dynamic personalized pricing.
We show improved out-of-sample policy performance in this class of relevant problems.
arXiv Detail & Related papers (2021-10-19T16:15:56Z) - BayesIMP: Uncertainty Quantification for Causal Data Fusion [52.184885680729224]
We study the causal data fusion problem, where datasets pertaining to multiple causal graphs are combined to estimate the average treatment effect of a target variable.
We introduce a framework which combines ideas from probabilistic integration and kernel mean embeddings to represent interventional distributions in the reproducing kernel Hilbert space.
arXiv Detail & Related papers (2021-06-07T10:14:18Z) - Exploring Bayesian Deep Learning for Urgent Instructor Intervention Need
in MOOC Forums [58.221459787471254]
Massive Open Online Courses (MOOCs) have become a popular choice for e-learning thanks to their great flexibility.
Due to large numbers of learners and their diverse backgrounds, it is taxing to offer real-time support.
With the large volume of posts and high workloads for MOOC instructors, it is unlikely that the instructors can identify all learners requiring intervention.
This paper explores for the first time Bayesian deep learning on learner-based text posts with two methods: Monte Carlo Dropout and Variational Inference.
arXiv Detail & Related papers (2021-04-26T15:12:13Z)
- Knowledge as Invariance -- History and Perspectives of Knowledge-augmented Machine Learning [69.99522650448213]
Research in machine learning is at a turning point.
Research interests are shifting away from increasing the performance of highly parameterized models on exceedingly specific tasks.
This white paper provides an introduction and discussion of this emerging field in machine learning research.
arXiv Detail & Related papers (2020-12-21T15:07:19Z)
- Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples [84.8370546614042]
The black-box nature of Deep Learning models has posed unanswered questions about what they learn from data.
Generative Adversarial Network (GAN) and multi-objectives are used to furnish a plausible attack to the audited model.
Its utility is showcased within a human face classification task, unveiling the enormous potential of the proposed framework.
arXiv Detail & Related papers (2020-03-25T11:08:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.