Adversarial Monte Carlo Meta-Learning of Optimal Prediction Procedures
- URL: http://arxiv.org/abs/2002.11275v2
- Date: Fri, 25 Sep 2020 22:26:02 GMT
- Title: Adversarial Monte Carlo Meta-Learning of Optimal Prediction Procedures
- Authors: Alex Luedtke, Incheoul Chung, Oleg Sofrygin
- Abstract summary: We frame the meta-learning of prediction procedures as a search for an optimal strategy in a two-player game.
In this game, Nature selects a prior over parametric distributions that generate labeled data consisting of features and an associated outcome.
The Predictor's objective is to learn a function that maps from a new feature to an estimate of the associated outcome.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We frame the meta-learning of prediction procedures as a search for an
optimal strategy in a two-player game. In this game, Nature selects a prior
over distributions that generate labeled data consisting of features and an
associated outcome, and the Predictor observes data sampled from a distribution
drawn from this prior. The Predictor's objective is to learn a function that
maps from a new feature to an estimate of the associated outcome. We establish
that, under reasonable conditions, the Predictor has an optimal strategy that
is equivariant to shifts and rescalings of the outcome and is invariant to
permutations of the observations and to shifts, rescalings, and permutations of
the features. We introduce a neural network architecture that satisfies these
properties. The proposed strategy performs favorably compared to standard
practice in both parametric and nonparametric experiments.
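The symmetry properties claimed in the abstract can be illustrated with a minimal sketch. The snippet below is not the paper's neural architecture; it uses plain Nadaraya-Watson kernel regression as a hypothetical stand-in for a prediction procedure that is invariant to permutations of the observations and equivariant to shifts and rescalings of the outcome, and checks both properties numerically.

```python
import numpy as np

def predict(X, y, x_new, bandwidth=1.0):
    """Nadaraya-Watson kernel regression: an illustrative predictor that is
    permutation-invariant in the observations and shift/rescale-equivariant
    in the outcome (a stand-in, not the paper's architecture)."""
    d = np.linalg.norm(X - x_new, axis=1)
    w = np.exp(-(d / bandwidth) ** 2)
    return np.sum(w * y) / np.sum(w)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=20)
x_new = rng.normal(size=3)

base = predict(X, y, x_new)

# Permutation invariance: shuffling the observations leaves the prediction unchanged.
perm = rng.permutation(20)
assert np.isclose(predict(X[perm], y[perm], x_new), base)

# Outcome equivariance: training on a*y + b yields a*prediction + b.
a, b = 3.0, -1.5
assert np.isclose(predict(X, a * y + b, x_new), a * base + b)
```

The paper's contribution is a neural network architecture that satisfies these constraints by construction, so that the learned procedure cannot waste capacity on symmetry-breaking behavior.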
Related papers
- An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to capture potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z) - Optimizing accuracy and diversity: a multi-task approach to forecast combinations [0.0]
We present a multi-task optimization paradigm that focuses on solving both problems simultaneously.
It incorporates an additional learning and optimization task into the standard feature-based forecasting approach.
The proposed approach elicits the essential role of diversity in feature-based forecasting.
arXiv Detail & Related papers (2023-10-31T15:26:33Z) - Structured Radial Basis Function Network: Modelling Diversity for Multiple Hypotheses Prediction [51.82628081279621]
Multi-modal regression is important in forecasting nonstationary processes or with a complex mixture of distributions.
A Structured Radial Basis Function Network is presented as an ensemble of multiple hypotheses predictors for regression problems.
It is proved that this structured model can efficiently interpolate the resulting tessellation and approximate the multiple-hypotheses target distribution.
arXiv Detail & Related papers (2023-09-02T01:27:53Z) - Bayesian Experimental Design for Symbolic Discovery [12.855710007840479]
We apply constrained first-order methods to optimize an appropriate selection criterion, using Hamiltonian Monte Carlo to sample from the prior.
The predictive distribution, which involves a convolution, is computed via either numerical integration or fast transform methods.
arXiv Detail & Related papers (2022-11-29T01:25:29Z) - MARS: Meta-Learning as Score Matching in the Function Space [79.73213540203389]
We present a novel approach to extracting inductive biases from a set of related datasets.
We use functional Bayesian neural network inference, which views the prior as a process and performs inference in the function space.
Our approach can seamlessly acquire and represent complex prior knowledge by meta-learning the score function of the data-generating process.
arXiv Detail & Related papers (2022-10-24T15:14:26Z) - Optimizing model-agnostic Random Subspace ensembles [5.680512932725364]
We present a model-agnostic ensemble approach for supervised learning.
The proposed approach learns an ensemble of models using a parametric version of the Random Subspace approach.
We show the good performance of the proposed approach, both in terms of prediction and feature ranking, on simulated and real-world datasets.
arXiv Detail & Related papers (2021-09-07T13:58:23Z) - Adaptive Sequential Design for a Single Time-Series [2.578242050187029]
We learn an optimal, unknown choice of the controlled components of a design in order to optimize the expected outcome.
We adapt the randomization mechanism for future time-point experiments based on the data collected on the individual over time.
arXiv Detail & Related papers (2021-01-29T22:51:45Z) - An AI-Assisted Design Method for Topology Optimization Without Pre-Optimized Training Data [68.8204255655161]
An AI-assisted design method based on topology optimization is presented, which is able to obtain optimized designs in a direct way.
Designs are provided by an artificial neural network, the predictor, on the basis of boundary conditions and degree of filling as input data.
arXiv Detail & Related papers (2020-12-11T14:33:27Z) - Robust, Accurate Stochastic Optimization for Variational Inference [68.83746081733464]
We show that common optimization methods lead to poor variational approximations if the problem is moderately large.
Motivated by these findings, we develop a more robust and accurate optimization framework by viewing the underlying algorithm as producing a Markov chain.
arXiv Detail & Related papers (2020-09-01T19:12:11Z) - A Hybrid Two-layer Feature Selection Method Using Genetic Algorithm and Elastic Net [6.85316573653194]
This paper presents a new hybrid two-layer feature selection approach that combines a wrapper and an embedded method.
The Genetic Algorithm(GA) has been adopted as a wrapper to search for the optimal subset of predictors.
A second layer is added to the proposed method to eliminate any remaining redundant/irrelevant predictors to improve the prediction accuracy.
arXiv Detail & Related papers (2020-01-30T05:01:30Z) - Supervised Learning for Non-Sequential Data: A Canonical Polyadic Decomposition Approach [85.12934750565971]
Efficient modelling of feature interactions underpins supervised learning for non-sequential tasks.
To alleviate this issue, it has been proposed to implicitly represent the model parameters as a tensor.
For enhanced expressiveness, we generalize the framework to allow feature mapping to arbitrarily high-dimensional feature vectors.
arXiv Detail & Related papers (2020-01-27T22:38:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.