Truncated Inference for Latent Variable Optimization Problems:
Application to Robust Estimation and Learning
- URL: http://arxiv.org/abs/2003.05886v1
- Date: Thu, 12 Mar 2020 16:32:06 GMT
- Title: Truncated Inference for Latent Variable Optimization Problems:
Application to Robust Estimation and Learning
- Authors: Christopher Zach, Huu Le
- Abstract summary: We propose two formally justified methods to remove the need to maintain the latent variables.
These methods have applications in large scale robust estimation and in learning energy-based models from labeled data.
- Score: 32.08441889054456
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optimization problems with an auxiliary latent variable structure in addition
to the main model parameters occur frequently in computer vision and machine
learning. The additional latent variables make the underlying optimization task
expensive, either in terms of memory (by maintaining the latent variables), or
in terms of runtime (repeated exact inference of latent variables). We aim to
remove the need to maintain the latent variables and propose two formally
justified methods that dynamically adapt the required accuracy of latent
variable inference. These methods have applications in large scale robust
estimation and in learning energy-based models from labeled data.
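To make the idea concrete, here is a minimal sketch of truncated latent variable inference in a robust-estimation setting. It assumes a linear least-squares model lifted with per-residual Geman-McClure weights; the latent weights are re-initialized and pushed only a few damped fixed-point steps toward their optimum at every outer iteration instead of being stored or inferred exactly. The function names, the kernel choice, and the fixed truncation schedule are illustrative assumptions, not the authors' algorithm (which adapts the inference accuracy dynamically).

```python
import numpy as np

def gm_weight(r, sigma=1.0):
    # Closed-form optimal latent weight for the Geman-McClure kernel.
    return (sigma**2 / (sigma**2 + r**2)) ** 2

def truncated_irls(A, b, outer_iters=30, inner_steps=2):
    # Ordinary least-squares warm start for the main parameters.
    theta = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(outer_iters):
        r = A @ theta - b
        # Transient latent weights: re-initialized every outer iteration,
        # then pushed a few damped fixed-point steps toward their optimum
        # instead of being solved exactly or stored across iterations.
        w = np.ones_like(b)
        for _ in range(inner_steps):
            w = 0.5 * w + 0.5 * gm_weight(r)
        # Weighted least-squares step on the main parameters.
        Aw = A * w[:, None]
        theta = np.linalg.solve(A.T @ Aw, Aw.T @ b)
    return theta
```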
Related papers
- A Stochastic Approach to Bi-Level Optimization for Hyperparameter Optimization and Meta Learning [74.80956524812714]
We tackle the general differentiable meta learning problem that is ubiquitous in modern deep learning.
These problems are often formalized as Bi-Level optimizations (BLO)
We introduce a novel perspective by turning a given BLO problem into a stochastic optimization problem, where the inner loss function becomes a smooth probability distribution and the outer loss becomes an expected loss over the inner distribution.
arXiv Detail & Related papers (2024-10-14T12:10:06Z)
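A rough illustration of that reformulation follows. The Gaussian smoothing of the inner solution, the ridge-regression toy problem, and the grid search over the outer variable are all my assumptions standing in for the paper's construction; the point is only that the inner argmin becomes a distribution and the outer objective a Monte Carlo expectation.

```python
import numpy as np

rng = np.random.default_rng(0)
Xtr, ytr = rng.normal(size=(40, 5)), rng.normal(size=40)   # toy train split
Xva, yva = rng.normal(size=(20, 5)), rng.normal(size=20)   # toy val split

def inner_solution(lam):
    # Exact ridge solution; stands in for the inner optimizer of the BLO.
    d = Xtr.shape[1]
    return np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(d), Xtr.T @ ytr)

def expected_outer_loss(lam, n_samples=64, sigma=0.05):
    # Smooth the inner solution into a Gaussian distribution and estimate
    # the outer (validation) loss as an expectation over it.
    w_star = inner_solution(lam)
    ws = w_star + sigma * rng.normal(size=(n_samples, w_star.size))
    return np.mean([np.sum((Xva @ w - yva) ** 2) for w in ws])

# Zeroth-order outer update: pick the hyperparameter with the lowest
# expected loss on a coarse grid (a stand-in for the paper's updates).
lams = np.logspace(-3, 2, 20)
best_lam = lams[np.argmin([expected_outer_loss(l) for l in lams])]
```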
- Adaptive Robust Learning using Latent Bernoulli Variables [50.223140145910904]
We present an adaptive approach for learning from corrupted training sets.
We identify corrupted and non-corrupted samples with latent Bernoulli variables.
The resulting problem is solved via variational inference.
arXiv Detail & Related papers (2023-12-01T13:50:15Z)
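A minimal EM-flavoured sketch of the idea (my construction, not the paper's variational scheme): each sample carries a latent Bernoulli "clean" indicator whose posterior is computed in closed form from a two-component Gaussian residual model and used to reweight the fit. All parameter names and values are illustrative.

```python
import numpy as np

def fit_with_bernoulli_weights(X, y, iters=20, pi=0.8, s_clean=1.0, s_bad=10.0):
    # pi is the prior probability that a sample is clean; s_clean and s_bad
    # are the residual scales of the clean and corrupted components.
    w = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        r2 = (X @ w - y) ** 2
        lik_clean = pi * np.exp(-r2 / (2 * s_clean**2)) / s_clean
        lik_bad = (1 - pi) * np.exp(-r2 / (2 * s_bad**2)) / s_bad
        # Posterior probability that each sample is clean.
        q = lik_clean / (lik_clean + lik_bad + 1e-300)
        # Reweighted normal equations using the posterior responsibilities.
        Xq = X * q[:, None]
        w = np.linalg.solve(X.T @ Xq + 1e-8 * np.eye(X.shape[1]), Xq.T @ y)
    return w, q
```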
- Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization [51.34904967046097]
Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
We introduce BAdam, a novel prior-based method that better constrains parameter growth, reducing catastrophic forgetting.
Results show that BAdam achieves state-of-the-art performance for prior-based methods on challenging single-headed class-incremental experiments.
arXiv Detail & Related papers (2023-09-15T17:10:51Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model
Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
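The iterative column-wise idea can be sketched as a MICE-style loop, shown below with a fixed linear learner; HyperImpute's contribution is to search over per-column model classes automatically, which this sketch omits.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def iterative_impute(X, n_iters=5):
    # Warm-start missing entries with column means, then repeatedly re-fit
    # a per-column regressor on the observed rows and re-predict that
    # column's missing entries from the remaining columns.
    X = X.copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):
        X[miss[:, j], j] = col_means[j]
    for _ in range(n_iters):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            obs = ~miss[:, j]
            others = np.delete(X, j, axis=1)
            model = LinearRegression().fit(others[obs], X[obs, j])
            X[miss[:, j], j] = model.predict(others[miss[:, j]])
    return X
```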
- Interpretable Latent Variables in Deep State Space Models [4.884336328409872]
We introduce a new version of deep state-space models (DSSMs) that combines a recurrent neural network with a state-space framework to forecast time series data.
The model estimates the observed series as functions of latent variables that evolve non-linearly through time.
arXiv Detail & Related papers (2022-03-03T23:10:58Z)
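As a toy illustration of that structure (fixed random weights and a deterministic recurrence; an actual DSSM learns the weights end to end and typically adds stochastic latents), a nonlinear latent state is filtered through the observed history and then rolled forward to forecast:

```python
import numpy as np

rng = np.random.default_rng(0)
Wh = 0.5 * rng.normal(size=(4, 4))   # latent state transition
Wu = rng.normal(size=(4, 1))         # map from the previous observation
Wy = rng.normal(size=(1, 4))         # observation readout

def forecast(y_history, horizon=5):
    # Filter the latent state through the observed history...
    h = np.zeros(4)
    for y in y_history:
        h = np.tanh(Wh @ h + Wu @ np.atleast_1d(y))
    # ...then roll the nonlinear recursion forward to forecast.
    preds = []
    for _ in range(horizon):
        y_hat = (Wy @ h).item()
        preds.append(y_hat)
        h = np.tanh(Wh @ h + Wu @ np.atleast_1d(y_hat))
    return preds
```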
- The Parametric Cost Function Approximation: A new approach for
multistage stochastic programming [4.847980206213335]
We show that a parameterized version of a deterministic optimization model can be an effective way of handling uncertainty without the complexity of stochastic programming or dynamic programming.
This approach can handle complex, high-dimensional state variables, and avoids the usual approximations associated with scenario trees or value function approximations.
arXiv Detail & Related papers (2022-01-01T23:25:09Z)
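A classic toy instance of a parametric cost function approximation (my example, not the paper's): a deterministic newsvendor order rule with one tunable safety parameter, evaluated and tuned by simulation over demand samples rather than by solving a stochastic program.

```python
import numpy as np

rng = np.random.default_rng(0)

def policy_cost(theta, n_sims=2000, mu=100.0, sd=20.0, c_over=1.0, c_under=4.0):
    # Parameterized deterministic rule: order up to the mean forecast plus
    # theta standard deviations of safety stock.
    order = mu + theta * sd
    demand = rng.normal(mu, sd, size=n_sims)
    over = np.maximum(order - demand, 0)    # leftover inventory
    under = np.maximum(demand - order, 0)   # unmet demand
    return np.mean(c_over * over + c_under * under)

# Tune the single policy parameter by grid search over simulated scenarios.
thetas = np.linspace(-1, 3, 41)
best_theta = thetas[np.argmin([policy_cost(t) for t in thetas])]
```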
- Model-agnostic and Scalable Counterfactual Explanations via
Reinforcement Learning [0.5729426778193398]
We propose a deep reinforcement learning approach that transforms the optimization procedure into an end-to-end learnable process.
Our experiments on real-world data show that our method is model-agnostic, relying only on feedback from model predictions.
arXiv Detail & Related papers (2021-06-04T16:54:36Z)
- Conditional Generative Modeling via Learning the Latent Space [54.620761775441046]
We propose a novel framework for conditional generation in multimodal spaces.
It uses latent variables to model generalizable learning patterns.
At inference, the latent variables are optimized to find optimal solutions corresponding to multiple output modes.
arXiv Detail & Related papers (2020-10-07T03:11:34Z)
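A bare-bones sketch of inference-time latent optimization follows, using a toy linear decoder of my own choosing; with a nonlinear multimodal decoder, the random restarts are what surface distinct output modes.

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.normal(size=(8, 2))   # toy linear "decoder": x = D @ z

def optimize_latent(x_target, steps=300, lr=0.02, restarts=4):
    # Gradient descent over the latent z from several random restarts;
    # each restart returns a candidate solution (a potential output mode).
    candidates = []
    for _ in range(restarts):
        z = rng.normal(size=2)
        for _ in range(steps):
            grad = 2 * D.T @ (D @ z - x_target)   # d/dz ||D z - x||^2
            z -= lr * grad
        candidates.append(z)
    return candidates
```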
- Automatically Learning Compact Quality-aware Surrogates for Optimization
Problems [55.94450542785096]
Solving optimization problems with unknown parameters often requires learning a predictive model that estimates the unknown parameters and then solving the problem using these estimates.
Recent work has shown that including the optimization problem as a layer in the model training pipeline yields predictions that lead to higher-quality downstream decisions.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z)
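One way to picture the surrogate idea is sketched below, using a fixed random subspace of my own choosing; the paper learns a compact, quality-aware low-dimensional model rather than fixing the subspace a priori.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 10                                 # full vs. surrogate dimension
M = rng.normal(size=(n, n))
Q = M @ M.T + n * np.eye(n)                    # large SPD quadratic objective
c = rng.normal(size=n)

# Restrict decisions to a k-dimensional subspace x = P @ y, so the large
# problem min_x 0.5 x'Qx - c'x shrinks to a k-dimensional surrogate in y.
P = np.linalg.qr(rng.normal(size=(n, k)))[0]   # orthonormal basis, n x k
Qs, cs = P.T @ Q @ P, P.T @ c                  # surrogate problem data
y = np.linalg.solve(Qs, cs)                    # solve the small problem
x = P @ y                                      # lift back to the full space
```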