Rich Feature Construction for the Optimization-Generalization Dilemma
- URL: http://arxiv.org/abs/2203.15516v1
- Date: Thu, 24 Mar 2022 20:39:33 GMT
- Title: Rich Feature Construction for the Optimization-Generalization Dilemma
- Authors: Jianyu Zhang, David Lopez-Paz, Léon Bottou
- Abstract summary: We construct a rich representation (RFC) containing a palette of potentially useful features, ready to be used by models.
RFC consistently helps six OoD methods achieve top performance on challenging invariant training benchmarks.
On the realistic Camelyon17 task, our method helps both OoD and ERM methods outperform earlier comparable results by at least $5\%$.
- Score: 18.721567020497968
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There often is a dilemma between ease of optimization and robust
out-of-distribution (OoD) generalization. For instance, many OoD methods rely
on penalty terms whose optimization is challenging. They are either too strong
to optimize reliably or too weak to achieve their goals.
In order to escape this dilemma, we propose to first construct a rich
representation (RFC) containing a palette of potentially useful features, ready
to be used by even simple models. On the one hand, a rich representation
provides a good initialization for the optimizer. On the other hand, it also
provides an inductive bias that helps OoD generalization. RFC is constructed in
a succession of training episodes. During each step of the discovery phase, we
craft a multi-objective optimization criterion and its associated datasets in a
manner that prevents the network from using the features constructed in the
previous iterations. During the synthesis phase, we use knowledge distillation
to force the network to simultaneously develop all the features identified
during the discovery phase.
RFC consistently helps six OoD methods achieve top performance on challenging
invariant training benchmarks such as ColoredMNIST (Arjovsky et al., 2020).
Furthermore, on the realistic Camelyon17 task, our method helps both OoD and
ERM methods outperform earlier comparable results by at least $5\%$ and reduce
standard deviation by at least $4.1\%$, and it makes hyperparameter tuning and
model selection more reliable.
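A minimal sketch of the two-phase procedure described above, assuming PyTorch. The decorrelation penalty used here to discourage each discovery round from reusing previously found features, and all architecture, data, and training details, are illustrative assumptions; the abstract only states that a multi-objective criterion and crafted datasets prevent feature reuse, and that knowledge distillation then merges the discovered features into a single representation.

```python
# Hypothetical sketch of the two-phase RFC procedure. The decorrelation penalty
# and all network/data details are illustrative assumptions, not the paper's
# exact multi-objective criterion or dataset construction.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_net(in_dim=64, feat_dim=32, n_classes=2):
    backbone = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))
    head = nn.Linear(feat_dim, n_classes)
    return backbone, head

def discovery_phase(x, y, n_rounds=3, lam=1.0, steps=200):
    """Train several feature extractors, each discouraged from reusing earlier features."""
    teachers = []
    for _ in range(n_rounds):
        backbone, head = make_net(x.shape[1])
        opt = torch.optim.Adam(list(backbone.parameters()) + list(head.parameters()), lr=1e-3)
        for _ in range(steps):
            feats = backbone(x)
            loss = F.cross_entropy(head(feats), y)
            # Illustrative "don't reuse old features" term: penalize cross-correlation
            # between the new features and each previously discovered representation.
            for old_backbone, _ in teachers:
                with torch.no_grad():
                    old_feats = old_backbone(x)
                z_new = (feats - feats.mean(0)) / (feats.std(0) + 1e-6)
                z_old = (old_feats - old_feats.mean(0)) / (old_feats.std(0) + 1e-6)
                loss = loss + lam * (z_new.T @ z_old / x.shape[0]).pow(2).mean()
            opt.zero_grad(); loss.backward(); opt.step()
        teachers.append((backbone, head))
    return teachers

def synthesis_phase(x, teachers, steps=200):
    """Distill all discovered features into one student (the rich representation)."""
    feat_dim = teachers[0][0][-1].out_features
    student = nn.Sequential(nn.Linear(x.shape[1], 128), nn.ReLU(),
                            nn.Linear(128, feat_dim * len(teachers)))
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    with torch.no_grad():
        targets = torch.cat([b(x) for b, _ in teachers], dim=1)
    for _ in range(steps):
        loss = F.mse_loss(student(x), targets)  # knowledge distillation on features
        opt.zero_grad(); loss.backward(); opt.step()
    return student

# Toy usage on random data:
# x, y = torch.randn(256, 64), torch.randint(0, 2, (256,))
# student = synthesis_phase(x, discovery_phase(x, y))
```

In this reading, the distilled student supplies the rich initialization and inductive bias on top of which simple ERM or OoD heads can then be trained.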
Related papers
- Adaptive Preference Scaling for Reinforcement Learning with Human Feedback [103.36048042664768]
Reinforcement learning from human feedback (RLHF) is a prevalent approach to align AI systems with human values.
We propose a novel adaptive preference loss, underpinned by distributionally robust optimization (DRO)
Our method is versatile and can be readily adapted to various preference optimization frameworks.
arXiv Detail & Related papers (2024-06-04T20:33:22Z)
- Functional Graphical Models: Structure Enables Offline Data-Driven Optimization [111.28605744661638]
We show how structure can enable sample-efficient data-driven optimization.
We also present a data-driven optimization algorithm that infers the FGM structure itself.
arXiv Detail & Related papers (2024-01-08T22:33:14Z)
- HomOpt: A Homotopy-Based Hyperparameter Optimization Method [10.11271414863925]
We propose HomOpt, a data-driven approach based on a generalized additive model (GAM) surrogate combined with homotopy optimization.
We show how HomOpt can boost the performance and effectiveness of any given method with faster convergence to the optimum on continuous, discrete, and categorical domain spaces.
arXiv Detail & Related papers (2023-08-07T06:01:50Z)
- Bidirectional Looking with A Novel Double Exponential Moving Average to Adaptive and Non-adaptive Momentum Optimizers [109.52244418498974]
We propose a novel Admeta (A Double exponential Moving averagE To Adaptive and non-adaptive momentum) framework.
We provide two implementations, AdmetaR and AdmetaS, the former based on RAdam and the latter based on SGDM; a generic sketch of the double-EMA idea appears after this list.
arXiv Detail & Related papers (2023-07-02T18:16:06Z)
- End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z)
- Robust expected improvement for Bayesian optimization [1.8130068086063336]
We propose a surrogate modeling and active learning technique called robust expected improvement (REI) that ports adversarial methodology into the BO/GP framework.
We illustrate and draw comparisons to several competitors on benchmark synthetic exercises and real problems of varying complexity.
arXiv Detail & Related papers (2023-02-16T22:34:28Z)
- Tensor Train for Global Optimization Problems in Robotics [6.702251803443858]
The convergence of many numerical optimization techniques is highly dependent on the initial guess given to the solver.
We propose a novel approach that utilizes tensor train (TT) methods to initialize existing optimization solvers near global optima.
We show that the proposed method can generate samples close to global optima and from multiple modes.
arXiv Detail & Related papers (2022-06-10T13:18:26Z)
- RoMA: Robust Model Adaptation for Offline Model-based Optimization [115.02677045518692]
We consider the problem of searching an input maximizing a black-box objective function given a static dataset of input-output queries.
A popular approach to solving this problem is maintaining a proxy model that approximates the true objective function.
Here, the main challenge is how to avoid adversarially optimized inputs during the search.
arXiv Detail & Related papers (2021-10-27T05:37:12Z)
- Non-convex Distributionally Robust Optimization: Non-asymptotic Analysis [16.499651513178012]
Distributionally robust optimization (DRO) is a widely-used approach to learn models that are robust against distribution shift.
We provide non-asymptotic convergence guarantees for normalized gradient descent even though the objective function is possibly non-convex and non-smooth.
arXiv Detail & Related papers (2021-10-24T14:56:38Z)
- Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425]
We argue for the use of neural generative models to characterize the worst-case distribution.
This approach poses a number of implementation and optimization challenges.
We find that the proposed approach yields models that are more robust than comparable baselines.
arXiv Detail & Related papers (2021-03-18T14:26:26Z)
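The Admeta entry above mentions driving both adaptive and non-adaptive optimizers with a double exponential moving average. As a rough, generic illustration (the actual AdmetaR/AdmetaS update rules are not given in the summary, so this DEMA-of-gradients form is an assumption):

```python
# Generic double exponential moving average (DEMA) of gradients -- an assumed,
# illustrative form only; not the actual Admeta/AdmetaR/AdmetaS update rules.
import torch

def dema_sgd_step(params, grads, state, lr=0.1, beta=0.9):
    """One SGD-like step driven by dema = 2*m1 - m2, where m1 is an EMA of the
    gradients and m2 is an EMA of m1."""
    for p, g in zip(params, grads):
        st = state.setdefault(id(p), {"m1": torch.zeros_like(p), "m2": torch.zeros_like(p)})
        st["m1"].mul_(beta).add_(g, alpha=1 - beta)         # first EMA (ordinary momentum)
        st["m2"].mul_(beta).add_(st["m1"], alpha=1 - beta)  # EMA of the EMA
        dema = 2 * st["m1"] - st["m2"]                      # reduces the lag of a single EMA
        p.data.add_(dema, alpha=-lr)

# Toy usage:
# w = torch.zeros(3, requires_grad=True); state = {}
# loss = ((w - 1.0) ** 2).sum(); loss.backward()
# dema_sgd_step([w], [w.grad], state)
```

The second average tracks the first, and combining them as 2*m1 - m2 keeps the smoothing of momentum while cutting the delay a single moving average introduces.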
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.