EEE, Remediating the failure of machine learning models via a
network-based optimization patch
- URL: http://arxiv.org/abs/2304.11321v1
- Date: Sat, 22 Apr 2023 05:23:46 GMT
- Title: EEE, Remediating the failure of machine learning models via a
network-based optimization patch
- Authors: Ruiyuan Kang, Dimitrios Kyritsis, Panos Liatsis
- Abstract summary: A network-based optimization approach, EEE, is proposed for the purpose of providing validation-viable state estimations.
It is shown that EEE either matches or outperforms popular optimization methods in terms of efficiency and convergence.
- Score: 2.449329947677678
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A network-based optimization approach, EEE, is proposed for the purpose of
providing validation-viable state estimations to remediate the failure of
pretrained models. To improve optimization efficiency and convergence, the most
important metrics in the context of this research, we follow a three-faceted
approach based on the error from the validation process. Firstly, we improve
the information content of the error by designing a validation module to
acquire high-dimensional error information. Next, we reduce the uncertainty of
error transfer by employing an ensemble of error estimators, which only learn
implicit errors, and by using Constrained Ensemble Exploration to collect high-value
data. Finally, the effectiveness of error utilization is improved by using
ensemble search to determine the most promising state. The benefits of the
proposed framework are demonstrated on four real-world engineering problems
with diverse state dimensions. It is shown that EEE either matches or
outperforms popular optimization methods in terms of efficiency and
convergence.
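
A minimal sketch of the loop the abstract describes, under stated assumptions: the validation module is a placeholder function, the error estimators are small scikit-learn MLPs trained on bootstrap resamples, and the "ensemble search" scores random candidate states with an assumed mean-minus-spread rule. None of these choices are the authors' implementation; they only illustrate the ensemble-of-error-estimators plus ensemble-search idea.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def validation_error(state):
    """Placeholder for the validation module: returns a scalar error for a
    candidate state (the real module would produce richer error information)."""
    return float(np.sum((state - 0.3) ** 2))

rng = np.random.default_rng(0)
dim, n_init, n_rounds, n_candidates, n_members = 4, 20, 10, 256, 5

# Initial design: random states and their validation errors.
X = rng.uniform(-1.0, 1.0, size=(n_init, dim))
y = np.array([validation_error(x) for x in X])

for _ in range(n_rounds):
    # Ensemble of error estimators, diversified here by bootstrap resampling.
    ensemble = []
    for seed in range(n_members):
        idx = rng.integers(0, len(X), size=len(X))
        m = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=seed)
        m.fit(X[idx], y[idx])
        ensemble.append(m)

    # "Ensemble search": score random candidate states with an assumed
    # optimistic rule (mean predicted error minus ensemble disagreement).
    cand = rng.uniform(-1.0, 1.0, size=(n_candidates, dim))
    preds = np.stack([m.predict(cand) for m in ensemble])
    score = preds.mean(axis=0) - 0.5 * preds.std(axis=0)
    best = cand[np.argmin(score)]

    # Run the (expensive) validation process on the chosen state and add it
    # to the training data for the next round.
    X = np.vstack([X, best])
    y = np.append(y, validation_error(best))

print("best state found:", X[np.argmin(y)], "with error", y.min())
```

In the paper, the validation module, the exploration strategy (Constrained Ensemble Exploration), and the search over states would replace the stand-ins above.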
Related papers
- Optimizers Qualitatively Alter Solutions And We Should Leverage This [62.662640460717476]
Deep Neural Networks (DNNs) cannot be guaranteed to converge to a unique global minimum of the loss when trained using only local information, such as SGD. We argue that the community should aim to understand the biases of existing methods, and to build new DNNs with the explicit intent of inducing certain properties of the solution.
arXiv Detail & Related papers (2025-07-16T13:33:31Z)
- KARE-RAG: Knowledge-Aware Refinement and Enhancement for RAG [63.82127103851471]
Retrieval-Augmented Generation (RAG) enables large language models to access broader knowledge sources. We demonstrate that enhancing generative models' capacity to process noisy content is equally critical for robust performance. We present KARE-RAG, which improves knowledge utilization through three key innovations.
arXiv Detail & Related papers (2025-06-03T06:31:17Z)
- Respecting the limit: Bayesian optimization with a bound on the optimal value [3.004066195320147]
We study the scenario in which we have either exact knowledge of the minimum value or, possibly, a lower bound on its value.
We present SlogGP, a new surrogate model that incorporates bound information and adapts the Expected Improvement (EI) acquisition function accordingly. (A rough sketch of a bound-aware EI appears after this list.)
arXiv Detail & Related papers (2024-11-07T14:27:49Z)
- End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
arXiv Detail & Related papers (2024-02-12T16:33:35Z)
- Functional Graphical Models: Structure Enables Offline Data-Driven Optimization [111.28605744661638]
We show how structure, captured by a functional graphical model (FGM), can enable sample-efficient data-driven optimization.
We also present a data-driven optimization algorithm that infers the FGM structure itself.
arXiv Detail & Related papers (2024-01-08T22:33:14Z)
- Optimizer's Information Criterion: Dissecting and Correcting Bias in Data-Driven Optimization [16.57676001669012]
In data-driven optimization, the sample performance of the obtained decision typically incurs an optimistic bias against the true performance.
Common techniques to correct this bias, such as cross-validation, require repeatedly solving additional optimization problems and are therefore expensive.
We develop a general bias correction approach that directly approximates the first-order bias and does not require solving any additional optimization problems.
arXiv Detail & Related papers (2023-06-16T07:07:58Z)
- Exploring validation metrics for offline model-based optimisation with diffusion models [50.404829846182764]
In model-based optimisation (MBO) we are interested in using machine learning to design candidates that maximise some measure of reward with respect to a black box function called the (ground truth) oracle.
While an approximation to the ground-truth oracle can be trained and used in its place during model validation to measure the mean reward over generated candidates, the evaluation is approximate and vulnerable to adversarial examples.
This is encapsulated under our proposed evaluation framework which is also designed to measure extrapolation.
arXiv Detail & Related papers (2022-11-19T16:57:37Z)
- Learning to Refit for Convex Learning Problems [11.464758257681197]
We propose a framework to learn to estimate optimized model parameters for different training sets using neural networks.
We rigorously characterize the power of neural networks to approximate convex problems.
arXiv Detail & Related papers (2021-11-24T15:28:50Z)
- Conservative Objective Models for Effective Offline Model-Based Optimization [78.19085445065845]
Computational design problems arise in a number of settings, from synthetic biology to computer architectures.
We propose a method that learns a model of the objective function that lower bounds the actual value of the ground-truth objective on out-of-distribution inputs.
COMs are simple to implement and outperform a number of existing methods on a wide range of MBO problems.
arXiv Detail & Related papers (2021-07-14T17:55:28Z)
- Interpreting Rate-Distortion of Variational Autoencoder and Using Model Uncertainty for Anomaly Detection [5.491655566898372]
We build a scalable machine learning system for unsupervised anomaly detection via representation learning.
We revisit VAE from the perspective of information theory to provide some theoretical foundations on using the reconstruction error.
We show empirically the competitive performance of our approach on benchmark datasets.
arXiv Detail & Related papers (2020-05-05T00:03:48Z)
- Decomposed Adversarial Learned Inference [118.27187231452852]
We propose a novel approach, Decomposed Adversarial Learned Inference (DALI).
DALI explicitly matches prior and conditional distributions in both data and code spaces.
We validate the effectiveness of DALI on the MNIST, CIFAR-10, and CelebA datasets.
arXiv Detail & Related papers (2020-04-21T20:00:35Z)
- Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets. (A small prototype-update sketch appears after this list.)
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
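
For the bound-aware Bayesian optimization entry above ("Respecting the limit"), here is a rough sketch of one way a known lower bound B on the optimal value can enter the Expected Improvement acquisition: improvement is only credited down to the bound, EI_B(x) = E[max(0, f_best - max(f(x), B))], estimated by Monte Carlo over a scikit-learn GP posterior. This truncation rule is an assumption for illustration; it is not the paper's SlogGP surrogate or its specific EI adaptation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
f = lambda x: np.sin(3 * x) + 0.5 * x   # toy objective to minimise
B = -1.5                                # assumed known lower bound on the optimum

# Fit a GP surrogate on a few observations.
X = rng.uniform(-2.0, 2.0, size=(8, 1))
y = f(X).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6).fit(X, y)

def truncated_ei(x_cand, n_samples=2000):
    """Monte Carlo estimate of E[max(0, f_best - max(f(x), B))]:
    improvement below the known bound B is not credited."""
    mu, sigma = gp.predict(x_cand, return_std=True)
    samples = rng.normal(mu, sigma, size=(n_samples, len(mu)))
    improvement = np.maximum(0.0, y.min() - np.maximum(samples, B))
    return improvement.mean(axis=0)

cand = np.linspace(-2.0, 2.0, 401).reshape(-1, 1)
x_next = cand[np.argmax(truncated_ei(cand))]
print("next query point suggested by bound-aware EI:", x_next)
```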
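
For the meta-learned confidence entry above, a small sketch of the transductive prototype update it refers to: each class prototype is refreshed with a confidence-weighted mean of the unlabeled queries. The confidence here is just a softmax over negative distances; in the paper these weights are meta-learned, so treat this as an assumed stand-in.

```python
import numpy as np

rng = np.random.default_rng(2)
n_classes, n_support, n_query, dim = 5, 5, 15, 64

# Embeddings that a backbone network would produce (random stand-ins here).
support = rng.normal(size=(n_classes, n_support, dim))
queries = rng.normal(size=(n_query, dim))

# Initial prototypes: per-class mean of the support embeddings.
prototypes = support.mean(axis=1)                                  # (n_classes, dim)

# Confidence of each query for each class: softmax over negative Euclidean
# distances to the current prototypes (the paper meta-learns these weights).
d = np.linalg.norm(queries[:, None, :] - prototypes[None, :, :], axis=-1)
conf = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)          # (n_query, n_classes)

# Transductive update: each prototype becomes the weighted mean of its support
# points (weight 1) and all queries (weight = confidence for that class).
refined = (support.sum(axis=1) + conf.T @ queries) / (n_support + conf.sum(axis=0)[:, None])
print(refined.shape)   # (5, 64)
```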