Outage Performance and Novel Loss Function for an ML-Assisted Resource
Allocation: An Exact Analytical Framework
- URL: http://arxiv.org/abs/2305.09739v2
- Date: Sat, 18 Nov 2023 09:08:14 GMT
- Title: Outage Performance and Novel Loss Function for an ML-Assisted Resource
Allocation: An Exact Analytical Framework
- Authors: Nidhi Simmons, David E Simmons, Michel Daoud Yacoub
- Abstract summary: We introduce a novel loss function to minimize the outage probability of an ML-based resource allocation system.
An ML binary classification predictor assists in selecting a resource satisfying the established outage criterion.
- Score: 2.1397655110395752
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a novel loss function to minimize the outage probability of an
ML-based resource allocation system. A single-user multi-resource greedy
allocation strategy constitutes our application scenario, for which an ML
binary classification predictor assists in selecting a resource satisfying the
established outage criterion. While other resource allocation policies may be
suitable, they are not the focus of our study. Instead, our primary emphasis is
on theoretically developing this loss function and leveraging it to train an ML
model to address the outage probability challenge. With no access to future
channel state information, this predictor foresees each resource's likely
future outage status. When the predictor encounters a resource it believes will
be satisfactory, it allocates it to the user. Our main result establishes exact
and asymptotic expressions for this system's outage probability. These
expressions reveal that focusing solely on the optimization of the per-resource
outage probability conditioned on the ML predictor recommending resource
allocation (a strategy that appears to be most appropriate) may produce
inadequate predictors that reject every resource. They also reveal that
focusing on standard metrics, like precision, false-positive rate, or recall,
may not produce optimal predictors. With our result, we formulate a
theoretically optimal, differentiable loss function to train our predictor. We
then compare predictors trained using this loss function with those trained
using traditional loss functions, namely binary cross-entropy (BCE), mean
squared error (MSE), and mean absolute
error (MAE). In all scenarios, predictors trained using our novel loss function
provide superior outage probability performance. Moreover, in some cases,
predictors trained with our loss function outperform those trained with BCE,
MAE, and MSE by multiple orders of magnitude.
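The greedy strategy described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the predictor here is a hypothetical threshold rule on an observed channel-quality proxy (`predict_ok`, `threshold`, and the exponential fading proxy are all assumptions for illustration), standing in for the trained ML binary classifier.

```python
import numpy as np

# Hypothetical stand-in for the ML binary classifier: flags a resource as
# likely to satisfy the outage criterion when its observed channel-quality
# proxy exceeds a threshold. The paper trains this predictor with a custom
# loss; the threshold rule here is purely illustrative.
def predict_ok(observation, threshold=1.0):
    return observation > threshold

# Single-user multi-resource greedy allocation: scan the resources in order
# and allocate the first one the predictor believes will be satisfactory.
# Returns the index of the allocated resource, or None if the predictor
# rejects every resource -- the degenerate outcome the paper's analysis
# shows can arise from optimizing only the conditional per-resource outage
# probability.
def greedy_allocate(observations, threshold=1.0):
    for k, obs in enumerate(observations):
        if predict_ok(obs, threshold):
            return k
    return None

rng = np.random.default_rng(0)
obs = rng.exponential(scale=1.0, size=8)  # illustrative fading-power proxies
chosen = greedy_allocate(obs)
```

The `None` branch makes the abstract's point concrete: a predictor that is too conservative allocates nothing, so the system-level outage probability, not a per-resource conditional metric, is what the loss function must target.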
Related papers
- Scaling Laws for Predicting Downstream Performance in LLMs [75.28559015477137]
This work focuses on the pre-training loss as a more-efficient metric for performance estimation.
We extend the power law analytical function to predict domain-specific pre-training loss based on FLOPs across data sources.
We employ a two-layer neural network to model the non-linear relationship between multiple domain-specific losses and downstream performance.
arXiv Detail & Related papers (2024-10-11T04:57:48Z)
- Enforcing Equity in Neural Climate Emulators [0.0]
We propose a custom loss function which punishes neural network emulators with unequal quality of predictions.
The loss function does not specify a particular definition of equity to bias the neural network towards.
Our results show that neural climate emulators trained with our loss function provide more equitable predictions.
arXiv Detail & Related papers (2024-06-28T03:47:54Z)
- Rejection via Learning Density Ratios [50.91522897152437]
Classification with rejection emerges as a learning paradigm which allows models to abstain from making predictions.
We propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance.
Our framework is tested empirically over clean and noisy datasets.
arXiv Detail & Related papers (2024-05-29T01:32:17Z)
- Distribution-free risk assessment of regression-based machine learning algorithms [6.507711025292814]
We focus on regression algorithms and the risk-assessment task of computing the probability of the true label lying inside an interval defined around the model's prediction.
We solve the risk-assessment problem using the conformal prediction approach, which provides prediction intervals that are guaranteed to contain the true label with a given probability.
arXiv Detail & Related papers (2023-10-05T13:57:24Z)
- Probable Domain Generalization via Quantile Risk Minimization [90.15831047587302]
Domain generalization seeks predictors which perform well on unseen test distributions.
We propose a new probabilistic framework for DG where the goal is to learn predictors that perform well with high probability.
arXiv Detail & Related papers (2022-07-20T14:41:09Z)
- An Explainable Regression Framework for Predicting Remaining Useful Life of Machines [6.374451442486538]
This paper proposes an explainable regression framework for the prediction of machines' Remaining Useful Life (RUL).
We also evaluate several Machine Learning (ML) algorithms including classical and Neural Networks (NNs) based solutions for the task.
arXiv Detail & Related papers (2022-04-28T15:44:12Z)
- Domain-Adjusted Regression or: ERM May Already Learn Features Sufficient for Out-of-Distribution Generalization [52.7137956951533]
We argue that devising simpler methods for learning predictors on existing features is a promising direction for future research.
We introduce Domain-Adjusted Regression (DARE), a convex objective for learning a linear predictor that is provably robust under a new model of distribution shift.
Under a natural model, we prove that the DARE solution is the minimax-optimal predictor for a constrained set of test distributions.
arXiv Detail & Related papers (2022-02-14T16:42:16Z)
- A new perspective on classification: optimally allocating limited resources to uncertain tasks [4.169130102668252]
In credit card fraud detection, for instance, a bank can only assign a small subset of transactions to their fraud investigations team.
We argue that using classification to address task uncertainty is inherently suboptimal as it does not take into account the available capacity.
We present a novel solution using learning to rank by directly optimizing the assignment's expected profit given limited capacity.
arXiv Detail & Related papers (2022-02-09T10:14:45Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold.
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- Optimal Resource Allocation for Serverless Queries [8.59568779761598]
Prior work focused on predicting peak allocation while ignoring aggressive trade-offs between resource allocation and run-time.
We introduce a system for optimal resource allocation that can predict performance with aggressive trade-offs, for both new and past observed queries.
arXiv Detail & Related papers (2021-07-19T02:55:48Z)
- Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the importance-guided stochastic gradient descent (IGSGD) method to train inference models on inputs containing missing values, without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z)
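Among the entries above, the ATC method (Leveraging Unlabeled Data to Predict Out-of-Distribution Performance) admits a compact sketch. The function below is an illustrative reconstruction from the one-line summary, not the authors' code; the confidence scores and the quantile-based threshold selection are assumptions.

```python
import numpy as np

def atc_accuracy_estimate(val_conf, val_correct, target_conf):
    """Average Thresholded Confidence (ATC), sketched from the summary above.

    Choose a confidence threshold t on labeled source (validation) data such
    that the fraction of source examples with confidence above t matches the
    source accuracy; then predict target-domain accuracy as the fraction of
    unlabeled target examples whose confidence exceeds t.
    """
    source_acc = np.mean(val_correct)
    # Threshold picked so that P(confidence > t) on source equals source
    # accuracy (equivalently, P(confidence < t) equals the error rate).
    t = np.quantile(val_conf, 1.0 - source_acc)
    return np.mean(target_conf > t)
```

The appeal of the method is that it needs only model confidences: labeled source data to fit the threshold, and unlabeled target data to score.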
This list is automatically generated from the titles and abstracts of the papers in this site.