Deep Huber quantile regression networks
- URL: http://arxiv.org/abs/2306.10306v2
- Date: Fri, 18 Apr 2025 13:28:17 GMT
- Title: Deep Huber quantile regression networks
- Authors: Hristos Tyralis, Georgia Papacharalampous, Nilay Dogulu, Kwok P. Chun
- Abstract summary: We introduce deep Huber quantile regression networks (DHQRN) that nest QRNN and ERNN as edge cases. DHQRN can predict Huber quantiles, which are more general functionals in the sense that they nest quantiles and expectiles as limiting cases. As a proof of concept, DHQRN are applied to predict house prices in Melbourne, Australia and Boston, United States.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Typical machine learning regression applications aim to report the mean or the median of the predictive probability distribution, via training with a squared or an absolute error scoring function. The importance of issuing predictions of more functionals of the predictive probability distribution (quantiles and expectiles) has been recognized as a means to quantify the uncertainty of the prediction. In deep learning (DL) applications, that is possible through quantile and expectile regression neural networks (QRNN and ERNN respectively). Here we introduce deep Huber quantile regression networks (DHQRN) that nest QRNN and ERNN as edge cases. DHQRN can predict Huber quantiles, which are more general functionals in the sense that they nest quantiles and expectiles as limiting cases. The main idea is to train a DL algorithm with the Huber quantile scoring function, which is consistent for the Huber quantile functional. As a proof of concept, DHQRN are applied to predict house prices in Melbourne, Australia and Boston, United States (US). In this context, predictive performances of three DL architectures are discussed along with evidential interpretation of results from two economic case studies. Additional simulation experiments and applications to real-world case studies using open datasets demonstrate a satisfactory absolute performance of DHQRN.
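The abstract's key ingredient is the Huber quantile scoring function: a Huber-type penalty weighted asymmetrically by the quantile level, which nests the quantile (pinball) and expectile scoring functions as limiting cases. The PyTorch sketch below is a minimal illustration under one plausible parameterization (a Huber function with threshold `delta`, weighted by level `tau`); the exact definition, normalization and parameter names in the paper may differ, and this is not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): an asymmetrically weighted
# Huber loss of the kind the abstract calls the Huber quantile scoring
# function. Assumptions: level tau in (0, 1), threshold delta > 0.
import torch


def huber_quantile_loss(pred: torch.Tensor,
                        target: torch.Tensor,
                        tau: float = 0.5,
                        delta: float = 1.0) -> torch.Tensor:
    """Asymmetrically weighted Huber loss (one plausible parameterization)."""
    u = target - pred                          # residual
    abs_u = u.abs()
    # Classical Huber function: quadratic near zero, linear in the tails.
    huber = torch.where(abs_u <= delta,
                        0.5 * u ** 2,
                        delta * (abs_u - 0.5 * delta))
    # Asymmetric weight |tau - 1{u < 0}|, as in quantile/expectile scoring.
    weight = torch.abs(tau - (u < 0).float())
    return (weight * huber).mean()


# Tiny usage example: fit a small network to a high Huber quantile of y | x.
torch.manual_seed(0)
x = torch.randn(512, 3)
y = (x @ torch.tensor([1.0, -2.0, 0.5]) + 0.3 * torch.randn(512)).unsqueeze(1)

net = torch.nn.Sequential(torch.nn.Linear(3, 32), torch.nn.ReLU(),
                          torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    huber_quantile_loss(net(x), y, tau=0.9, delta=0.1).backward()
    opt.step()
```

Under this parameterization, letting `delta` shrink toward zero recovers (after rescaling by `delta`) the pinball loss used in QRNN, while a large `delta` yields the asymmetric squared loss used in ERNN; this is the sense in which Huber quantiles nest quantiles and expectiles as limiting cases.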
Related papers
- Linearization Turns Neural Operators into Function-Valued Gaussian Processes [23.85470417458593]
We introduce LUNO, a novel framework for approximate Bayesian uncertainty quantification in trained neural operators.
Our approach leverages model linearization to push (Gaussian) weight-space uncertainty forward to the neural operator's predictions.
We show that this can be interpreted as a probabilistic version of the concept of currying from functional programming, yielding a function-valued (Gaussian) random process belief.
arXiv Detail & Related papers (2024-06-07T16:43:54Z) - Relaxed Quantile Regression: Prediction Intervals for Asymmetric Noise [51.87307904567702]
Quantile regression is a leading approach for obtaining such intervals via the empirical estimation of quantiles in the distribution of outputs (a minimal sketch of this standard interval construction is given after this list).
We propose Relaxed Quantile Regression (RQR), a direct alternative to quantile regression based interval construction that removes this arbitrary constraint.
We demonstrate that this added flexibility results in intervals with an improvement in desirable qualities.
arXiv Detail & Related papers (2024-06-05T13:36:38Z) - Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom suggests that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z) - Improved uncertainty quantification for neural networks with Bayesian last layer [0.0]
Uncertainty quantification is an important task in machine learning.
We present a reformulation of the log-marginal likelihood of a neural network (NN) with a Bayesian last layer (BLL), which allows for efficient training using backpropagation.
arXiv Detail & Related papers (2023-02-21T20:23:56Z) - IB-UQ: Information bottleneck based uncertainty quantification for neural function regression and neural operator learning [11.5992081385106]
We propose a novel framework for uncertainty quantification via information bottleneck (IB-UQ) for scientific machine learning tasks.
We implement the bottleneck with a confidence-aware encoder, which maps inputs to latent representations according to the confidence of the input data.
We also propose a data augmentation based information bottleneck objective which can enhance the quality of the extrapolation uncertainty.
arXiv Detail & Related papers (2023-02-07T05:56:42Z) - Bagged Polynomial Regression and Neural Networks [0.0]
Series and polynomial regression are able to approximate the same function classes as neural networks.
Bagged polynomial regression (BPR) is an attractive alternative to neural networks.
BPR performs as well as neural networks in crop classification using satellite data.
arXiv Detail & Related papers (2022-05-17T19:55:56Z) - Understanding the Under-Coverage Bias in Uncertainty Estimation [58.03725169462616]
Quantile regression tends to under-cover, producing intervals below the desired coverage level in practice.
We prove that quantile regression suffers from an inherent under-coverage bias.
Our theory reveals that this under-coverage bias stems from a certain high-dimensional parameter estimation error.
arXiv Detail & Related papers (2021-06-10T06:11:55Z) - Towards an Understanding of Benign Overfitting in Neural Networks [104.2956323934544]
Modern machine learning models often employ a huge number of parameters and are typically optimized to have zero training loss.
We examine how these benign overfitting phenomena occur in a two-layer neural network setting.
We show that it is possible for the two-layer ReLU network interpolator to achieve a near minimax-optimal learning rate.
arXiv Detail & Related papers (2021-06-06T19:08:53Z) - Flexible Model Aggregation for Quantile Regression [92.63075261170302]
Quantile regression is a fundamental problem in statistical learning motivated by a need to quantify uncertainty in predictions.
We investigate methods for aggregating any number of conditional quantile models.
All of the models we consider in this paper can be fit using modern deep learning toolkits.
arXiv Detail & Related papers (2021-02-26T23:21:16Z) - Quantile regression with deep ReLU Networks: Estimators and minimax rates [4.522666263036413]
We study quantile regression with rectified linear unit (ReLU) neural networks.
We derive an upper bound on the expected mean squared error of a ReLU network.
These tight bounds imply ReLU networks with quantile regression achieve minimax rates for broad collections of function types.
arXiv Detail & Related papers (2020-10-16T08:34:04Z) - A Statistical Learning Assessment of Huber Regression [8.834480010537229]
We show that the usual risk consistency property of Huber regression estimators cannot guarantee their learnability in mean regression.
We also argue that Huber regression should be implemented in an adaptive way to perform mean regression.
We establish almost sure convergence rates for Huber regression estimators.
arXiv Detail & Related papers (2020-09-27T06:08:21Z) - How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks [80.55378250013496]
We study how neural networks trained by gradient descent extrapolate what they learn outside the support of the training distribution.
Graph Neural Networks (GNNs) have shown some success in more complex tasks.
arXiv Detail & Related papers (2020-09-24T17:48:59Z)
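Several of the entries above (Relaxed Quantile Regression, the under-coverage analysis, and the deep ReLU quantile regression paper) refer to the standard baseline of fitting two conditional quantiles with the pinball loss and using them as prediction-interval endpoints. The sketch below illustrates only that baseline construction; it is not RQR or any other listed method, and the data, network size and quantile levels are placeholders.

```python
# Minimal baseline sketch: a (1 - alpha) prediction interval from two
# pinball-loss quantile regressions, not any specific paper's method.
import torch


def pinball_loss(pred: torch.Tensor, target: torch.Tensor, tau: float) -> torch.Tensor:
    """Quantile (pinball) loss at level tau."""
    u = target - pred
    return torch.maximum(tau * u, (tau - 1.0) * u).mean()


torch.manual_seed(0)
x = torch.rand(1024, 1) * 4.0
y = torch.sin(x) + 0.2 * (1.0 + x) * torch.randn_like(x)   # heteroscedastic noise

alpha = 0.1                                  # target 90% interval
levels = (alpha / 2, 1.0 - alpha / 2)        # lower and upper quantile levels
nets = [torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, 1)) for _ in levels]

# Fit one small network per quantile level.
for tau, net in zip(levels, nets):
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(300):
        opt.zero_grad()
        pinball_loss(net(x), y, tau).backward()
        opt.step()

lower, upper = (net(x).detach() for net in nets)
coverage = ((y >= lower) & (y <= upper)).float().mean().item()
print(f"empirical coverage: {coverage:.2f} (target {1 - alpha:.2f})")
```

Fitting the two levels with separate networks is the simplest variant; a single multi-output network trained on both levels is a common alternative, and the under-coverage paper above concerns the systematic gap that can appear between the empirical coverage computed here and the nominal 1 - alpha level.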