Multi-target regression via output space quantization
- URL: http://arxiv.org/abs/2003.09896v1
- Date: Sun, 22 Mar 2020 13:57:40 GMT
- Title: Multi-target regression via output space quantization
- Authors: Eleftherios Spyromitros-Xioufis, Konstantinos Sechidis and Ioannis Vlahavas
- Abstract summary: The proposed method, called MRQ, is based on the idea of quantizing the output space in order to transform the multiple continuous targets into one or more discrete ones.
Experiments on a large collection of benchmark datasets show that MRQ is both highly scalable and competitive with the state-of-the-art in terms of accuracy.
- Score: 0.3007949058551534
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-target regression is concerned with the prediction of multiple
continuous target variables using a shared set of predictors. Two key
challenges in multi-target regression are: (a) modelling target dependencies
and (b) scalability to large output spaces. In this paper, a new multi-target
regression method is proposed that tries to jointly address these challenges
via a novel problem transformation approach. The proposed method, called MRQ,
is based on the idea of quantizing the output space in order to transform the
multiple continuous targets into one or more discrete ones. Learning on the
transformed output space naturally enables modeling of target dependencies
while the quantization strategy can be flexibly parameterized to control the
trade-off between prediction accuracy and computational efficiency. Experiments
on a large collection of benchmark datasets show that MRQ is both highly
scalable and competitive with the state-of-the-art in terms of accuracy.
In particular, an ensemble version of MRQ obtains the best overall accuracy,
while being an order of magnitude faster than the runner-up method.
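The transformation the abstract describes is easy to picture in code. Below is a minimal sketch of the general recipe, assuming one vector quantizer and one multi-class learner; MRQ's actual quantization strategies and parameterization are richer, and the names here (QuantizedMultiTargetRegressor, n_codes) are ours, not the paper's.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

class QuantizedMultiTargetRegressor:
    """Sketch of output-space quantization for multi-target regression:
    cluster the target vectors into n_codes prototypes, learn a single
    multi-class model over the prototype index, predict the centroid."""

    def __init__(self, n_codes=64):
        self.quantizer = KMeans(n_clusters=n_codes, n_init=10, random_state=0)
        self.classifier = RandomForestClassifier(random_state=0)

    def fit(self, X, Y):
        # Continuous target vectors -> one discrete code per example.
        codes = self.quantizer.fit_predict(Y)
        # A single classifier covers all targets jointly, so dependencies
        # between targets are modeled for free.
        self.classifier.fit(X, codes)
        return self

    def predict(self, X):
        # Discrete code -> back to a continuous multi-target prediction.
        codes = self.classifier.predict(X)
        return self.quantizer.cluster_centers_[codes]
```

In this reading, n_codes is the knob behind the abstract's accuracy/efficiency trade-off: more codes reduce quantization error but enlarge the classification problem.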
Related papers
- Q-VLM: Post-training Quantization for Large Vision-Language Models [73.19871905102545]
We propose a post-training quantization framework for large vision-language models (LVLMs) for efficient multi-modal inference.
We mine the cross-layer dependency that significantly influences discretization errors of the entire vision-language model, and embed this dependency into the optimal quantization strategy.
Experimental results demonstrate that our method compresses memory by 2.78x and increases generation speed by 1.44x on the 13B LLaVA model without performance degradation.
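For intuition about the discretization error such frameworks trade against, here is a generic uniform post-training quantization round-trip; this illustrates plain per-tensor PTQ only, not Q-VLM's cross-layer dependency mining, and all names are ours.

```python
import numpy as np

def quantize_dequantize(w, num_bits=8):
    """Uniform symmetric per-tensor quantization: map floats onto a
    signed integer grid and back, returning the discretization error."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    w_hat = q * scale
    return w_hat, float(np.linalg.norm(w - w_hat))

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)
_, err4 = quantize_dequantize(w, num_bits=4)
_, err8 = quantize_dequantize(w, num_bits=8)  # finer grid, smaller error
```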
arXiv Detail & Related papers (2024-10-10T17:02:48Z)
- Variational Learning of Gaussian Process Latent Variable Models through Stochastic Gradient Annealed Importance Sampling [22.256068524699472]
In this work, we propose an Annealed Importance Sampling (AIS) approach to address challenges in the variational learning of Gaussian process latent variable models.
We combine the strengths of Sequential Monte Carlo samplers and VI to explore a wider range of posterior distributions and gradually approach the target distribution.
Experimental results on both toy and image datasets demonstrate that our method outperforms state-of-the-art methods in terms of tighter variational bounds, higher log-likelihoods, and more robust convergence.
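The AIS backbone the summary builds on can be shown on a toy 1D problem. The sketch below is textbook AIS with geometric bridges and one Metropolis step per temperature, not the paper's SMC/VI hybrid; the densities and step sizes are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Anneal from a N(0,1) prior to an unnormalized target via geometric bridges.
def log_prior(x): return -0.5 * x**2
def log_target(x): return -0.5 * (x - 3.0)**2  # unnormalized

betas = np.linspace(0.0, 1.0, 50)
n = 1000
x = rng.standard_normal(n)   # exact samples from the (normalized) prior
logw = np.zeros(n)
for b0, b1 in zip(betas[:-1], betas[1:]):
    # Importance-weight increment between adjacent bridge distributions.
    logw += (b1 - b0) * (log_target(x) - log_prior(x))
    # One Metropolis step targeting the next bridge pi_{b1}.
    log_pi = lambda z: (1.0 - b1) * log_prior(z) + b1 * log_target(z)
    prop = x + 0.5 * rng.standard_normal(n)
    accept = np.log(rng.random(n)) < log_pi(prop) - log_pi(x)
    x = np.where(accept, prop, x)

# Stable log of the estimated normalizing-constant ratio Z_target / Z_prior.
log_Z = np.log(np.mean(np.exp(logw - logw.max()))) + logw.max()
```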
arXiv Detail & Related papers (2024-08-13T08:09:05Z)
- Relaxed Quantile Regression: Prediction Intervals for Asymmetric Noise [51.87307904567702]
Quantile regression is a leading approach for obtaining prediction intervals via empirical estimation of quantiles of the output distribution.
We propose Relaxed Quantile Regression (RQR), a direct alternative to quantile-regression-based interval construction that removes the arbitrary constraint that the interval endpoints must coincide with fixed, pre-specified quantile levels.
We demonstrate that this added flexibility yields intervals with improved desirable qualities.
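For context, the construction RQR relaxes looks like this with standard tools: two independently trained quantile models pinned to fixed levels alpha/2 and 1 - alpha/2. This is the conventional baseline only, not the RQR objective itself.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + rng.gamma(1.0, 0.3, size=500)  # asymmetric noise

# Conventional 90% interval: endpoints pinned to fixed quantile levels.
lower = GradientBoostingRegressor(loss="quantile", alpha=0.05).fit(X, y)
upper = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(X, y)
interval = np.stack([lower.predict(X), upper.predict(X)], axis=1)
```

Under asymmetric noise, the (0.05, 0.95) pair is generally not the narrowest interval with 90% coverage, which is exactly the slack RQR exploits.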
arXiv Detail & Related papers (2024-06-05T13:36:38Z)
- Multiple Hypothesis Dropout: Estimating the Parameters of Multi-Modal Output Distributions [22.431244647796582]
This paper presents a Mixture of Multiple-Output functions (MoM) approach using a novel variant of dropout, Multiple Hypothesis Dropout.
Experiments on supervised learning problems illustrate that our approach outperforms existing solutions for reconstructing multimodal output distributions.
Additional studies on unsupervised learning problems show that estimating the parameters of latent posterior distributions within a discrete autoencoder significantly improves codebook efficiency, sample quality, precision and recall.
arXiv Detail & Related papers (2023-12-18T22:20:11Z)
- Multi-Target XGBoostLSS Regression [91.3755431537592]
We present an extension of XGBoostLSS that models multiple targets and their dependencies in a probabilistic regression setting.
Our approach outperforms existing GBMs with respect to runtime and compares well in terms of accuracy.
arXiv Detail & Related papers (2022-10-13T08:26:14Z)
- Trustworthy Multimodal Regression with Mixture of Normal-inverse Gamma Distributions [91.63716984911278]
We introduce a novel Mixture of Normal-Inverse Gamma distributions (MoNIG) algorithm, which efficiently estimates uncertainty in a principled way for adaptive integration of different modalities and produces trustworthy regression results.
Experimental results on both synthetic and different real-world data demonstrate the effectiveness and trustworthiness of our method on various multimodal regression tasks.
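The per-modality building block is a Normal-Inverse Gamma head, whose standard predictive moments (as in evidential regression) separate data noise from model uncertainty; this separation is what makes adaptive fusion possible. The fusion rule itself is MoNIG's contribution and is not reproduced here.

```python
def nig_moments(gamma, nu, alpha, beta):
    """Standard predictive moments of a Normal-Inverse-Gamma head
    (requires alpha > 1). Each modality contributes one such head."""
    prediction = gamma                        # E[mu]
    aleatoric = beta / (alpha - 1.0)          # E[sigma^2]: noise in the data
    epistemic = beta / (nu * (alpha - 1.0))   # Var[mu]: model uncertainty
    return prediction, aleatoric, epistemic
```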
arXiv Detail & Related papers (2021-11-11T14:28:12Z)
- Communication-Efficient Distributed Quantile Regression with Optimal Statistical Guarantees [2.064612766965483]
We address the problem of how to achieve optimal inference in distributed quantile regression without stringent scaling conditions.
The difficulties are resolved through a double-smoothing approach that is applied to the local (at each data source) and global objective functions.
Despite the reliance on a delicate combination of local and global smoothing parameters, the quantile regression model is fully parametric.
arXiv Detail & Related papers (2021-10-25T17:09:59Z)
- Multivariate Probabilistic Regression with Natural Gradient Boosting [63.58097881421937]
We propose a Natural Gradient Boosting (NGBoost) approach based on nonparametrically modeling the conditional parameters of the multivariate predictive distribution.
Our method is robust, works out-of-the-box without extensive tuning, is modular with respect to the assumed target distribution, and performs competitively in comparison to existing approaches.
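The univariate version of this idea is available in the ngboost package; a minimal usage sketch follows. The paper's multivariate predictive distribution replaces Normal here, and exact attribute names on the returned distribution object may vary across versions.

```python
from ngboost import NGBRegressor
from ngboost.distns import Normal
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Boost the parameters of a full predictive distribution, not just a mean.
ngb = NGBRegressor(Dist=Normal).fit(X_tr, y_tr)
point = ngb.predict(X_te)   # mean of the predictive distribution
dist = ngb.pred_dist(X_te)  # distribution object; dist.params holds the
                            # fitted per-point parameters (loc, scale)
```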
arXiv Detail & Related papers (2021-06-07T17:44:49Z)
- Reinforcement Learning for Adaptive Mesh Refinement [63.7867809197671]
We propose a novel formulation of AMR as a Markov decision process and apply deep reinforcement learning to train refinement policies directly from simulation.
The model sizes of these policy architectures are independent of the mesh size and hence scale to arbitrarily large and complex simulations.
arXiv Detail & Related papers (2021-03-01T22:55:48Z)
- Quantile Surfaces -- Generalizing Quantile Regression to Multivariate Targets [4.979758772307178]
Our approach is based on an extension of single-output quantile regression (QR) to multivariate targets, called quantile surfaces (QS).
We present a novel two-stage process: in the first stage, we perform a deterministic point forecast (i.e., central tendency estimation).
Subsequently, we model the prediction uncertainty using QS involving neural networks, called quantile surface regression neural networks (QSNN).
We evaluate our novel approach on synthetic data and on two currently researched real-world challenges in two different domains: first, probabilistic forecasting for renewable energy power generation; second, short-term cyclist trajectory forecasting.
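A drastically simplified analogue of the two-stage process, under our own assumptions: a linear point forecast in stage one, and a single conditional quantile of the residual norm in stage two, instead of the full directional surface a QSNN would learn.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
Y = X @ rng.normal(size=(3, 2)) + rng.normal(scale=0.5, size=(500, 2))

# Stage 1: deterministic point forecast of the multivariate target.
center = LinearRegression().fit(X, Y)
residuals = Y - center.predict(X)

# Stage 2: model uncertainty around the point forecast via a conditional
# 90% quantile of the residual norm (a circle, not a full surface).
radius = np.linalg.norm(residuals, axis=1)
q90 = GradientBoostingRegressor(loss="quantile", alpha=0.9).fit(X, radius)
```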
arXiv Detail & Related papers (2020-09-29T16:35:37Z)