Variance function estimation in regression model via aggregation
procedures
- URL: http://arxiv.org/abs/2110.02715v1
- Date: Wed, 6 Oct 2021 13:03:52 GMT
- Title: Variance function estimation in regression model via aggregation
procedures
- Authors: Ahmed Zaoui (LAMA)
- Abstract summary: We focus on two particular aggregation settings: Model Selection aggregation (MS) and Convex aggregation (C).
The construction of the estimator relies on a two-step procedure and requires two independent samples.
We show the consistency of the proposed method with respect to the L2-error both for MS and C aggregations.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the regression problem, we consider the problem of estimating the variance
function by means of aggregation methods. We focus on two particular aggregation
settings: Model Selection aggregation (MS) and Convex aggregation (C), where the goal
is, respectively, to select the best candidate and to build the best convex combination
of candidates from a given collection. In both cases, the construction of the estimator
relies on a two-step procedure and requires two independent samples. The first step
exploits the first sample to build the candidate estimators of the variance function by
the residual-based method, and the second dataset is then used to perform the
aggregation step. We show the consistency of the proposed method with respect to the
L2-error for both MS and C aggregation. We evaluate the performance of these two
methods in the heteroscedastic model and illustrate their usefulness in the regression
problem with reject option.
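As a rough illustration of this two-step scheme, the following Python sketch builds residual-based candidate variance estimators on the first sample and then performs MS or C aggregation on the second. It is a minimal sketch, not the paper's exact construction: the mean estimator, the candidate family and the optimizer are placeholder choices.

    # Minimal sketch of the two-step scheme (illustrative choices throughout).
    import numpy as np
    from scipy.optimize import minimize
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.neighbors import KNeighborsRegressor

    def fit_candidates(X1, y1):
        """Step 1 (sample 1): residual-based candidate variance estimators."""
        mean_fit = RandomForestRegressor(random_state=0).fit(X1, y1)
        sq_res = (y1 - mean_fit.predict(X1)) ** 2        # squared residuals
        candidates = [KNeighborsRegressor(n_neighbors=k).fit(X1, sq_res)
                      for k in (5, 20, 50)]
        return mean_fit, candidates

    def aggregate(X2, y2, mean_fit, candidates, convex=True):
        """Step 2 (sample 2): MS (selection) or C (convex combination)."""
        sq_res = (y2 - mean_fit.predict(X2)) ** 2
        preds = np.column_stack([c.predict(X2) for c in candidates])
        if not convex:                                   # MS aggregation
            risks = ((preds - sq_res[:, None]) ** 2).mean(axis=0)
            return np.eye(len(candidates))[np.argmin(risks)]
        m = len(candidates)                              # C aggregation
        res = minimize(lambda w: ((preds @ w - sq_res) ** 2).mean(),
                       np.full(m, 1.0 / m),
                       bounds=[(0.0, 1.0)] * m,
                       constraints={"type": "eq",
                                    "fun": lambda w: w.sum() - 1.0})
        return res.x                                     # weights on the simplex

The returned weight vector defines the aggregated variance estimator x -> sum_k w_k * sigma_hat_k(x); MS aggregation returns a one-hot vector, C aggregation a point of the simplex.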
Related papers
- Unified Convergence Analysis for Score-Based Diffusion Models with Deterministic Samplers [49.1574468325115]
We introduce a unified convergence analysis framework for deterministic samplers.
Our framework achieves iteration complexity of $\tilde{O}(d^2/\epsilon)$.
We also provide a detailed analysis of Denoising Diffusion Implicit Models (DDIM)-type samplers.
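For reference, a deterministic DDIM-type update (the eta = 0 case), written in the standard notation with cumulative noise schedule $\bar\alpha_t$ and noise network $\epsilon_\theta$ rather than the paper's own notation, is

    x_{t-1} = \sqrt{\bar\alpha_{t-1}}\,\frac{x_t - \sqrt{1-\bar\alpha_t}\,\epsilon_\theta(x_t, t)}{\sqrt{\bar\alpha_t}} + \sqrt{1-\bar\alpha_{t-1}}\,\epsilon_\theta(x_t, t).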
arXiv Detail & Related papers (2024-10-18T07:37:36Z) - HJ-sampler: A Bayesian sampler for inverse problems of a stochastic process by leveraging Hamilton-Jacobi PDEs and score-based generative models [1.949927790632678]
This paper builds on the log transform known as the Cole-Hopf transform in Brownian motion contexts.
We develop a new algorithm, named the HJ-sampler, for inference for the inverse problem of a differential equation with given terminal observations.
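As a reminder of the transform referenced above (the textbook form; signs and scaling depend on the forward/backward convention, so the paper's version may differ): if $u$ solves the heat equation $\partial_t u = \tfrac{\epsilon}{2}\Delta u$ and $S = -\epsilon \log u$, then $S$ solves the viscous Hamilton-Jacobi equation

    \partial_t S + \tfrac{1}{2}\|\nabla S\|^2 = \tfrac{\epsilon}{2}\,\Delta S,

which is the kind of link between linear Kolmogorov equations and Hamilton-Jacobi PDEs that the HJ-sampler exploits.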
arXiv Detail & Related papers (2024-09-15T05:30:54Z) - Sample Complexity Characterization for Linear Contextual MDPs [67.79455646673762]
Contextual Markov decision processes (CMDPs) describe a class of reinforcement learning problems in which the transition kernels and reward functions can change over time with different MDPs indexed by a context variable.
CMDPs serve as an important framework to model many real-world applications with time-varying environments.
We study CMDPs under two linear function approximation models: Model I with context-varying representations and common linear weights for all contexts; and Model II with common representations for all contexts and context-varying linear weights.
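Schematically, and in illustrative notation rather than the paper's, the two approximation models for a context variable c can be written as

    Model I:   Q_h^c(s, a) \approx \langle \phi_c(s, a), \theta_h \rangle      (context-varying features, shared weights)
    Model II:  Q_h^c(s, a) \approx \langle \phi(s, a), \theta_h^c \rangle      (shared features, context-varying weights)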
arXiv Detail & Related papers (2024-02-05T03:25:04Z) - Heterogeneous Multi-Task Gaussian Cox Processes [61.67344039414193]
We present a novel extension of multi-task Gaussian Cox processes for modeling heterogeneous correlated tasks jointly.
A MOGP prior over the parameters of the dedicated likelihoods for classification, regression and point process tasks can facilitate sharing of information between heterogeneous tasks.
We derive a mean-field approximation to realize closed-form iterative updates for estimating model parameters.
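To fix ideas on the point-process component, the dedicated likelihood is the standard inhomogeneous Poisson likelihood with a random, GP-modulated intensity $\lambda$ (the paper's multi-task construction couples several such task likelihoods through the MOGP prior):

    p(\{x_i\}_{i=1}^{N} \mid \lambda) = \exp\Big(-\int_{\mathcal{T}} \lambda(x)\,dx\Big) \prod_{i=1}^{N} \lambda(x_i).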
arXiv Detail & Related papers (2023-08-29T15:01:01Z) - A model-free feature selection technique of feature screening and random
forest based recursive feature elimination [0.0]
We propose a model-free feature selection method for ultra-high dimensional data with mass features.
We show that the proposed method is selection consistent and $L$ consistent under weak regularity conditions.
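A two-stage pipeline in the spirit of the method above can be sketched as follows; the screening statistic (marginal correlation) and the elimination schedule are illustrative assumptions, not the paper's exact choices.

    # Assumed pipeline: marginal screening, then RF-based recursive elimination.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.feature_selection import RFE

    def screen_then_rfe(X, y, n_screen=200, n_final=20):
        # Stage 1: screening by absolute marginal correlation with y.
        corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
        screened = np.argsort(corr)[::-1][:n_screen]
        # Stage 2: random-forest-based recursive feature elimination.
        rfe = RFE(RandomForestRegressor(n_estimators=200, random_state=0),
                  n_features_to_select=n_final)
        rfe.fit(X[:, screened], y)
        return screened[rfe.support_]                # indices of retained features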
arXiv Detail & Related papers (2023-02-15T03:39:16Z) - Variable Clustering via Distributionally Robust Nodewise Regression [7.289979396903827]
We study a multi-factor block model for variable clustering and connect it to the regularized subspace clustering by formulating a distributionally robust version of the nodewise regression.
We derive a convex relaxation, provide guidance on selecting the size of the robust region, and hence the regularization weighting parameter, based on the data, and propose an ADMM algorithm for implementation.
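Schematically, plain nodewise regression regresses each variable on all the others, and the distributionally robust version replaces the empirical risk with a worst case over a neighborhood of the empirical distribution, whose radius plays the role of the regularization weight; in illustrative notation,

    \hat\beta_j = \arg\min_{\beta}\; \sup_{Q:\, D(Q, \hat P_n) \le \delta} \mathbb{E}_Q\big[(X_j - X_{-j}^{\top}\beta)^2\big].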
arXiv Detail & Related papers (2022-12-15T16:23:25Z) - CARMS: Categorical-Antithetic-REINFORCE Multi-Sample Gradient Estimator [60.799183326613395]
We propose an unbiased estimator for categorical random variables based on multiple mutually negatively correlated (jointly antithetic) samples.
CARMS combines REINFORCE with copula-based sampling to avoid duplicate samples and reduce variance, while keeping the estimator unbiased using importance sampling.
We evaluate CARMS on several benchmark datasets on a generative modeling task, as well as a structured output prediction task, and find it to outperform competing methods including a strong self-control baseline.
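For contrast, a plain multi-sample REINFORCE estimator with a leave-one-out baseline is sketched below; the samples here are independent, whereas CARMS draws them negatively correlated via a copula and reweights to stay unbiased.

    # Generic multi-sample REINFORCE with a leave-one-out baseline (not CARMS).
    import numpy as np

    def softmax(logits):
        z = np.asarray(logits, dtype=float)
        e = np.exp(z - z.max())
        return e / e.sum()

    def loo_reinforce_grad(logits, f, K=4, rng=None):
        """Unbiased gradient of E[f(z)] for z ~ Categorical(softmax(logits))."""
        rng = np.random.default_rng() if rng is None else rng
        p = softmax(logits)
        z = rng.choice(len(p), size=K, p=p)          # K independent samples
        fz = np.array([f(k) for k in z])
        baseline = (fz.sum() - fz) / (K - 1)         # leave-one-out means
        grad = np.zeros_like(p)
        for zk, fk, bk in zip(z, fz, baseline):
            score = -p.copy()
            score[zk] += 1.0                         # d log p(z=zk) / d logits
            grad += (fk - bk) * score
        return grad / K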
arXiv Detail & Related papers (2021-10-26T20:14:30Z) - Machine Learning based optimization for interval uncertainty propagation
with application to vibro-acoustic models [0.0]
Two non-intrusive uncertainty propagation approaches are proposed for the performance analysis of engineering systems.
One approach builds iteratively two distinct training datasets for evaluating separately the upper and lower bounds of the response variable.
The other builds iteratively a single training dataset.
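A stripped-down, non-iterative variant of the idea is sketched below: fit one surrogate on a space-filling sample of the interval box, then optimize it globally for the lower and upper bounds. The paper's two approaches instead enrich the training dataset(s) iteratively, targeting each bound; the surrogate and optimizer here are assumptions.

    import numpy as np
    from scipy.optimize import differential_evolution
    from sklearn.gaussian_process import GaussianProcessRegressor

    def interval_bounds(model, bounds, n_train=50, seed=0):
        """Estimate [min, max] of model(x) over the box given by `bounds`."""
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds, dtype=float).T
        X = rng.uniform(lo, hi, size=(n_train, len(bounds)))
        y = np.array([model(x) for x in X])
        gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
        surrogate = lambda x: float(gp.predict(np.atleast_2d(x))[0])
        lower = differential_evolution(surrogate, bounds, seed=seed).fun
        upper = -differential_evolution(lambda x: -surrogate(x), bounds, seed=seed).fun
        return lower, upper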
arXiv Detail & Related papers (2021-06-21T15:57:11Z) - On the Use of Minimum Penalties in Statistical Learning [2.1320960069210475]
We propose a framework to simultaneously estimate regression coefficients associated with a multivariate regression model and relationships between outcome variables.
An iterative algorithm that generalizes current state-of-the-art methods is proposed as a solution.
We extend the proposed MinPen framework to other exponential family loss functions, with a specific focus on multiple binomial responses.
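A representative joint criterion of this type, in the spirit of penalized multivariate regression with covariance estimation and not necessarily the exact MinPen objective, is

    \min_{B,\;\Omega \succ 0}\; \tfrac{1}{n}\,\mathrm{tr}\big[(Y - XB)\,\Omega\,(Y - XB)^{\top}\big] - \log\det\Omega + \lambda_1 \|\Omega\|_{1,\mathrm{off}} + \lambda_2 \|B\|_1,

which is typically solved by alternating updates of the coefficient matrix B and the precision matrix Omega.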
arXiv Detail & Related papers (2021-06-09T16:15:46Z) - Robust Finite Mixture Regression for Heterogeneous Targets [70.19798470463378]
We propose an FMR model that finds sample clusters and jointly models multiple incomplete mixed-type targets.
We provide non-asymptotic oracle performance bounds for our model under a high-dimensional learning framework.
The results show that our model can achieve state-of-the-art performance.
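The underlying finite mixture regression structure, written here for a single continuous target only to fix ideas (the paper handles multiple mixed-type, possibly missing targets), is

    p(y \mid x) = \sum_{k=1}^{K} \pi_k\, f_k\big(y \mid x^{\top}\beta_k, \sigma_k\big), \qquad \pi_k \ge 0,\ \ \sum_k \pi_k = 1.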
arXiv Detail & Related papers (2020-10-12T03:27:07Z) - Model Fusion with Kullback--Leibler Divergence [58.20269014662046]
We propose a method to fuse posterior distributions learned from heterogeneous datasets.
Our algorithm relies on a mean field assumption for both the fused model and the individual dataset posteriors.
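The identity motivating such fusion schemes, exact when the datasets are conditionally independent given the parameters (the paper's contribution is the mean-field/KL machinery for approximating it), is

    p(\theta \mid D_1, \dots, D_K) \;\propto\; p(\theta)^{\,1-K} \prod_{k=1}^{K} p(\theta \mid D_k).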
arXiv Detail & Related papers (2020-07-13T03:27:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.