Robust Estimation of the Tail Index of a Single Parameter Pareto
Distribution from Grouped Data
- URL: http://arxiv.org/abs/2401.14593v4
- Date: Wed, 21 Feb 2024 03:28:11 GMT
- Title: Robust Estimation of the Tail Index of a Single Parameter Pareto
Distribution from Grouped Data
- Authors: Chudamani Poudyal
- Abstract summary: This paper introduces a novel robust estimation technique, the Method of Truncated Moments (MTuM).
Inferential justification of MTuM is established by employing the central limit theorem and validated through a comprehensive simulation study.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Numerous robust estimators exist as alternatives to the maximum likelihood
estimator (MLE) when a completely observed ground-up loss severity sample
dataset is available. However, the options for robust alternatives to MLE
become significantly limited when dealing with grouped loss severity data, with
only a handful of methods like least squares, minimum Hellinger distance, and
optimal bounded influence function available. This paper introduces a novel
robust estimation technique, the Method of Truncated Moments (MTuM),
specifically designed to estimate the tail index of a Pareto distribution from
grouped data. Inferential justification of MTuM is established by employing the
central limit theorem and validated through a comprehensive simulation study.
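For context on the grouped-data setting the abstract describes, the following is a minimal sketch, not the paper's MTuM estimator (its truncated-moment formulas are defined in the paper itself). It fits the tail index alpha of a single-parameter Pareto, F(x) = 1 - (x0/x)^alpha for x >= x0, by plain grouped-data maximum likelihood over hypothetical bin edges and simulated counts:

```python
import math
import random

# Sketch: grouped-data MLE for the tail index alpha of a single-parameter
# Pareto, F(x) = 1 - (x0/x)^alpha, x >= x0. Bin edges and sample sizes are
# illustrative assumptions, not taken from the paper.

def cell_prob(alpha, x0, lo, hi):
    """P(lo < X <= hi) under Pareto(alpha, x0); hi may be math.inf."""
    s_lo = (x0 / lo) ** alpha
    s_hi = 0.0 if math.isinf(hi) else (x0 / hi) ** alpha
    return s_lo - s_hi

def grouped_loglik(alpha, x0, edges, counts):
    """Multinomial log-likelihood of the observed bin counts."""
    ll = 0.0
    for (lo, hi), n in zip(zip(edges[:-1], edges[1:]), counts):
        if n > 0:
            ll += n * math.log(cell_prob(alpha, x0, lo, hi))
    return ll

def grouped_mle(x0, edges, counts, lo=0.05, hi=20.0, iters=80):
    """Golden-section search for the alpha maximizing the grouped log-likelihood."""
    gr = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - gr * (b - a), a + gr * (b - a)
        if grouped_loglik(c, x0, edges, counts) < grouped_loglik(d, x0, edges, counts):
            a = c  # maximum lies to the right of c
        else:
            b = d  # maximum lies to the left of d
    return (a + b) / 2

if __name__ == "__main__":
    random.seed(0)
    x0, alpha_true = 1.0, 2.5
    # Inverse-CDF sampling: X = x0 * U^(-1/alpha)
    sample = [x0 / random.random() ** (1 / alpha_true) for _ in range(20000)]
    edges = [1.0, 1.5, 2.0, 3.0, 5.0, 10.0, math.inf]
    counts = [sum(lo < x <= hi for x in sample)
              for lo, hi in zip(edges[:-1], edges[1:])]
    print(round(grouped_mle(x0, edges, counts), 2))
```

Only the bin counts reach the estimator, mirroring the grouped loss-severity setting; the MLE is the non-robust baseline that MTuM is positioned against.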
Related papers
- Distributionally Robust Optimization as a Scalable Framework to Characterize Extreme Value Distributions [22.765095010254118]
The goal of this paper is to develop distributionally robust optimization (DRO) estimators, specifically for multidimensional Extreme Value Theory (EVT) statistics.
In order to mitigate over-conservative estimates while enhancing out-of-sample performance, we study DRO estimators informed by semi-parametric max-stable constraints in the space of point processes.
Both approaches are validated using synthetically generated data, recovering prescribed characteristics, and verifying the efficacy of the proposed techniques.
arXiv Detail & Related papers (2024-07-31T19:45:27Z) - Geometry-Aware Instrumental Variable Regression [56.16884466478886]
We propose a transport-based IV estimator that takes into account the geometry of the data manifold through data-derivative information.
We provide a simple plug-and-play implementation of our method that performs on par with related estimators in standard settings.
arXiv Detail & Related papers (2024-05-19T17:49:33Z) - Tailoring Language Generation Models under Total Variation Distance [55.89964205594829]
The standard paradigm of neural language generation adopts maximum likelihood estimation (MLE) as the optimizing method.
We develop practical bounds to apply total variation distance (TVD) to language generation.
We introduce the TaiLr objective that balances the tradeoff of estimating TVD.
arXiv Detail & Related papers (2023-02-26T16:32:52Z) - Finite-Sample Guarantees for High-Dimensional DML [0.0]
This paper gives novel finite-sample guarantees for joint inference on high-dimensional DML.
These guarantees are useful to applied researchers, as they are informative about how far off the coverage of joint confidence bands can be from the nominal level.
arXiv Detail & Related papers (2022-06-15T08:48:58Z) - Keep it Tighter -- A Story on Analytical Mean Embeddings [0.6445605125467574]
Kernel techniques are among the most popular and flexible approaches in data science.
Mean embedding gives rise to a divergence measure referred to as maximum mean discrepancy (MMD).
In this paper we focus on the problem of MMD estimation when the mean embedding of one of the underlying distributions is available analytically.
arXiv Detail & Related papers (2021-10-15T21:29:27Z) - Estimation of Local Average Treatment Effect by Data Combination [3.655021726150368]
It is important to estimate the local average treatment effect (LATE) when compliance with a treatment assignment is incomplete.
Previously proposed methods for LATE estimation required all relevant variables to be jointly observed in a single dataset.
We propose a weighted least squares estimator that enables simpler model selection by avoiding the minimax objective formulation.
arXiv Detail & Related papers (2021-09-11T03:51:48Z) - Statistical Analysis of Wasserstein Distributionally Robust Estimators [9.208007322096535]
We consider statistical methods which invoke a min-max distributionally robust formulation to extract good out-of-sample performance in data-driven optimization and learning problems.
The resulting Distributionally Robust Optimization (DRO) formulations are specified using optimal transportation phenomena.
This tutorial is devoted to insights into the nature of the adversarials selected by the min-max formulations and additional applications of optimal transport projections.
arXiv Detail & Related papers (2021-08-04T15:45:47Z) - Near-optimal inference in adaptive linear regression [60.08422051718195]
Even simple methods like least squares can exhibit non-normal behavior when data is collected in an adaptive manner.
We propose a family of online debiasing estimators to correct these distributional anomalies in least squares estimation.
We demonstrate the usefulness of our theory via applications to multi-armed bandit, autoregressive time series estimation, and active learning with exploration.
arXiv Detail & Related papers (2021-07-05T21:05:11Z) - Entropy Minimizing Matrix Factorization [102.26446204624885]
Nonnegative Matrix Factorization (NMF) is a widely-used data analysis technique, and has yielded impressive results in many real-world tasks.
In this study, an Entropy Minimizing Matrix Factorization framework (EMMF) is developed to tackle the above problem.
Considering that the outliers are usually much less than the normal samples, a new entropy loss function is established for matrix factorization.
arXiv Detail & Related papers (2021-03-24T21:08:43Z) - Sparse Feature Selection Makes Batch Reinforcement Learning More Sample Efficient [62.24615324523435]
This paper provides a statistical analysis of high-dimensional batch Reinforcement Learning (RL) using sparse linear function approximation.
When there is a large number of candidate features, our result sheds light on the fact that sparsity-aware methods can make batch RL more sample efficient.
arXiv Detail & Related papers (2020-11-08T16:48:02Z) - GenDICE: Generalized Offline Estimation of Stationary Values [108.17309783125398]
We show that effective estimation can still be achieved in important applications.
Our approach is based on estimating a ratio that corrects for the discrepancy between the stationary and empirical distributions.
The resulting algorithm, GenDICE, is straightforward and effective.
arXiv Detail & Related papers (2020-02-21T00:27:52Z)
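One of the entries above concerns MMD estimation with analytical mean embeddings. For context only, here is a minimal sketch of the standard empirical (V-statistic) estimate of squared MMD with a Gaussian kernel; the kernel, bandwidth, and samples are illustrative assumptions, not that paper's method:

```python
import math
import random

# Sketch: biased (V-statistic) estimate of squared maximum mean
# discrepancy (MMD^2) between two 1-D samples with a Gaussian kernel.

def gauss_kernel(x, y, bw=1.0):
    """Gaussian kernel k(x, y) = exp(-(x - y)^2 / (2 * bw^2))."""
    return math.exp(-((x - y) ** 2) / (2 * bw ** 2))

def mmd_sq(xs, ys, bw=1.0):
    """MMD^2 = E k(x,x') + E k(y,y') - 2 E k(x,y), with plug-in averages."""
    kxx = sum(gauss_kernel(a, b, bw) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(gauss_kernel(a, b, bw) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(gauss_kernel(a, b, bw) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2 * kxy

if __name__ == "__main__":
    random.seed(1)
    same = [random.gauss(0, 1) for _ in range(200)]
    shifted = [random.gauss(2, 1) for _ in range(200)]
    print(mmd_sq(same, same))    # exactly 0 for identical samples
    print(mmd_sq(same, shifted)) # clearly positive for shifted samples
```

The analytical-mean-embedding setting replaces one of the plug-in averages with a closed-form expectation, which is what tightens the estimate in that paper.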
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.