TY - JOUR
T1 - A note on the asymptotic variance at optimal levels of a bias-corrected Hill estimator
AU - Caeiro, Frederico Almeida Gião Gonçalves
PY - 2009/1/1
Y1 - 2009/1/1
N2 - For heavy tails, with a positive tail index gamma, classical tail index estimators, like the Hill estimator, are known to be quite sensitive to the number of top order statistics k used in the estimation, whereas second-order reduced-bias estimators show much less sensitivity to changes in k. In the recent minimum-variance reduced-bias (MVRB) tail index estimators, the estimation of the second-order parameters in the bias has been performed at a level k1 of a larger order than that of the level k at which we compute the tail index estimators. Such a procedure enables us to keep the asymptotic variance of the new estimators equal to the asymptotic variance of the Hill estimator, for all k at which we can guarantee the asymptotic normality of the Hill statistics. These values of k, as well as larger values of k, also enable us to guarantee the asymptotic normality of the reduced-bias estimators, but, to reach the minimal mean squared error of these MVRB estimators, we need to work with levels k and k1 of the same order. In this note we derive the way the asymptotic variance varies as a function of q, the finite limiting value of k/k1, as the sample size n increases to infinity.
KW - tail index
U2 - 10.1016/j.spl.2008.08.016
DO - 10.1016/j.spl.2008.08.016
M3 - Article
SN - 0167-7152
VL - 79
SP - 295
EP - 303
JO - Statistics & Probability Letters
JF - Statistics & Probability Letters
IS - 3
ER -