A note on the asymptotic variance at optimal levels of a bias-corrected Hill estimator

Research output: Contribution to journal › Article › peer-review

20 Citations (Scopus)


For heavy tails, with a positive tail index gamma, classical tail index estimators, like the Hill estimator, are known to be quite sensitive to the number of top order statistics k used in the estimation, whereas second-order reduced-bias estimators show much less sensitivity to changes in k. In the recent minimum-variance reduced-bias (MVRB) tail index estimators, the estimation of the second-order parameters in the bias has been performed at a level k(1) of a larger order than that of the level k at which we compute the tail index estimators. Such a procedure enables us to keep the asymptotic variance of the new estimators equal to the asymptotic variance of the Hill estimator, for all k at which we can guarantee the asymptotic normality of the Hill statistics. These values of k, as well as larger values of k, will also enable us to guarantee the asymptotic normality of the reduced-bias estimators, but, to reach the minimal mean squared error of these MVRB estimators, we need to work with levels k and k(1) of the same order. In this note we derive the way the asymptotic variance varies as a function of q, the finite limiting value of k/k(1), as the sample size n increases to infinity. (C) 2008 Elsevier B.V. All rights reserved.
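The Hill estimator discussed in the abstract is the average log-spacing of the top k order statistics. A minimal sketch in Python (not from the paper; the simulation setup, sample size, and choices of k are illustrative assumptions) shows how the estimate depends on k for a Pareto sample with known tail index gamma:

```python
import math
import random

def hill_estimator(sample, k):
    """Hill estimator of the tail index gamma:
    the mean of log(X_{n-i+1:n}) - log(X_{n-k:n}) over i = 1, ..., k."""
    xs = sorted(sample, reverse=True)          # descending order statistics
    log_top = [math.log(x) for x in xs[: k + 1]]
    return sum(log_top[i] - log_top[k] for i in range(k)) / k

random.seed(0)
gamma = 0.5                                    # true (assumed) tail index
n = 5000
# If U is uniform(0,1), then U**(-gamma) is Pareto with tail index gamma.
sample = [random.random() ** (-gamma) for _ in range(n)]

for k in (50, 200, 1000):                      # illustrative levels k
    print(k, hill_estimator(sample, k))
```

For an exact Pareto model the Hill estimator is unbiased and its asymptotic variance is gamma²/k, so the estimates stabilize around gamma as k grows; for models with a non-trivial second-order term, the bias discussed in the abstract grows with k, which is what the reduced-bias MVRB construction addresses.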
Original language: Unknown
Pages (from-to): 295-303
Journal: Statistics & Probability Letters
Issue number: 3
Publication status: Published - 1 Jan 2009
