Jackknife Empirical Likelihood Methods

Since Jing et al. [1] proposed the jackknife empirical likelihood method for U-statistics, the method has been developed and applied to inference problems in statistical theory and in other areas such as biostatistics, medical statistics, and insurance. In this short review paper, we give an introduction to one-sample and two-sample jackknife empirical likelihood methods and smoothed jackknife empirical likelihood methods, and present a brief literature review on applications of these methods.


Introduction
The study of the empirical likelihood method (ELM) dates back to Thomas & Grunkemeier [2], who introduced a nonparametric likelihood ratio test statistic for constructing confidence intervals for the survival probability with right-censored data. The method has drawn much attention and, following the work by Owen [3,4] on mean vectors and functionals, has been extensively investigated and widely used to construct confidence regions and to test hypotheses. The empirical likelihood is a nonparametric method, but it possesses some good properties of parametric likelihood methods, and it has many advantages over some classical and modern methods, such as the normal-approximation-based method and the bootstrap method. Computing the empirical likelihood ratio involves optimizing the likelihood function over n parameters, where n is the sample size. For linear functionals or estimating functions, the optimization problem can be easily solved by the method of Lagrange multipliers. The major obstacle to extending the ELM to nonlinear functionals or nonlinear statistics is the difficulty of the resulting nonlinear optimization. Research in this direction has been greatly stimulated since Jing et al. [1] proposed the jackknife empirical likelihood (JEL) method for U-statistics. The main idea of the JEL method is to apply the ELM to the jackknife sample, so that the parameters of interest become the means of the jackknife pseudo-values. For the jackknife method, see, e.g., Shao & Tu [5].
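The jackknife step underlying the JEL method can be illustrated with a short sketch. The code below (Python; the helper `jackknife_pseudo_values` and the Gini mean difference example are our own illustrative choices, not from the paper) computes the pseudo-values t_i = n T_n − (n − 1) T_{n−1}^{(−i)}; for a U-statistic, the average of the pseudo-values recovers the original statistic exactly.

```python
import numpy as np

def jackknife_pseudo_values(x, estimator):
    """Jackknife pseudo-values t_i = n*T_n - (n - 1)*T_{n-1}^{(-i)}."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    t_full = estimator(x)
    # Leave one observation out at a time and recompute the estimator.
    return np.array([n * t_full - (n - 1) * estimator(np.delete(x, i))
                     for i in range(n)])

def gini_mean_difference(x):
    """U-statistic with kernel h(x, y) = |x - y| (illustrative choice)."""
    n = len(x)
    return np.abs(x[:, None] - x[None, :]).sum() / (n * (n - 1))

rng = np.random.default_rng(0)
x = rng.normal(size=50)
t = jackknife_pseudo_values(x, gini_mean_difference)
# For a U-statistic, the mean of the pseudo-values equals the statistic itself.
```

Treating t_1, …, t_n as an approximately independent sample is exactly what lets the standard empirical likelihood machinery be applied.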

Jackknife empirical likelihood method
Suppose that an estimator T_n of the parameter θ satisfies the central limit theorem

√n (T_n − θ) → N(0, Σ) in distribution,  (2.1)

for some positive definite matrix Σ. The so-called jackknife pseudo-values are defined as t_i = n T_n − (n − 1) T_{n−1}^{(−i)}, i = 1, …, n, where T_{n−1}^{(−i)} denotes the estimator computed from the sample with the i-th observation deleted. As in Tukey [6], one expects that the t_i's are approximately independent. This motivates us to apply the standard empirical likelihood method to the jackknife sample {t_1, …, t_n} for constructing empirical likelihood confidence regions for θ. The jackknife empirical likelihood ratio function is defined as

R(θ) = max{ ∏_{i=1}^n n p_i : p_i ≥ 0, ∑_{i=1}^n p_i = 1, ∑_{i=1}^n p_i t_i = θ },

and, under (2.1) with suitable regularity conditions, Wilks' theorem holds: −2 log R(θ_0) converges in distribution to a chi-square distribution at the true value θ_0, which yields confidence regions for θ without estimating Σ.
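In the standard construction, the JEL ratio maximizes ∏ n p_i subject to ∑ p_i = 1 and ∑ p_i t_i = θ, which reduces, via Lagrange multipliers, to a one-dimensional root-finding problem. A minimal numerical sketch (Python, numpy only; `jel_log_ratio` is a hypothetical helper name): it solves ∑ z_i / (1 + λ z_i) = 0 with z_i = t_i − θ by Newton's method and returns −2 log R(θ).

```python
import numpy as np

def jel_log_ratio(t, theta, tol=1e-10, max_iter=100):
    """Return -2 log R(theta) for jackknife pseudo-values t.

    Solves the Lagrange multiplier equation
        sum((t_i - theta) / (1 + lam * (t_i - theta))) = 0
    by Newton's method; a production version would add step halving to
    keep every 1 + lam * (t_i - theta) strictly positive.
    """
    z = np.asarray(t, dtype=float) - theta
    if z.min() >= 0 or z.max() <= 0:
        return np.inf  # theta lies outside the convex hull of the t_i
    lam = 0.0
    for _ in range(max_iter):
        w = z / (1.0 + lam * z)
        step = np.sum(w) / -np.sum(w ** 2)  # Newton step g / g'
        lam -= step
        if abs(step) < tol:
            break
    return 2.0 * np.sum(np.log1p(lam * z))

pseudo = np.arange(10.0)                             # toy pseudo-values
ratio_at_mean = jel_log_ratio(pseudo, pseudo.mean()) # zero at the sample mean
```

By Wilks' theorem, an approximate 95% confidence interval for a scalar θ collects the values with −2 log R(θ) ≤ 3.84, the 0.95 quantile of the chi-square distribution with one degree of freedom.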

Smoothed JEL
If functionals of the empirical distributions are used to estimate the parameters of interest and the asymptotic covariance matrices in (2.1) depend on local properties, such as the density functions of the underlying distributions, then the sample covariance matrices based on the resulting jackknife samples are usually not consistent estimates of the asymptotic covariances in the central limit theorem (2.1), and Wilks' theorem fails to hold. In this case, one should consider smoothed JEL methods. The so-called smoothed JEL uses functionals of smoothed empirical distributions for F and G to generate the jackknife samples, which overcomes the problem of variance inconsistency.
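As a sketch of the smoothing idea (the Gaussian integrated kernel, the bandwidth h, and the helper name `smoothed_ecdf` are our illustrative choices), one replaces the indicator 1{X_i ≤ x} in the empirical distribution by Φ((x − X_i)/h), where Φ is the standard normal distribution function:

```python
import numpy as np
from math import erf, sqrt

def smoothed_ecdf(data, h):
    """Kernel-smoothed empirical CDF: F_h(x) = mean of Phi((x - X_i) / h),
    where Phi is the standard normal CDF (the integrated Gaussian kernel)."""
    data = np.asarray(data, dtype=float)

    def F(x):
        return float(np.mean([0.5 * (1.0 + erf((x - xi) / (h * sqrt(2.0))))
                              for xi in data]))

    return F

F = smoothed_ecdf([0.0, 1.0], h=0.1)  # smooth version of a two-point ECDF
```

The smoothed JEL then applies the same jackknife construction to functionals of the smoothed distributions rather than the raw empirical distributions; the choice of bandwidth matters for restoring consistency of the variance.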

Applications
In addition to the work by Jing et al. [1] on one- and two-sample U-statistics, there are a number of applications of the standard JEL methods in statistics. Wang et al. [7] propose a JEL-based test for the equality of two high-dimensional means, and Zhang et al. [8] investigate the population mean with ranked set samples. JEL-based confidence intervals for the mean absolute deviation and for the difference of two Gini indices are studied by Zhao et al. [9] and Wang & Zhao [10], respectively. The JEL methods have been applied to many problems in insurance and actuarial science. For instance, JEL-based confidence intervals for copulas are proposed by Peng et al. [11] and Wang et al. [12]. Wilks' theorem for JEL methods for Spearman's rho and for a class of risk measures is proved by Wang & Peng [13] and Peng et al. [14], respectively. For tail copulas and the difference of two quantiles, Wilks' theorem is shown to be valid for smoothed JEL methods by Peng & Qi [15] and Yang & Zhao [16,17].
In diagnostic medicine, the accuracy of a diagnostic test in discriminating diseased patients from non-diseased ones is measured by the receiver operating characteristic (ROC) curve when the response of the test is continuous. Let F and G be the distribution functions of the diseased and non-diseased populations, respectively. Then the ROC curve can be written as ROC(t) = 1 − F(G^{−1}(1 − t)), 0 < t < 1, and Yang and Zhao [19] extend the JEL method for the ROC curve to the setup of missing data. Adimari & Chiogna [20] and An & Zhao [21] employ JEL methods to construct confidence intervals for partial areas under ROC curves and for the difference of two volumes under ROC surfaces, respectively. JEL-based confidence regions for quantities in sensitivity and specificity for continuous-scale diagnostic tests are investigated in Wang & Qin [22]. To construct confidence regions when many nuisance parameters are present, a profile empirical likelihood method has to be employed, which is computationally costly in general. Li et al. [23] and Peng [24] propose JEL methods for estimating equations to avoid the heavy computational burden. Further, Zhang et al. [25] propose jackknife-blockwise empirical likelihood methods for data under dependence.
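A minimal empirical sketch of the ROC representation ROC(t) = 1 − F(G^{−1}(1 − t)), where t is the false positive rate (the helper name is hypothetical, and np.quantile's default linear interpolation stands in for one of several conventions for the empirical G^{−1}):

```python
import numpy as np

def empirical_roc(diseased, non_diseased, t):
    """Empirical ROC(t) = 1 - F(G^{-1}(1 - t)): the sensitivity achieved when
    the threshold is the (1 - t)-quantile of the non-diseased scores."""
    x = np.asarray(diseased, dtype=float)
    y = np.asarray(non_diseased, dtype=float)
    c = np.quantile(y, 1.0 - t)    # threshold G^{-1}(1 - t)
    return float(np.mean(x > c))   # 1 - F(c)

# Perfectly separated samples: ROC(t) = 1 for every t > 0.
perfect = empirical_roc([10.0, 11.0, 12.0], [0.0, 1.0, 2.0], 0.25)
# Identical samples: the chance diagonal, ROC(t) = t.
chance = empirical_roc([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 2.0, 3.0], 0.5)
```

Since the ROC functional depends on the quantile G^{−1}, and hence on the underlying densities, this is precisely the kind of setting where the smoothed JEL of the previous section is needed for Wilks' theorem to hold.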
JEL methods have also been applied in regression models. JEL-based confidence intervals for the regression parameters in the accelerated failure time model with censored observations and in linear transformation models under right censoring are studied by Bouadoumou et al. [26] and Yang et al. [27], respectively. JEL-based confidence intervals for the error variances in linear models and in partially linear varying-coefficient errors-in-variables models are investigated by Lin et al. [28] and Liu and Liang [29]. JEL-based intervals for the mean with regression imputation are considered by Zhong & Chen [30]. In summary, a common feature of these applications is that the JEL methods maintain the advantages of empirical likelihood methods over normal approximation methods and perform very well for small samples [31]. Computationally, JEL methods are easy and straightforward even for complicated statistical problems.