Statistica Sinica: Volume 27, Number 2, April 2017
http://www3.stat.sinica.edu.tw/statistica/
Fri, 17 March 2017 00:01:00 +0000
/statistica/J27N2/J27N21/J27N21.html
ESTIMATING COMPONENT RELIABILITY BASED ON FAILURE TIME DATA FROM A SYSTEM OF UNKNOWN DESIGN Y. Jin, Peter Gavin Hall, Jiming Jiang and Francisco J. Samaniego 479-499<span style='font-size:12pt;'><center>Abstract</center> Suppose that identical systems are tested until failure and that each system is based on components whose lifetimes are independently and identically distributed with a common continuous distribution function F and survival function S = 1 - F. Under the assumption that the system design is known, Bhattacharya and Samaniego (2010) obtained the nonparametric maximum likelihood estimate of S based on the observed system failure times and characterized its asymptotic behavior. The estimator studied in that paper has the form h^{-1}(S_n), where h is the system reliability polynomial (see Barlow and Proschan (1981)) and S_n is the empirical survival function of the system lifetimes. To treat this estimation problem when the system design is unknown, the design must be estimated from data. In this paper, we assume that auxiliary data in the form of a variable M, the number of failed components at the time of system failure, is available along with the system lifetime. Such data is typically available from a subsequent autopsy. The problem considered here is motivated by the fact that component reliability under field conditions is often not easily estimated through controlled laboratory tests. The data permits the estimation of the reliability polynomial (through the use of "system signatures"; see Samaniego (2007)). Denoting the estimated polynomial as h_n, we study the properties of the estimator h_n^{-1}(S_n). Our main results include: (1) h_n^{-1}(S_n) is a root-n-consistent estimator of the component survival function S; (2) the asymptotic distribution of h_n^{-1}(S_n) is normal and its asymptotic variance is given in closed form; and (3) the asymptotic variance of h_n^{-1}(S_n), based on the augmented data, is uniformly no greater than the asymptotic variance of h^{-1}(S_n), based on the system lifetimes alone and the assumption that h is known. 
This latter, perhaps surprising, result is confirmed in a variety of simulations and is illuminated through heuristic considerations and further analysis.<p>Key words and phrases: Asymptotic efficiency, asymptotic normality, coherent system, component and system reliability, consistency, nonparametric estimation, NPMLE, nuisance parameter, system signature.</span>
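For readers who want a concrete sense of the plug-in scheme described in the abstract above, the following is a minimal sketch (not the authors' code), assuming a hypothetical 2-out-of-3 system so that the reliability polynomial is h(p) = 3p^2 - 2p^3; the component survival estimate is obtained by pushing the empirical system survival function through the inverse of h.

```python
# Illustrative sketch only: a 2-out-of-3 system is an assumption, as are the
# sample lifetimes below; the paper's setting replaces h by an estimate h_n.

def h(p):
    """System reliability polynomial of a 2-out-of-3 system."""
    return 3 * p**2 - 2 * p**3

def h_inverse(s, tol=1e-10):
    """Invert h on [0, 1] by bisection; h is increasing there."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if h(mid) < s:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def component_survival(t, system_lifetimes):
    """Plug-in estimate: empirical system survival pushed through h^{-1}."""
    n = len(system_lifetimes)
    s_hat = sum(1 for x in system_lifetimes if x > t) / n
    return h_inverse(s_hat)

lifetimes = [1.2, 0.7, 2.5, 1.9, 0.4, 3.1, 1.1, 2.2]   # made-up data
estimate = component_survival(1.0, lifetimes)
```

Because h is strictly increasing on [0, 1], the bisection inversion is well defined; when the design is unknown, h would itself be estimated from the autopsy data, which this sketch does not attempt.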
/statistica/J27N2/J27N210/J27N210.html
ON CONSTRUCTION OF MARGINALLY COUPLED DESIGNS Yuanzhen He, C. Devon Lin and Fasheng Sun 665-683<span style='font-size:12pt;'><center>Abstract</center> Intended for computer experiments with both qualitative and quantitative factors, marginally coupled designs were introduced by Deng, Hung and Lin (2015) as a more economical strategy than the original, sliced space-filling designs. Among the designs constructed in Deng, Hung and Lin (2015), the corresponding designs for quantitative factors possess only the one-dimensional space-filling property with respect to each level of any factor in the designs for qualitative factors. In addition, their designs for quantitative factors have clustered points. To avoid clustered points and enhance the two- and higher-dimensional space-filling properties of the designs for quantitative factors, we propose three approaches to constructing marginally coupled designs. Theoretical results on marginally coupled designs are also derived. <p>Key words and phrases: Cascading Latin hypercube, completely resolvable, Latin hypercube, orthogonal array, projection, Rao-Hamming construction, space-filling.</span>
/statistica/J27N2/J27N211/J27N211.html
RANDOM THRESHOLD DRIVEN TAIL DEPENDENCE MEASURES WITH APPLICATION TO PRECIPITATION DATA ANALYSIS Zhengjun Zhang, Chunming Zhang and Qiurong Cui 685-709<span style='font-size:12pt;'><center>Abstract</center> This paper first studies the theoretical properties of the tail quotient correlation coefficient (TQCC), which was proposed to measure tail dependence between two random variables. By introducing random thresholds in TQCC, an approximation theory between conditional tail probabilities is established. The new random threshold-driven TQCC can be used to test the null hypothesis of tail independence, under which the TQCC test statistics are shown to follow a Chi-squared distribution in two general scenarios. The TQCC is shown to be consistent under the alternative hypothesis of tail dependence with a general approximation of max-stable distributions. Second, we apply TQCC to investigate tail dependencies in a large-scale problem of daily precipitation in the continental US. Our results, from the perspective of tail dependence, reveal nonstationarity, spatial clusters, and tail dependence in precipitation across the continental US. <p>Key words and phrases: Climate extremes, conditional tail probability approximation, extreme value theory, hypothesis testing, nonlinear dependence.</span>
/statistica/J27N2/J27N212/J27N212.html
PENALIZED LIKELIHOOD FOR LOGISTIC-NORMAL MIXTURE MODELS WITH UNEQUAL VARIANCES Juan Shen, Yingchuan Wang and Xuming He 711-731<span style='font-size:12pt;'><center>Abstract</center> Subgroup analysis with unspecified subgroup memberships has received increasing attention in recent years. In Shen and He (2015), a structured logistic-normal mixture model was proposed to characterize the subgroup distributions and the subgroup membership simultaneously, but under the assumption that the subgroups differ only in the means. In this paper, we consider a penalized likelihood approach for more general cases with heterogeneous subgroup variances. Despite substantial technical complications in the development of the statistical theory, we show that the penalized likelihood inference for the existence of subgroups and for the estimation of subgroup membership can be carried out in the existing framework. Empirical results with a simulation study and two data examples demonstrate the usefulness of the proposed method. <p>Key words and phrases: EM algorithm; heterogeneous components; homogeneity test; likelihood ratio test; mixture models; subgroup identification.</span>
/statistica/J27N2/J27N213/J27N213.html
A FINITE MIXTURE OF MULTIVARIATE t LINEAR MIXED-EFFECTS MODELS FOR CLUSTERING AND CLASSIFICATION OF LONGITUDINAL DATA<span style='font-size:12pt;'><center>Abstract</center> The issues of model-based clustering and classification of longitudinal data have received increasing attention in recent years. In this paper, we propose a finite mixture of multivariate t linear mixed-effects models (FM-MtLMM) for analyzing longitudinally measured multi-outcome data arising from more than one heterogeneous sub-population. The motivation behind this work comes from a cohort study of patients with primary biliary cirrhosis, where the interest is in classifying new patients into two or more prognostic groups on the basis of their longitudinally observed bilirubin and albumin levels. The proposed FM-MtLMM offers robustness and flexibility to accommodate fat tails or atypical observations contained in one or several of the groups. An efficient alternating expectation conditional maximization (AECM) algorithm is employed for the computation of maximum likelihood estimates of parameters. The calculation of standard errors is effected by an information-based method. Practical techniques for clustering of multivariate longitudinal data, estimation of random effects, and classification of future patients are also provided. The methodology is illustrated by analyzing the Mayo Clinic Primary Biliary Cirrhosis sequential (PBCseq) data and a simulation study. <p>Key words and phrases: AECM algorithm, clustering multiple longitudinal profiles, heavy-tailed distribution, maximum likelihood estimation, mixture modeling.</span>
/statistica/J27N2/J27N214/J27N214.html
HYPOTHESIS TESTING IN THE PRESENCE OF MULTIPLE SAMPLES UNDER DENSITY RATIO MODELS Song Cai, Jiahua Chen and James V. Zidek 761-783<span style='font-size:12pt;'><center>Abstract</center> This paper presents a hypothesis testing method given independent samples from a number of connected populations. The method is motivated by a forestry project for monitoring change in the strength of lumber. Traditional practice has been built upon nonparametric methods which ignore the fact that these populations are connected. By pooling the information in multiple samples through a density ratio model, the proposed empirical likelihood method leads to more efficient inferences and therefore reduces the cost in applications. The new test has a classical chi-square null limiting distribution. Its power function is obtained under a class of local alternatives. The local power is found to increase even when some underlying populations are unrelated to the hypothesis of interest. Simulation studies confirm that this test has better power properties than potential competitors, and is robust to model misspecification. An application example to lumber strength is included. <p>Key words and phrases: Dual empirical likelihood, empirical likelihood ratio test, information pooling, local power, long term monitoring, lumber quality, semiparametric inference.</span>
/statistica/J27N2/J27N215/J27N215.html
CONTROL FUNCTION ASSISTED IPW ESTIMATION WITH A SECONDARY OUTCOME IN CASE-CONTROL STUDIES Tamar Sofer, Marilyn C. Cornelis, Peter Kraft and Eric J. Tchetgen Tchetgen 785-804<span style='font-size:12pt;'><center>Abstract</center> Case-control studies are designed to study associations between risk factors and a single, primary outcome. Information about additional, secondary outcomes is also collected, but association studies targeting such secondary outcomes should account for the case-control sampling scheme, or otherwise results may be biased. Often, one uses inverse probability weighted (IPW) estimators to estimate population effects in such studies. IPW estimators are robust, as they only require correct specification of the mean regression model of the secondary outcome on covariates and knowledge of the disease prevalence. However, IPW estimators are inefficient relative to estimators that make additional assumptions about the data generating mechanism. We propose a class of estimators for the effect of risk factors on a secondary outcome in case-control studies that combine IPW with an additional modeling assumption: specification of the disease outcome probability model. We incorporate this model via a mean zero control function. We derive the class of all regular and asymptotically linear estimators corresponding to our modeling assumption when the secondary outcome mean is modeled using either the identity or the log link. We find the efficient estimator in our class of estimators and show that it reduces to standard IPW when the model for the primary disease outcome is unrestricted, and is more efficient than standard IPW when the model is either parametric or semiparametric. <p>Key words and phrases: Case-control study, genetic association studies, inverse probability weighting, semiparametric inference.</span>
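As background for the weighting idea in the abstract above, here is a minimal sketch of plain IPW (not the paper's control-function estimator), assuming a known disease prevalence and a simple linear mean model for the secondary outcome: each subject is weighted by the ratio of the population frequency to the sample frequency of their disease status, which undoes the oversampling of cases.

```python
# Illustrative sketch only; all data and the linear mean model are assumptions.

def ipw_weights(disease, prevalence):
    """Weight = P(D = d in population) / P(D = d in sample) for each subject."""
    n = len(disease)
    case_frac = sum(disease) / n
    return [prevalence / case_frac if d == 1
            else (1 - prevalence) / (1 - case_frac)
            for d in disease]

def weighted_slope(x, y, w):
    """Weighted least-squares slope for the mean model E[Y | X] = a + b X."""
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    num = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
    den = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    return num / den

# Toy usage: half the sample are cases even though prevalence is only 10%.
disease = [1, 1, 0, 0]
weights = ipw_weights(disease, prevalence=0.10)
```

The paper's contribution is to augment this weighted estimating equation with a mean zero control function built from a model for the disease outcome, which can only improve efficiency; that augmentation is not shown here.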
/statistica/J27N2/J27N216/J27N216.html
ON SOME MATÉRN COVARIANCE FUNCTIONS FOR SPATIO-TEMPORAL RANDOM FIELDS Ryan H. L. Ip and W. K. Li 805-822<span style='font-size:12pt;'><center>Abstract</center> The Matérn class is an important class of covariance functions in spatial statistics. With the recent flourishing trend in modelling spatio-temporal data, in-depth theoretical development of spatio-temporal covariograms is needed. In this paper, theories under the infill asymptotic framework concerning estimation issues for a generally non-separable Matérn class of spatio-temporal covariance functions are presented. It is found that not all parameters can be estimated consistently, and the quantities that can be estimated consistently are identified based on equivalence and orthogonality of Gaussian measures. The micro-ergodic parameters are found to differ when the degrees of separability between the space and time components differ. For the computation, an easy-to-implement estimation procedure is given. Simulation studies are conducted to show how well the asymptotic results apply when the sample size is moderate. A set of air pollution data is used to demonstrate the usefulness of the suggested estimation procedure. <p>Key words and phrases: Gaussian measures, infill asymptotics, micro-ergodic parameters, space-time data.</span>
/statistica/J27N2/J27N217/J27N217.html
D-OPTIMALITY OF GROUP TESTING FOR JOINT ESTIMATION OF CORRELATED RARE DISEASES WITH MISCLASSIFICATION Qizhai Li, Aiyi Liu and Wenjun Xiong 823-838<span style='font-size:12pt;'><center>Abstract</center> The D-optimal criterion is used to derive an optimality property of group testing in estimation of the prevalence of two rare correlated diseases when the disease statuses are classified with error. Exact ranges of disease prevalence are obtained in which group testing is more efficient than conventional methods of random sampling. <p>Key words and phrases: Binary outcomes, classification error, D-optimal criterion, group testing, maximum likelihood estimate, prevalence.</span>
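To illustrate the single-disease building block behind group testing (a simplification; the paper treats two correlated diseases jointly), the sketch below gives the closed-form prevalence MLE from pooled tests of size k under assumed sensitivity se and specificity sp. A pool tests positive with probability pi = se - (se + sp - 1)(1 - p)^k, which is inverted at the observed positive-pool fraction; all names here are illustrative.

```python
# Illustrative sketch only; the pool size, sensitivity and specificity values
# in the usage below are assumptions, not the paper's design.

def prevalence_from_pools(n_positive, n_pools, k, se=1.0, sp=1.0):
    """Closed-form MLE of prevalence p from pooled tests of group size k.

    Inverts pi = se - (se + sp - 1) * (1 - p)**k at the observed
    positive-pool fraction pi_hat.
    """
    pi_hat = n_positive / n_pools
    q_k = (se - pi_hat) / (se + sp - 1)   # estimate of (1 - p)**k
    q_k = min(max(q_k, 0.0), 1.0)         # clip to the valid range
    return 1 - q_k ** (1 / k)

# Toy usage: pools of 5 with imperfect tests (se = 0.95, sp = 0.98).
p_hat = prevalence_from_pools(645, 1000, 5, se=0.95, sp=0.98)
```

Group testing is most efficient when the disease is rare, which is exactly the regime the D-optimality results above characterize; extending the likelihood to two correlated diseases is where the paper's analysis begins.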
/statistica/J27N2/J27N218/J27N218.html
BAYESIAN NONPARAMETRIC INFERENCE FOR DISCOVERY PROBABILITIES: CREDIBLE INTERVALS AND LARGE SAMPLE ASYMPTOTICS Julyan Arbel, Stefano Favaro, Bernardo Nipoti and Yee Whye Teh 839-858<span style='font-size:12pt;'><center>Abstract</center> Given a sample of size n from a population of individuals belonging to different species with unknown proportions, a problem of practical interest consists in making inference on the probability D_n(l) that the (n + 1)-th draw coincides with a species with frequency l in the sample, for any l = 0, 1, ..., n. This paper contributes to the methodology of Bayesian nonparametric inference for D_n(l). Specifically, under the general framework of Gibbs-type priors, we show how to derive credible intervals for a Bayesian nonparametric estimator of D_n(l), and we investigate the large n asymptotic behaviour of such an estimator. Of particular interest are special cases of our results obtained under the specification of the two-parameter Poisson-Dirichlet prior and the normalized generalized Gamma prior. With respect to these prior specifications, the proposed results are illustrated through a simulation study and a benchmark Expressed Sequence Tags dataset. To the best of our knowledge, this provides the first comparative study between the two-parameter Poisson-Dirichlet prior and the normalized generalized Gamma prior in the context of Bayesian nonparametric inference for D_n(l).<p>Key words and phrases: Asymptotics, Bayesian nonparametrics, credible intervals, discovery probability, Gibbs-type priors, Good-Turing estimator, normalized generalized Gamma prior, smoothing technique, two-parameter Poisson-Dirichlet prior.</span>
/statistica/J27N2/J27N219/J27N219.html
ESTIMATION UNDER MODEL UNCERTAINTY Nicholas T. Longford 859-877<span style='font-size:12pt;'><center>Abstract</center> Model selection has had a virtual monopoly on dealing with model uncertainty ever since models were identified as important conduits for statistical inference. Model averaging alleviates some of its deficiencies, but does not offer a practical solution in all settings. We propose an alternative based on linear combinations of the candidate models' estimators. The general proposal is elaborated for ordinary regression and is illustrated with examples. Some estimators based on invalid models contribute to efficient estimation of certain quantities. <p>Key words and phrases: Basis estimator, composite estimation, model selection, ordinary regression, propensity matching.</span>
/statistica/J27N2/J27N22/J27N22.html
HIERARCHICAL SELECTION OF FIXED AND RANDOM EFFECTS IN GENERALIZED LINEAR MIXED MODELS Francis K. C. Hui, Samuel Müller and A. H. Welsh 501-518<span style='font-size:12pt;'><center>Abstract</center> In many applications of generalized linear mixed models (GLMMs), there is a hierarchical structure in the effects that needs to be taken into account when performing variable selection. A prime example of this is when fitting mixed models to longitudinal data, where it is usual for covariates to be included as only fixed effects or as composite (fixed and random) effects. In this article, we propose the first regularization method that can deal with large numbers of candidate GLMMs while preserving this hierarchical structure: CREPE (Composite Random Effects PEnalty) for joint selection in mixed models. CREPE induces sparsity in a hierarchical manner, as the fixed effect for a covariate is shrunk to zero only if the corresponding random effect is or has already been shrunk to zero. In the setting where the number of fixed effects grows at a slower rate than the number of clusters, we show that CREPE is selection consistent for both fixed and random effects, and attains the oracle property. Simulations show that CREPE outperforms some currently available penalized methods for mixed models. <p>Key words and phrases: Fixed effects, generalized linear mixed models, LASSO, penalized likelihood, random effects, variable selection.</span>
/statistica/J27N2/J27N220/J27N220.html
OPTIMAL ESTIMATION OF A QUADRATIC FUNCTIONAL UNDER THE GAUSSIAN TWO-SEQUENCE MODEL T. Tony Cai and Xin Lu Tan 879-906<span style='font-size:12pt;'><center>Abstract</center> This paper studies the problem of optimal estimation of a quadratic functional of two normal mean vectors, with a particular focus on the case where both mean vectors are sparse. We propose optimal estimators of the functional for different regimes and establish the minimax rates of convergence over a family of parameter spaces. The optimal rates exhibit interesting phase transitions in this family. We also include a simulation study to complement the theoretical results in the paper. <p>Key words and phrases: Gaussian sequence model, minimax estimation, quadratic functional, signal detection, sparse means.</span>
/statistica/J27N2/J27N221/J27N221.html
EXTREME VERSIONS OF WANG RISK MEASURES AND THEIR ESTIMATION FOR HEAVY-TAILED DISTRIBUTIONS Jonathan El Methni and Gilles Stupfler 907-930<span style='font-size:12pt;'><center>Abstract</center> In this paper, we build simple extreme analogues of Wang distortion risk measures and we show how this makes it possible to consider many standard measures of extreme risk, including the usual extreme Value-at-Risk or Tail-Value-at-Risk, as well as the recently introduced extreme Conditional Tail Moment, in a unified framework. We then introduce adapted estimators when the random variable of interest has a heavy-tailed distribution and we prove their asymptotic normality. The finite sample performance of our estimators is assessed in a simulation study and we showcase our techniques on two sets of data. <p>Key words and phrases: Asymptotic normality, conditional tail moment, distortion risk measure, extreme-value statistics, heavy-tailed distribution.</span>
/statistica/J27N2/J27N222/J27N222.html
ASYMPTOTIC NORMALITY OF NONPARAMETRIC M-ESTIMATORS WITH APPLICATIONS TO HYPOTHESIS TESTING FOR PANEL COUNT DATA<span style='font-size:12pt;'><center>Abstract</center> In semiparametric and nonparametric statistical inference, the asymptotic normality of estimators has been widely established when they are root-n-consistent. In many applications, nonparametric estimators are not able to achieve this rate. We have a result on the asymptotic normality of nonparametric M-estimators that can be used if the rate of convergence of an estimator is slower than root-n. We apply this to study the asymptotic distribution of sieve estimators of functionals of a mean function from a counting process, and develop nonparametric tests for the problem of treatment comparison with panel count data. The test statistics are constructed with spline likelihood estimators instead of nonparametric likelihood estimators. The new tests have a more general and simpler structure and are easy to implement. Simulation studies show that the proposed tests perform well even for small sample sizes. We find that a new test is always powerful for all the situations considered and is thus robust. For illustration, a data analysis example is provided. <p>Key words and phrases: Asymptotic normality, M-estimators, nonparametric maximum likelihood, nonparametric maximum pseudo-likelihood, nonparametric tests, spline.</span>
/statistica/J27N2/J27N23/J27N23.html
QUENCHED CENTRAL LIMIT THEOREMS FOR STATIONARY LINEAR PROCESSES Dalibor Volný and Michael Woodroofe 519-533<span style='font-size:12pt;'><center>Abstract</center> We establish a sufficient condition under which a central limit theorem for a stationary linear process is quenched. We also find a stationary linear process for which the Maxwell-Woodroofe condition is satisfied and the normalized partial sums converge to the standard normal law, yet the convergence is not quenched; the weak invariance principle does not hold.<p>Key words and phrases: Hannan condition, martingale differences, Maxwell-Woodroofe condition, quenched central limit theorem, stationary linear process, weak invariance principle.</span>
/statistica/J27N2/J27N24/J27N24.html
BAYESIAN NONPARAMETRIC INFERENCE ON THE STIEFEL MANIFOLD Lizhen Lin, Vinayak Rao and David Dunson 535-553<span style='font-size:12pt;'><center>Abstract</center> The Stiefel manifold is the space of all orthonormal matrices, with the hypersphere and the space of all orthogonal matrices constituting special cases. In modeling data lying on the Stiefel manifold, parametric distributions such as the matrix Langevin distribution are often used; however, model misspecification is a concern and it is desirable to have nonparametric alternatives. Current nonparametric methods are mainly Fréchet-mean based. We take a fully generative nonparametric approach, which relies on mixing parametric kernels such as the matrix Langevin. The proposed kernel mixtures can approximate a large class of distributions on the Stiefel manifold, and we develop theory showing posterior consistency. While there exists work developing general posterior consistency results, extending these results to this particular manifold requires substantial new theory. Posterior inference is illustrated on a dataset of near-Earth objects. <p>Key words and phrases: Bayesian nonparametric, kernel mixture, matrix Langevin, orthonormal matrices, posterior consistency, Stiefel manifold, von Mises-Fisher.</span>
/statistica/J27N2/J27N25/J27N25.html
SINGLE-INDEX MODEL FOR INHOMOGENEOUS SPATIAL POINT PROCESSES Yixin Fang and Ji Meng Loh 555-574<span style='font-size:12pt;'><center>Abstract</center> We introduce a single-index model for the intensity of an inhomogeneous spatial point process, relating the intensity function to an unknown function ρ of a linear combination of measurements of a p-dimensional spatial covariate process. Such a model extends and generalizes a commonly used model where ρ is known. We derive an estimating procedure for ρ and the coefficient parameters β and show consistency and asymptotic normality of estimates of β under some regularity assumptions. We present results of some simulation studies showing the effectiveness of the procedure. Finally, we apply the procedure to a dataset of fast food restaurant locations in New York City. <p>Key words and phrases: Asymptotic normality, consistency, fast food restaurant data, single-index model, spatial point processes.</span>
/statistica/J27N2/J27N26/J27N26.html
SEQUENTIAL CHANGE-POINT DETECTION IN TIME SERIES MODELS BASED ON PAIRWISE LIKELIHOOD Sze Him Leung, Wai Leong Ng and Chun Yip Yau 575-605<span style='font-size:12pt;'><center>Abstract</center> The paper proposes a sequential monitoring scheme for detecting changes in parameter values for general time series models using pairwise likelihood. Under this scheme, a change-point is declared when the cumulative sum of the first derivatives of the pairwise likelihood exceeds a certain boundary function. The scheme is shown to have asymptotically zero Type II error with a prescribed level of Type I error. With the use of pairwise likelihood, the scheme is applicable to many complicated time series models in a computationally efficient manner. For example, the scheme covers time series models involving latent processes, such as stochastic volatility models and Poisson regression models with log link function. <p>Key words and phrases: Composite likelihood, on-line detection, Poisson regression model, quickest detection, sequential monitoring, stochastic volatility.</span>
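As a toy illustration of the monitoring principle in the abstract above (a Gaussian mean analogue, not the paper's pairwise-likelihood scheme), the sketch below accumulates score contributions and declares a change-point once the cumulative sum crosses a boundary function. The boundary form b(k) = c * sqrt(m) * (1 + k/m) and the constant c are assumptions chosen for illustration, with m playing the role of the length of the historical training sample.

```python
# Illustrative sketch only; the paper's scheme replaces the Gaussian score
# x - mu0 with first derivatives of a pairwise likelihood.

def monitor(stream, mu0, m, c=3.0):
    """Return the 1-based index at which a change is declared, or None.

    Accumulates score contributions (here x - mu0, the derivative of the
    Gaussian log-likelihood in the mean) and compares the cumulative sum
    against a growing boundary that controls the Type I error via c.
    """
    cusum = 0.0
    for k, x in enumerate(stream, start=1):
        cusum += x - mu0
        boundary = c * m ** 0.5 * (1 + k / m)
        if abs(cusum) > boundary:
            return k
    return None
```

Under no change the cumulative score behaves like a random walk and stays below the boundary with high probability, while after a parameter change it drifts linearly and crosses quickly; this is the mechanism that yields asymptotically zero Type II error at a prescribed Type I error level.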
/statistica/J27N2/J27N27/J27N27.html
ON THE MINIMUM β-ABERRATION AND MINIMUM CONTAMINATION CRITERIA FOR DESIGNS WITH QUANTITATIVE FACTORS<span style='font-size:12pt;'><center>Abstract</center> For quantitative factors, the minimum β-aberration criterion is commonly used for examining geometric isomorphism and searching for optimal designs. In this paper, we investigate the connection between the minimum β-aberration criterion and the minimum contamination criterion. Results reveal that, in ranking designs by the two criteria, the optimal designs selected by them can be different. We provide statistical justifications showing that the minimum contamination criterion controls the expected total mean square error of the estimation, and demonstrate that it is more powerful than the minimum β-aberration criterion for identifying geometrically nonisomorphic designs.<p>Key words and phrases: Alias matrix, generalized minimum aberration, geometric isomorphism, indicator function, J-characteristics.</span>
/statistica/J27N2/J27N28/J27N28.html
SEMIPARAMETRIC ACCELERATED INTENSITY MODELS FOR CORRELATED RECURRENT AND TERMINAL EVENTS Sangbum Choi, Xuelin Huang, Hyunsu Ju and Jing Ning 625-643<span style='font-size:12pt;'><center>Abstract</center> In clinical and epidemiological studies, recurrent events can arise when a subject repeatedly experiences the event of interest. Often, a terminal event such as death may preclude further occurrence of recurrent events in an informative manner such that the terminal event is strongly correlated with the recurrent event process. In this article, we propose a semiparametric joint analysis of correlated recurrent and terminal events. Specifically, we consider an accelerated intensity model for the recurrent events and an accelerated failure time model for the terminal event. We assess the dependency between the two event processes through a commonly used log-normal or gamma shared frailty. To estimate regression parameters and unspecified baseline intensity functions, we develop an EM algorithm with kernel smoothing adapted for both intensity functions, and perform variance estimation via numerical differentiation of the profile likelihoods. We evaluated the finite sample performance of the proposed method via simulation studies for both gamma and log-normal frailty models, and applied our method to the analysis of tumor recurrences and patient survival times in a soft tissue sarcoma study. <p>Key words and phrases: Accelerated intensity regression, frailty model, informative censoring, kernel smoothing, nonparametric likelihood.</span>
/statistica/J27N2/J27N29/J27N29.html
TESTING FOR UNIFORM STOCHASTIC ORDERING VIA EMPIRICAL LIKELIHOOD UNDER RIGHT CENSORING Hammou El Barmi 645-664<span style='font-size:12pt;'><center>Abstract</center> Empirical likelihood based tests for the presence of uniform stochastic ordering (or hazard rate ordering) among two univariate distribution functions (DFs) are developed when the data are right censored, in the one- and two-sample cases. The proposed test statistics are formed by taking the supremum of some functional of localized empirical likelihood test statistics. The null asymptotic distributions of these test statistics are distribution-free and have simple representations in terms of a standard Brownian motion. Simulations show that the tests we propose outperform, in terms of power, the one-sided log-rank test at many distributions. The stochastic ordering case is shown to be a special case of our procedure. We illustrate our theoretical results with an example. <p>Key words and phrases: Empirical likelihood, stochastic ordering, uniform stochastic ordering.</span>