Abstract: Robust estimation of the probability vector is an important problem for the finite k-cell multinomial model. When the probability vector is unrestricted, its estimate equals the vector of observed proportions for all minimum disparity estimators (Lindsay (1994)). But when the probabilities are functions of a parameter θ of dimension smaller than k, the estimates may differ significantly across disparities. In particular, some procedures, such as the minimum Hellinger distance method, may be substantially superior to the maximum likelihood estimator (MLE) or the minimum (Pearson's) chi-square estimator under systematic deviations from the model. All minimum disparity estimators have optimal asymptotic efficiency properties. However, in many subclasses of disparities, such as the Cressie-Read family, the more robust members generally suffer a significant loss in small sample efficiency. In this paper we consider a correction which can lead to appreciable improvements in the small sample properties of these procedures, generally keeping their robustness properties intact. Exact results are presented for several multinomial models, and a number of data examples are also considered.
Key words and phrases: Asymptotic efficiency, blended weight Hellinger disparity, Cressie-Read disparity, empty cells, residual adjustment function.
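To make the setting concrete, the following sketch illustrates minimum Hellinger distance estimation for one parametric multinomial model. The Hardy-Weinberg model with cell probabilities (θ², 2θ(1−θ), (1−θ)²) is our illustrative choice, not necessarily one of the models treated in the paper; the cell counts, function names, and the simple grid-search minimizer are all hypothetical.

```python
import math

def hellinger_disparity(d, p):
    # Twice the squared Hellinger distance between two probability vectors.
    return 2.0 * sum((math.sqrt(di) - math.sqrt(pi)) ** 2
                     for di, pi in zip(d, p))

def hw_probs(theta):
    # Hardy-Weinberg cell probabilities: (theta^2, 2*theta*(1-theta), (1-theta)^2).
    return (theta * theta, 2.0 * theta * (1.0 - theta), (1.0 - theta) ** 2)

def min_hellinger_estimate(counts, grid=10000):
    # Naive grid search for the theta minimizing the Hellinger disparity
    # between the observed proportions and the model probabilities.
    n = sum(counts)
    d = [c / n for c in counts]
    best_theta, best_val = None, float("inf")
    for i in range(1, grid):
        theta = i / grid
        val = hellinger_disparity(d, hw_probs(theta))
        if val < best_val:
            best_theta, best_val = theta, val
    return best_theta

# Hypothetical cell counts, n = 100.
counts = (30, 50, 20)
# Closed-form MLE for the Hardy-Weinberg parameter: (2*n1 + n2) / (2*n).
theta_mle = (2 * counts[0] + counts[1]) / (2 * sum(counts))
theta_hd = min_hellinger_estimate(counts)
```

Near the model (as with the counts above), the minimum Hellinger distance estimate essentially coincides with the MLE, consistent with the shared first-order efficiency of all minimum disparity estimators; the procedures separate under systematic deviations from the model, which is the situation the paper addresses.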