Abstract: The EM algorithm is one of the most commonly used methods of maximum likelihood estimation. In many practical applications, it converges at a frustratingly slow linear rate. The current paper considers an acceleration of the EM algorithm based on classical quasi-Newton optimization techniques. This acceleration seeks to steer the EM algorithm gradually toward the Newton-Raphson algorithm, which has a quadratic rate of convergence. The fundamental difference between the proposed algorithm and a naive quasi-Newton algorithm is that its early stages resemble the EM algorithm rather than steepest ascent. Numerical examples involving the Dirichlet distribution, a mixture of Poisson distributions, and a repeated measures model illustrate the potential of the algorithm.
Key words and phrases: Dirichlet distribution, maximum likelihood, repeated measures model, secant approximation.
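To make the general idea concrete, the sketch below applies a Broyden-type secant acceleration to the EM fixed-point map for a two-component Poisson mixture, one of the example families mentioned in the abstract. This is not the paper's exact algorithm: it is a generic quasi-Newton iteration on the residual g(theta) = M(theta) - theta, where M is the EM map, and the simulated data, starting values, and safeguards are all hypothetical choices made for illustration. It does, however, exhibit the abstract's key property: initializing the secant matrix at -I makes the first step a plain EM step, and subsequent secant updates steer the iteration toward a Newton-like step.

```python
import numpy as np
from math import lgamma

# Hypothetical data: a two-component Poisson mixture (true rates 2 and 9).
rng = np.random.default_rng(0)
x = np.concatenate([rng.poisson(2.0, 300), rng.poisson(9.0, 200)]).astype(float)
logfact = np.array([lgamma(v + 1.0) for v in x])

def em_map(theta):
    """One EM update M(theta) for theta = (pi, lam1, lam2)."""
    pi, l1, l2 = theta
    w1 = pi * np.exp(x * np.log(l1) - l1 - logfact)
    w2 = (1.0 - pi) * np.exp(x * np.log(l2) - l2 - logfact)
    r = w1 / (w1 + w2)                      # E-step: posterior responsibilities
    return np.array([r.mean(),              # M-step: closed-form updates
                     (r @ x) / r.sum(),
                     ((1.0 - r) @ x) / (1.0 - r).sum()])

# Quasi-Newton (Broyden secant) iteration on g(theta) = M(theta) - theta.
# With A = -I the first step is exactly a plain EM step; secant updates
# then refine A so later steps approximate Newton's method on g.
theta = np.array([0.5, 1.0, 6.0])           # hypothetical starting values
A = -np.eye(3)                              # A approximates the Jacobian of g
g = em_map(theta) - theta
for _ in range(100):
    step = -np.linalg.solve(A, g)
    theta_new = theta + step
    # Safeguard: fall back to a plain EM step if the quasi-Newton step
    # leaves the parameter space.
    if not (0.0 < theta_new[0] < 1.0 and theta_new[1] > 0 and theta_new[2] > 0):
        theta_new, step = theta + g, g.copy()
    g_new = em_map(theta_new) - theta_new
    if step @ step < 1e-30 or np.linalg.norm(g_new) < 1e-10:
        theta, g = theta_new, g_new
        break
    # Broyden secant update of the Jacobian approximation.
    A += np.outer(g_new - g - A @ step, step) / (step @ step)
    theta, g = theta_new, g_new

print(theta)  # (mixing proportion, rate 1, rate 2)
```

Initializing A at -I rather than at a steepest-ascent scaling is what makes the early iterations behave like EM, mirroring the distinction the abstract draws against a naive quasi-Newton scheme.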