Abstract: In this article, we study convergence properties of the method of penalization and related estimates. A penalized estimate is defined as an optimizer of a scaled criterion with a penalty term that penalizes undesirable properties of the parameters. We develop exponential probability bounds for penalized likelihood ratios with a general penalty. Based on these inequalities, rates of convergence of the penalized estimates can be quantified. When convergence is measured by the Hellinger distance, the rate of convergence of the penalized maximum likelihood estimate depends only on the size of the parameter space and the penalization coefficient. We also explore the role of the penalty in the penalization process, especially its relationship with the convergence properties and its connection with Bayesian analysis. We illustrate the theory with several examples.
Key words and phrases: Convergence properties, exponential bound, penalization, posterior distribution.