The Expectation-Maximization (EM) algorithm is a method for finding maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models that depend on unobserved latent variables. EM alternates between two steps: an expectation (E) step, which computes the expected log-likelihood using the current parameter estimates, and a maximization (M) step, which finds the parameters that maximize the expected log-likelihood computed in the E step. The new estimates are then fed back into the next E step, and the process repeats until convergence. EM is an important tool in statistical mathematics.
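As a concrete illustration, here is a minimal sketch of EM for a two-component one-dimensional Gaussian mixture; the function name, initialization, and synthetic data are illustrative assumptions, not part of the original text:

```python
import numpy as np

def em_gaussian_mixture(x, n_iter=50):
    """Minimal EM sketch for a two-component 1-D Gaussian mixture."""
    # Crude initial guesses for the mixing weight, means, and variances.
    pi, mu, var = 0.5, np.array([x.min(), x.max()]), np.array([x.var(), x.var()])
    for _ in range(n_iter):
        # E-step: responsibilities = posterior probability that each point
        # came from component 1, given the current parameter estimates.
        p0 = (1 - pi) * np.exp(-(x - mu[0])**2 / (2 * var[0])) / np.sqrt(2 * np.pi * var[0])
        p1 = pi * np.exp(-(x - mu[1])**2 / (2 * var[1])) / np.sqrt(2 * np.pi * var[1])
        r = p1 / (p0 + p1)
        # M-step: re-estimate parameters by maximizing the expected
        # complete-data log-likelihood (responsibility-weighted averages).
        pi = r.mean()
        mu = np.array([np.average(x, weights=1 - r), np.average(x, weights=r)])
        var = np.array([np.average((x - mu[0])**2, weights=1 - r),
                        np.average((x - mu[1])**2, weights=r)])
    return pi, mu, var

# Synthetic data from two Gaussians; EM should recover means near 0 and 5.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(5, 1, 200)])
print(em_gaussian_mixture(x))
```

Each such iteration is guaranteed not to decrease the observed-data log-likelihood, which is the central property that makes EM converge.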

Expectation-maximization was first explained in a classic 1977 paper by Arthur Dempster, Nan Laird and Donald Rubin (the method is often attributed to Dempster-Laird-Rubin), published in the Journal of the Royal Statistical Society. A detailed treatment of the EM method for exponential families had been published earlier by Rolf Sundberg. Expectation-maximization is frequently used for data clustering in computer science and machine learning, and it is also applied in natural language processing. In psychometrics, EM is essential for estimating the item parameters and latent abilities of item response theory models. Likewise, EM is used in medical image reconstruction, including single-photon emission computed tomography (SPECT) and positron emission tomography (PET).

A number of methods have been proposed to accelerate EM (e.g., conjugate gradient and modified Newton-Raphson techniques). In addition, EM has two major variants: the Expectation Conditional Maximization (ECM) algorithm and the Generalized Expectation Maximization (GEM) algorithm; a sketch of the GEM idea follows below.
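To make the GEM idea concrete: a GEM iteration only has to increase the expected log-likelihood in the M step, not maximize it exactly. Here is a minimal sketch of such a partial M step for the Gaussian mixture above, replacing the exact mean update with a single gradient-ascent step; the function name, learning rate, and step-size scaling are illustrative assumptions:

```python
import numpy as np

def gem_partial_m_step(x, r, mu, var, lr=0.1):
    """GEM-style partial M step (illustrative sketch): take one
    gradient-ascent step on the expected complete-data log-likelihood
    with respect to the component means, rather than solving for the
    maximizer exactly. Any step that increases the expectation suffices."""
    # Gradient of the expected log-likelihood w.r.t. each component mean:
    # sum over points of responsibility * (x - mu) / variance.
    grad = np.array([np.sum((1 - r) * (x - mu[0])) / var[0],
                     np.sum(r * (x - mu[1])) / var[1]])
    return mu + lr * grad / len(x)
```

In ECM, by contrast, each M step is replaced by a sequence of conditional maximization steps, each of which maximizes over one block of parameters while holding the others fixed.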