Expectation-Maximization (EM) Algorithm

     In statistics, an expectation-maximization (EM) algorithm is a method for finding maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models, where the model depends on unobserved latent variables. EM is an iterative method which alternates between performing an expectation (E) step, which computes the expectation of the log-likelihood evaluated using the current estimate of the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found on the E step. These parameter estimates are then used to determine the distribution of the latent variables in the next E step.

 

  The EM algorithm provides a framework for many problems in which we want to estimate a set of parameters \boldsymbol\theta describing a probability distribution, given only the observable part of the data generated from that distribution.

  EM is an iterative algorithm built from two basic steps:

  E step (expectation step):

  Using the current parameter estimate, compute the expected log-likelihood, where the expectation is taken over the unobserved latent variables.

  M step (maximization step):

  Compute a better parameter estimate by maximizing, with respect to \boldsymbol\theta, the expected log-likelihood found in the E step. The new parameter estimate replaces the previous one, and the next iteration begins.
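To make the loop structure concrete, here is a minimal sketch in Python (my own illustration; run_em, e_step, and m_step are hypothetical names, and the parameters are assumed to be packed into a NumPy array):

import numpy as np

def run_em(X, theta_init, e_step, m_step, max_iter=100, tol=1e-6):
    # Generic EM loop sketch: alternate E and M steps until the
    # parameter estimate stops changing (or max_iter is reached).
    theta = np.asarray(theta_init, dtype=float)
    for t in range(max_iter):
        # E step: expected statistics of the latent variables Y,
        # computed under the current parameter estimate theta^(t)
        expectations = e_step(X, theta)
        # M step: parameters maximizing the expected log-likelihood
        theta_new = np.asarray(m_step(X, expectations), dtype=float)
        if np.max(np.abs(theta_new - theta)) < tol:
            return theta_new
        theta = theta_new
    return theta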

 

 

Observed data: an i.i.d. sample of the observed random variable X:

X = \{ x_1, x_2, \ldots, x_n \}

Missing data: the values of the unobserved latent (hidden) variable Y:

Y = \{ y_1, y_2, \ldots, y_n \}

Complete data: the data consisting of both the observed random variable X and the unobserved random variable Y, Z = (X, Y).

 

Likelihood function (written in several equivalent ways):

L(\boldsymbol\theta; \mathbf{X}) = p(\mathbf{X} \mid \boldsymbol\theta) = \sum_{Y} p(\mathbf{X}, Y \mid \boldsymbol\theta) = \sum_{Y} p(\mathbf{X} \mid Y, \boldsymbol\theta)\, p(Y \mid \boldsymbol\theta)

The log-likelihood function is:

\log L(\boldsymbol\theta; \mathbf{X}) = \log \sum_{Y} p(\mathbf{X}, Y \mid \boldsymbol\theta)

E step: use the current parameter estimate \boldsymbol\theta^{(t)} to compute the expectation of the log-likelihood over the latent variable Y:

  Q(\boldsymbol\theta \mid \boldsymbol\theta^{(t)}) = \operatorname{E}_{Y \mid \mathbf{X}, \boldsymbol\theta^{(t)}} \left[ \log L(\boldsymbol\theta; \mathbf{X}, Y) \right] = \sum_{Y} p(Y \mid \mathbf{X}, \boldsymbol\theta^{(t)}) \log p(\mathbf{X}, Y \mid \boldsymbol\theta)

Bayes' theorem is needed here:

p(Y \mid \mathbf{X}, \boldsymbol\theta^{(t)}) = \frac{p(\mathbf{X}, Y \mid \boldsymbol\theta^{(t)})}{p(\mathbf{X} \mid \boldsymbol\theta^{(t)})} = \frac{p(\mathbf{X} \mid Y, \boldsymbol\theta^{(t)})\, p(Y \mid \boldsymbol\theta^{(t)})}{\sum_{Y'} p(\mathbf{X} \mid Y', \boldsymbol\theta^{(t)})\, p(Y' \mid \boldsymbol\theta^{(t)})}

M step: maximize this expectation to obtain a better parameter estimate:

\boldsymbol\theta^{(t+1)} = \underset{\boldsymbol\theta}{\operatorname{arg\,max}} \ Q(\boldsymbol\theta \mid \boldsymbol\theta^{(t)})
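Plugging these formulas into a simple case makes the steps concrete. The sketch below (my own illustration, not from the original post) runs EM on a hypothetical two-component one-dimensional Gaussian mixture: the E step evaluates p(Y = k \mid x_i, \boldsymbol\theta^{(t)}) with Bayes' theorem exactly as above, and the M step has a closed form:

import numpy as np
from scipy.stats import norm

def em_gmm_1d(x, n_iter=50):
    # EM sketch for a two-component 1D Gaussian mixture.
    # Latent Y = component label of each point; theta = (pi, mu, sigma).
    # Crude initialization of theta^(0); a fixed number of iterations
    # is used instead of a convergence test to keep the sketch short.
    pi = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])
    sigma = np.array([x.std(), x.std()])
    for _ in range(n_iter):
        # E step: posterior p(Y = k | x_i, theta^(t)) via Bayes' theorem
        joint = pi * norm.pdf(x[:, None], mu, sigma)       # p(x_i, Y = k)
        gamma = joint / joint.sum(axis=1, keepdims=True)   # responsibilities
        # M step: parameters maximizing the expected log-likelihood
        Nk = gamma.sum(axis=0)
        pi = Nk / len(x)
        mu = (gamma * x[:, None]).sum(axis=0) / Nk
        sigma = np.sqrt((gamma * (x[:, None] - mu) ** 2).sum(axis=0) / Nk)
    return pi, mu, sigma

# usage on synthetic data (made-up numbers)
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 200)])
print(em_gmm_1d(x))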

 

The description given in Wikipedia is as follows:

Given a statistical model consisting of a set \mathbf{X} of observed data, a set of unobserved latent data or missing values \mathbf{Y}, and a vector of unknown parameters \boldsymbol\theta, along with a likelihood function L(\boldsymbol\theta; \mathbf{X}, \mathbf{Y}) = p(\mathbf{X}, \mathbf{Y} \mid \boldsymbol\theta), the maximum likelihood estimate (MLE) of the unknown parameters is determined by maximizing the marginal likelihood of the observed data:

       L(\boldsymbol\theta; \mathbf{X}) = p(\mathbf{X} \mid \boldsymbol\theta) = \sum_{\mathbf{Y}} p(\mathbf{X}, \mathbf{Y} \mid \boldsymbol\theta)

However, this quantity is often intractable.

The EM algorithm seeks to find the MLE of the marginal likelihood by iteratively applying the following two steps:

Expectation step (E-step): Calculate the expected value of the log-likelihood function, with respect to the conditional distribution of \mathbf{Y} given \mathbf{X} under the current estimate of the parameters \boldsymbol\theta^{(t)}:

       Q(\boldsymbol\theta \mid \boldsymbol\theta^{(t)}) = \operatorname{E}_{\mathbf{Y} \mid \mathbf{X}, \boldsymbol\theta^{(t)}} \left[ \log L(\boldsymbol\theta; \mathbf{X}, \mathbf{Y}) \right]

Maximization step (M-step): Find the parameters that maximize this quantity:
\boldsymbol\theta^{(t+1)} = \underset{\boldsymbol\theta}{\operatorname{arg\,max}} \ Q(\boldsymbol\theta \mid \boldsymbol\theta^{(t)})
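As a concrete instance (my own illustration, not from the original post): for a mixture model in which the latent \mathbf{Y} consists of a discrete component label y_i for each data point x_i, the Q function expands into posterior-weighted complete-data log-likelihood terms:

Q(\boldsymbol\theta \mid \boldsymbol\theta^{(t)}) = \sum_{i=1}^{n} \sum_{k} p(y_i = k \mid x_i, \boldsymbol\theta^{(t)}) \left[ \log \pi_k + \log p(x_i \mid y_i = k, \boldsymbol\theta) \right]

where \pi_k = p(y_i = k \mid \boldsymbol\theta) are the assumed mixing weights; the posteriors are what the E step computes via Bayes' theorem, and the maximization over \boldsymbol\theta is the M step.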

Note that in typical models to which EM is applied:

  1. The observed data points \mathbf{X} may be discrete (taking one of a fixed number of values, or taking values that must be integers) or continuous (taking a continuous range of real numbers, possibly infinite). There may in fact be a vector of observations associated with each data point.
  2. The missing values (aka latent variables) Y are discrete, drawn from a fixed number of values, and there is one latent variable per observed data point.
  3. The parameters are continuous, and are of two kinds: Parameters that are associated with all data points, and parameters associated with a particular value of a latent variable (i.e. associated with all data points whose corresponding latent variable has a particular value).

However, it is possible to apply EM to other sorts of models.

The motivation is as follows. If we know the value of the parameters \boldsymbol\theta, we can usually find the value of the latent variables Y by maximizing the log-likelihood over all possible values of Y, either simply by iterating over Y or through an algorithm such as the Viterbi algorithm for hidden Markov models. Conversely, if we know the value of the latent variables Y, we can find an estimate of the parameters \boldsymbol\theta fairly easily, typically by simply grouping the observed data points according to the value of the associated latent variable and averaging the values, or some function of the values, of the points in each group. This suggests an iterative algorithm, in the case where both \boldsymbol\theta and Y are unknown:

  1. First, initialize the parameters \boldsymbol\theta to some random values.
  2. Compute the best value for Y given these parameter values.
  3. Then, use the just-computed values of Y to compute a better estimate for the parameters \boldsymbol\theta. Parameters associated with a particular value of Y will use only those data points whose associated latent variable has that value.
  4. Finally, iterate until convergence.

The algorithm as just described will in fact work, and is commonly called hard EM. The K-means algorithm is an example of this class of algorithms.
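For reference, here is a minimal NumPy sketch of K-means seen through this lens (my own illustration): the assignment step is a hard E step that picks a single label per point, and the centroid update is the M step that averages only the points carrying that label (it assumes no cluster ever becomes empty):

import numpy as np

def kmeans_hard_em(X, k, n_iter=20, seed=0):
    # K-means as hard EM on data X of shape (n_points, n_features).
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # hard E step: pick the single best latent label per point
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # M step: average only the points assigned to each label
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return centers, labels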

However, we can do somewhat better with soft assignments: rather than making a hard choice for Y given the current parameter values and averaging only over the data points assigned to a particular value of Y, we determine the probability of each possible value of Y for each data point, and then use those probabilities to compute a weighted average over the entire set of data points. The resulting algorithm is commonly called soft EM, and is the type of algorithm normally associated with EM. The counts used to compute these weighted averages are called soft counts (as opposed to the hard counts used in a hard-EM-type algorithm such as K-means). The probabilities computed for Y are posterior probabilities and are what is computed in the E-step. The soft counts used to compute new parameter values are what is computed in the M-step.
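A small sketch of the difference between the two kinds of counts (my own illustration; gamma is assumed to hold the E-step posteriors p(Y_i = k \mid x_i, \boldsymbol\theta^{(t)}) with shape (n, k), and labels a hard assignment per point):

import numpy as np

def soft_mean_update(X, gamma):
    # Soft EM: soft counts N_k = sum_i gamma_ik; every point contributes
    # to every component mean, weighted by its posterior probability.
    soft_counts = gamma.sum(axis=0)
    return (gamma.T @ X) / soft_counts[:, None]

def hard_mean_update(X, labels, k):
    # Hard EM: hard counts; each point contributes to exactly one mean.
    return np.array([X[labels == j].mean(axis=0) for j in range(k)])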

Summary:

EM is frequently used for data clustering in machine learning and computer vision.

EM converges to a local optimum, but it is not guaranteed to find the global optimum.

EM is fairly sensitive to the initial values, so a good, fast initialization procedure is usually needed.
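Because of this sensitivity, a common practice is to run EM from several random initializations and keep the best result. As one example (a usage sketch with made-up data), scikit-learn's GaussianMixture exposes an n_init parameter that does exactly this:

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-2, 1, (300, 1)), rng.normal(3, 0.5, (200, 1))])

# n_init=10: run EM from 10 random initializations and keep the fit
# with the highest lower bound on the log-likelihood
gm = GaussianMixture(n_components=2, n_init=10, random_state=0).fit(X)
print(gm.weights_, gm.means_.ravel(), gm.lower_bound_)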

 

This is from my Machine Learning course; I'll stop the summary here. The next task is a write-up of GM_EM: multivariate Gaussian density estimation.

posted on 2010-10-20 14:44 by Sosi, filed under: Courses

