Lecture 10: Expectation-Maximization Algorithm (LaTeX prepared by Shaobo Fang)

Authors

  • Stanley H. Chan
  • Shaobo Fang
Abstract

Consider a set of data points with their classes labeled, and assume that each class is a Gaussian, as shown in Figure 1(a). Given this set of data points, finding the means of the two Gaussians can be done easily by estimating the sample mean of each class, as the class labels are known. Now imagine that the classes are not labeled, as shown in Figure 1(b). How should we determine the mean of each class then? To solve this problem, we could use an iterative approach: first make a guess of the class label for each data point, then compute the means and update the guess of the class labels again. We repeat until the means converge. The problem of estimating parameters in the absence of labels is known as unsupervised learning. There are many unsupervised learning methods; we will focus on the Expectation-Maximization (EM) algorithm.
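The iterative guess-and-update procedure described above can be sketched in code. The following is a minimal illustration, not the lecture's own implementation: a two-component, one-dimensional Gaussian mixture with unit variances, fitted by alternating soft class assignments (E-step) and mean/weight updates (M-step). The synthetic data and all variable names are assumptions for the sake of the example.

```python
import math
import random

# Hypothetical unlabeled data: two Gaussian clusters with means near 0 and 5.
random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(200)] + \
       [random.gauss(5.0, 1.0) for _ in range(200)]

def em_two_gaussians(x, iters=50):
    """EM for a two-component 1-D Gaussian mixture (variances fixed at 1)."""
    mu = [min(x), max(x)]   # crude initial guess of the two means
    pi = [0.5, 0.5]         # mixing weights
    for _ in range(iters):
        # E-step: "responsibility" of each component for each point,
        # i.e. a soft guess of the missing class label.
        resp = []
        for xi in x:
            w = [pi[k] * math.exp(-0.5 * (xi - mu[k]) ** 2) for k in range(2)]
            s = sum(w)
            resp.append([wk / s for wk in w])
        # M-step: update the means and weights from the soft assignments.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            pi[k] = nk / len(x)
    return mu, pi

mu, pi = em_two_gaussians(data)
print(sorted(mu))  # estimated means, close to the true values 0 and 5
```

With labels available, this reduces to two sample means; here the responsibilities play the role of the unknown labels, which is exactly the alternation the abstract describes.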


Similar sources

4.1 Overview

In this lecture, we will address problems 3 and 4. First, continuing from the previous lecture, we will view Baum-Welch re-estimation as an instance of the Expectation-Maximization (EM) algorithm and prove why the EM algorithm maximizes the data likelihood. Then, we will proceed to discuss discriminative training under the maximum mutual information estimation (MMIE) framework. Specifically, we will...
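The proof alluded to here typically rests on a lower-bound argument. A minimal sketch (using generic notation not taken from the excerpt: observed data $x$, hidden variables $z$, parameters $\theta$, and any distribution $q$ over $z$):

```latex
\log p(x \mid \theta)
  = \log \sum_{z} q(z)\,\frac{p(x, z \mid \theta)}{q(z)}
  \;\ge\; \sum_{z} q(z) \log \frac{p(x, z \mid \theta)}{q(z)}
  \quad \text{(Jensen's inequality)}.
```

The E-step chooses $q(z) = p(z \mid x, \theta^{(t)})$, which makes the bound tight at the current parameters; the M-step maximizes the bound over $\theta$. Hence each EM iteration can only increase (never decrease) the log-likelihood.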


Quantitative SPECT and planar 32P bremsstrahlung imaging for dosimetry purpose –An experimental phantom study

Background: In this study, Quantitative 32P bremsstrahlung planar and SPECT imaging and consequent dose assessment were carried out as a comprehensive phantom study to define an appropriate method for accurate Dosimetry in clinical practice. Materials and Methods: CT, planar and SPECT bremsstrahlung images of Jaszczak phantom containing a known activity of 32P were acquired. In addition, Phanto...


Lecture 6 The EM Algorithm , Mixture Models , and Motif

In a previous class, we discussed an algorithm for learning a probabilistic matrix model which describes a fixed-length motif in a set of sequences S1, . . ., Sn over an alphabet A. This algorithm is one of a class of methods collectively known as expectation maximization, or EM. We will describe the general EM algorithm, then derive the motif-finding algorithm by applying EM to learn a specific pr...



Publication date: 2015