A Proximal Point Algorithm for Minimum Divergence Estimators with Application to Mixture Models
Authors
Abstract
Estimators derived from a divergence criterion, such as φ-divergences, are generally more robust than maximum likelihood estimators. We are interested in particular in the so-called minimum dual φ-divergence estimator (MDφDE), an estimator built using a dual representation of φ-divergences. We present in this paper an iterative proximal point algorithm that permits the calculation of such an estimator. The algorithm contains by construction the well-known Expectation Maximization (EM) algorithm. Our work is based on Tseng's paper on the likelihood function. We provide some convergence properties by adapting Tseng's ideas, and we improve his results by relaxing the identifiability condition on the proximal term, a condition which is not verified for most mixture models and is hard to verify for non-mixture ones. Convergence of the EM algorithm in a two-component Gaussian mixture is discussed in the spirit of our approach. Several experimental results on mixture models are provided to confirm the validity of the approach.
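To fix ideas, the proximal point scheme can be sketched as follows; the notation ($\widehat{D}_\varphi$ for the estimated dual $\varphi$-divergence criterion, $\Delta$ for the proximal term, $\lambda_k$ for the relaxation weights) is ours and is meant only as an illustration of the kind of iteration studied in the paper:
$$\theta^{(k+1)} \in \arg\min_{\theta \in \Theta} \Big\{ \widehat{D}_\varphi(\theta) + \lambda_k\, \Delta\big(\theta, \theta^{(k)}\big) \Big\}, \qquad \Delta(\theta, \bar\theta) \ge 0, \quad \Delta(\bar\theta, \bar\theta) = 0.$$
When the divergence criterion is replaced by the negative log-likelihood and $\Delta$ is a Kullback-Leibler divergence between conditional distributions of the latent labels, this recursion reduces to the classical EM iteration, which is the sense in which the algorithm contains EM.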
Similar articles
A Hybrid Proximal Point Algorithm for Resolvent operator in Banach Spaces
Equilibrium problems have many applications in optimization theory and convex analysis, which is why various methods have been proposed for solving them in different spaces, such as Hilbert and Banach spaces. The purpose of this paper is to provide a method for obtaining a solution to the equilibrium problem in Banach spaces. Specifically, we consider a hybrid proximal point algorithm...
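For context, the classical proximal point iteration that such hybrid schemes refine can be written, in a Hilbert space and for a maximal monotone operator $A$, as the resolvent recursion
$$x_{n+1} = J^{A}_{\lambda_n} x_n := (I + \lambda_n A)^{-1} x_n, \qquad \lambda_n > 0;$$
this is only the standard textbook form: the Banach-space setting uses a resolvent defined through a duality mapping, and the hybrid (projection) step of the paper is not shown here.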
Learning mixtures by simplifying kernel density estimators
Gaussian mixture models are a widespread tool for modeling varied and complex probability density functions. They can be estimated by various means, often using Expectation-Maximization or Kernel Density Estimation. In addition to these well-known algorithms, new and promising stochastic modeling methods include Dirichlet Process mixtures and k-Maximum Likelihood Estimators. Most of the method...
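As a concrete illustration of the Expectation-Maximization route mentioned above, the following is a minimal Python sketch of EM for a two-component, one-dimensional Gaussian mixture (the function name and the quantile-based initialisation are our own choices; this is not the k-MLE or Dirichlet-process machinery discussed in the paper):

```python
import numpy as np
from scipy.stats import norm

def em_gmm_1d(x, n_iter=100, tol=1e-8):
    """EM for a two-component 1-D Gaussian mixture (illustrative sketch)."""
    w = 0.5                                   # mixing weight of component 1
    mu = np.quantile(x, [0.25, 0.75])         # crude initialisation
    sigma = np.array([x.std(), x.std()])
    ll_old = -np.inf
    for _ in range(n_iter):
        # E-step: posterior responsibility of component 1 for each point
        p1 = w * norm.pdf(x, mu[0], sigma[0])
        p2 = (1 - w) * norm.pdf(x, mu[1], sigma[1])
        tau = p1 / (p1 + p2)
        # M-step: weighted maximum-likelihood updates
        w = tau.mean()
        mu = np.array([np.average(x, weights=tau),
                       np.average(x, weights=1 - tau)])
        sigma = np.sqrt([np.average((x - mu[0]) ** 2, weights=tau),
                         np.average((x - mu[1]) ** 2, weights=1 - tau)])
        ll = np.log(p1 + p2).sum()            # log-likelihood at previous parameters
        if ll - ll_old < tol:
            break
        ll_old = ll
    return w, mu, sigma
```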
Drift Change Point Estimation in the Rate and Dependence Parameters of Autocorrelated Poisson Count Processes Using MLE Approach: An Application to IP Counts Data
Change point estimation in the area of statistical process control has received considerable attention in recent decades because it helps process engineers identify and remove assignable causes as quickly as possible. On the other hand, improvements in measurement systems and data storage lead to observations taken very close to each other in time and, as a result, to increasing autocorrelation...
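As a hedged sketch of the MLE idea only (ignoring the autocorrelation and dependence-parameter aspects that the paper actually addresses), a single change point in the rate of independent Poisson counts can be estimated by profiling the likelihood over candidate change times:

```python
import numpy as np

def poisson_changepoint_mle(counts):
    """MLE of a single change point in the rate of independent Poisson counts.

    Illustrative only: autocorrelation between successive counts is ignored,
    and only a shift in the rate parameter is estimated.
    """
    counts = np.asarray(counts, dtype=float)
    n = len(counts)
    best_tau, best_ll = None, -np.inf
    for tau in range(1, n):             # candidate change points
        lam1 = counts[:tau].mean()      # MLE of the rate before the change
        lam2 = counts[tau:].mean()      # MLE of the rate after the change
        # Poisson log-likelihood, dropping terms that do not depend on the parameters
        ll = (counts[:tau].sum() * np.log(lam1 + 1e-12) - tau * lam1
              + counts[tau:].sum() * np.log(lam2 + 1e-12) - (n - tau) * lam2)
        if ll > best_ll:
            best_tau, best_ll = tau, ll
    return best_tau
```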
Estimation of Rényi Information Divergence via Pruned Minimal Spanning Trees
In this paper we develop robust estimators of the Rényi information divergence (I-divergence) given a reference distribution and a random sample from an unknown distribution. Estimation is performed by constructing a minimal spanning tree (MST) passing through the random sample points and applying a change of measure which flattens the reference distribution. In a mixture model where the reference...
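A minimal sketch of the MST ingredient is given below: it estimates the Rényi entropy of order α = (d − γ)/d from the γ-weighted length of a Euclidean MST, in the spirit of such graph-based estimators; the normalising constant is left as a placeholder, and the change of measure that flattens the reference distribution (which turns this into a divergence estimator) is not reproduced here:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_renyi_entropy(X, gamma=1.0):
    """Estimate the Renyi entropy of order alpha = (d - gamma)/d from an MST.

    Sketch only: the normalising constant beta (which depends on d and gamma)
    is set to 1 here, and 0 < gamma < d is assumed.
    """
    n, d = X.shape
    alpha = (d - gamma) / d
    weights = squareform(pdist(X)) ** gamma        # gamma-weighted edge lengths
    length = minimum_spanning_tree(weights).sum()  # total weighted MST length
    beta = 1.0                                     # placeholder normalising constant
    return (np.log(length / n ** alpha) - np.log(beta)) / (1 - alpha)
```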
Penalized Bregman Divergence Estimation via Coordinate Descent
Variable selection via penalized estimation is appealing for dimension reduction. For penalized linear regression, Efron et al. (2004) introduced the LARS algorithm. Recently, the coordinate descent (CD) algorithm was developed by Friedman et al. (2007) for penalized linear regression and penalized logistic regression and was shown to be computationally superior. This paper explores...
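To make the coordinate descent idea concrete, here is a minimal sketch of cyclic coordinate descent with soft-thresholding for the lasso; it is a generic illustration of the CD update, not the Bregman-divergence formulation of the paper nor the glmnet implementation of Friedman et al.:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for (1/(2n))||y - X b||^2 + lam * ||b||_1.

    Illustrative sketch; assumes no all-zero columns in X.
    """
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with coordinate j removed
            r_j = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r_j / n
            # soft-thresholding update for coordinate j
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta
```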
Journal: Entropy
Volume: 18, Issue: -
Pages: -
Publication date: 2016