Learning from uniformly ergodic Markov chains

Authors

  • Bin Zou
  • Hai Zhang
  • Zongben Xu
Abstract

Evaluation of the generalization performance of learning algorithms has been the main thread of theoretical research in machine learning. The previous bounds describing the generalization performance of the empirical risk minimization (ERM) algorithm are usually established based on independent and identically distributed (i.i.d.) samples. In this paper we go far beyond this classical framework by establishing generalization bounds for the ERM algorithm with uniformly ergodic Markov chain (u.e.M.c.) samples. We prove bounds on the rate of uniform convergence and relative uniform convergence of the ERM algorithm with u.e.M.c. samples, and show that the ERM algorithm with u.e.M.c. samples is consistent. The established theory underlies the application of ERM-type learning algorithms. © 2009 Elsevier Inc. All rights reserved.
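
For orientation, the setting can be sketched as follows (the notation below is a standard formulation assumed for illustration, not quoted from the paper): given a sample z_1, ..., z_m drawn from a u.e.M.c. with stationary measure \pi, ERM minimizes the empirical risk over a hypothesis space \mathcal{H}, and a uniform convergence bound controls the gap between expected and empirical risk uniformly over \mathcal{H}:

    \mathcal{E}(f) = \int \ell(f, z)\, d\pi(z), \qquad
    \mathcal{E}_m(f) = \frac{1}{m} \sum_{i=1}^{m} \ell(f, z_i), \qquad
    f_m \in \arg\min_{f \in \mathcal{H}} \mathcal{E}_m(f),

    \Pr\Big\{ \sup_{f \in \mathcal{H}} \big| \mathcal{E}(f) - \mathcal{E}_m(f) \big| > \varepsilon \Big\}
        \le C(\mathcal{H}, \varepsilon)\, \exp\!\big( -c\, m\, \varepsilon^{2} \big),

where the constants C and c depend on the capacity of \mathcal{H} and on the mixing parameters of the chain (their precise form is what the paper establishes). Consistency then follows from the standard inequality \mathcal{E}(f_m) - \inf_{f \in \mathcal{H}} \mathcal{E}(f) \le 2 \sup_{f \in \mathcal{H}} |\mathcal{E}(f) - \mathcal{E}_m(f)|.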

Similar resources

Hoeffding's Inequality for Uniformly Ergodic Markov Chains

We provide a generalization of Hoeffding's inequality to partial sums that are derived from a uniformly ergodic Markov chain. Our exponential inequality on the deviation of these sums from their expectation is particularly useful in situations where we require uniform control on the constants appearing in the bound.
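
Schematically, such an inequality has the following shape (the constant below is a placeholder; the actual constants depend on the chain's uniform ergodicity parameters and not on the initial state): for measurable f with 0 \le f \le 1 and partial sums S_n = \sum_{i=1}^{n} f(X_i),

    \Pr\big\{ \big| S_n - \mathbb{E}[S_n] \big| \ge n\varepsilon \big\} \le 2\, \exp\!\big( -c\, n\, \varepsilon^{2} \big)

for every \varepsilon > 0 and all n beyond a threshold depending on \varepsilon and the ergodicity constants.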

A regeneration proof of the central limit theorem for uniformly ergodic Markov chains

Let (X_n) be a Markov chain with unique stationary distribution π, and let h be a measurable function with π(h) = ∫ h(x) π(dx). Ibragimov and Linnik (1971) proved that if (X_n) is geometrically ergodic, then a central limit theorem (CLT) holds for h whenever π(|h|^{2+δ}) < ∞ for some δ > 0. Cogburn (1972) proved that if a Markov chain is uniformly ergodic, with π(h^2) < ∞, then a CLT holds for h. The first result was re-proved in Roberts and Rosenthal (2004) using a regeneration approach, thus removing many of the technicalities...
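
For reference, the CLT in question has the standard form (stated here for orientation, with the asymptotic variance written as the usual autocovariance series):

    \sqrt{n} \left( \frac{1}{n} \sum_{i=1}^{n} h(X_i) - \pi(h) \right) \xrightarrow{d} \mathcal{N}\big(0, \sigma_h^{2}\big), \qquad
    \sigma_h^{2} = \operatorname{Var}_{\pi}\big(h(X_0)\big) + 2 \sum_{k=1}^{\infty} \operatorname{Cov}_{\pi}\big(h(X_0), h(X_k)\big),

where the variance and covariances are computed with the chain started from stationarity, X_0 \sim \pi.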

Mixing Times for Uniformly Ergodic Markov Chains

Consider the class of discrete time, general state space Markov chains which satisfy a "uniform ergodicity under sampling" condition. There are many ways to quantify the notion of "mixing time", i.e., time to approach stationarity from a worst initial state. We prove results asserting equivalence (up to universal constants) of different quantifications of mixing time. This work combines three...
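
One standard quantification, given here for orientation (the paper compares several such definitions and shows they agree up to universal constants), is the total variation mixing time

    \tau_{\mathrm{mix}} = \min\Big\{ n \ge 0 : \sup_{x} \big\| P^{n}(x, \cdot) - \pi \big\|_{\mathrm{TV}} \le \tfrac{1}{4} \Big\},

where \pi is the stationary distribution; the threshold 1/4 is conventional, and any fixed constant below 1/2 gives an equivalent quantity up to universal factors.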

Ergodic BSDEs Driven by Markov Chains

We consider ergodic backward stochastic differential equations, in a setting where noise is generated by a countable state uniformly ergodic Markov chain. We show that for Lipschitz drivers such that a comparison theorem holds, these equations admit unique solutions. To obtain this result, we show by coupling and splitting techniques that uniform ergodicity estimates of Markov chains are robust...
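
As a schematic illustration only (the precise formulation in the paper may differ), an ergodic BSDE driven by a chain X asks for a triple (Y, Z, \lambda), with \lambda an unknown constant, such that for all 0 \le t \le T < \infty

    Y_t = Y_T + \int_t^T \big( f(X_s, Z_s) - \lambda \big)\, ds - \int_t^T Z_s\, dM_s,

where M denotes the martingale associated with the chain and f is the (Lipschitz) driver; the constant \lambda plays the role of a long-run average value.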

Estimation of the Entropy Rate of Ergodic Markov Chains

In this paper an approximation of the entropy rate of an ergodic Markov chain is computed via sample path simulation. Although an explicit formula for the entropy rate exists, the exact computation is laborious to carry out. It is demonstrated that the entropy rate estimated from a sample path not only converges to the correct entropy rate but also does so exponentially...
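
A minimal sketch of such a sample-path estimate is given below (Python; the transition matrix, the estimator -(1/n) \sum_t \log P(X_t, X_{t+1}), and all names are illustrative assumptions rather than the paper's own construction):

    import numpy as np

    def entropy_rate_exact(P):
        # Exact entropy rate H = -sum_i pi_i * sum_j P_ij * log(P_ij) of an ergodic chain,
        # where pi is the stationary distribution (eigenvector of P^T for eigenvalue 1).
        w, v = np.linalg.eig(P.T)
        pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
        pi = pi / pi.sum()
        logP = np.log(np.where(P > 0, P, 1.0))  # log(1) = 0 handles zero entries safely
        return -float(np.sum(pi[:, None] * P * logP))

    def entropy_rate_sample_path(P, n, seed=0):
        # Simulate X_0, ..., X_n and return -(1/n) * sum_t log P[X_t, X_{t+1}],
        # which converges to the entropy rate as n grows (ergodic theorem / AEP).
        rng = np.random.default_rng(seed)
        k = P.shape[0]
        x = rng.integers(k)
        total = 0.0
        for _ in range(n):
            y = rng.choice(k, p=P[x])
            total += np.log(P[x, y])
            x = y
        return -total / n

    P = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
    print("exact:      ", entropy_rate_exact(P))
    print("sample path:", entropy_rate_sample_path(P, 100_000))

Increasing n drives the sample-path estimate toward the exact value; the paper's contribution concerns how fast that convergence occurs.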

Journal title:
  • J. Complexity

Volume 25, Issue

Pages -

Publication date: 2009