Penalized empirical risk minimization over Besov spaces

Authors

Abstract


Similar articles

Suboptimality of Penalized Empirical Risk Minimization in Classification

Let F be a set of M classification procedures with values in [−1, 1]. Given a loss function, we want to construct a procedure which mimics, at the best possible rate, the best procedure in F. This fastest rate is called the optimal rate of aggregation. Considering a continuous scale of loss functions with various types of convexity, we prove that optimal rates of aggregation can be either ((log M)/n)...
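To make the truncated rate concrete, here is a hedged LaTeX sketch of the standard aggregation framework the abstract refers to; the notation ψ_n(M) for the rate and the constant C are mine, and the exact exponent in the paper depends on the convexity of the loss:

```latex
% Aggregation over a finite class F = {f_1, ..., f_M} of procedures:
% an optimal aggregate \hat{f} built from n observations satisfies an
% oracle inequality of the form
\[
  \mathbb{E}\, R(\hat{f}) \;\le\; \min_{f \in F} R(f) \;+\; C\, \psi_n(M),
\]
% where \psi_n(M) is the optimal rate of aggregation. Depending on the
% convexity of the loss, \psi_n(M) classically ranges between
\[
  \Big( \frac{\log M}{n} \Big)^{1/2}
  \qquad \text{and} \qquad
  \frac{\log M}{n}.
\]
```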


Stability Properties of Empirical Risk Minimization over Donsker Classes

We study stability properties of algorithms which minimize (or almost-minimize) empirical error over Donsker classes of functions. We show that, as the number n of samples grows, the L2-diameter of the set of almost-minimizers of empirical error with tolerance ξ(n) = o(n^{-1/2}) converges to zero in probability. Hence, even in the case of multiple minimizers of expected error, as n increases it becomes less and less likely that adding a sample (or a number of samples) to the training set will result in a large jump to a new hypothesis. Moreover, under some assumptions on the entropy of the class, along with an assumption of Komlós-Major-Tusnády type, we derive a po...
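In notation, a hedged restatement of the quoted result (the symbols M_n, ξ, and ℓ are my labels, not the paper's):

```latex
% xi(n)-almost-minimizers of empirical risk over a class F,
% given samples Z_1, ..., Z_n and a loss ell:
\[
  \mathcal{M}_n \;=\; \Big\{ f \in F :\;
    \frac{1}{n} \sum_{i=1}^{n} \ell(f, Z_i)
    \;\le\; \inf_{g \in F} \frac{1}{n} \sum_{i=1}^{n} \ell(g, Z_i) + \xi(n)
  \Big\}.
\]
% The stability statement: if F is Donsker and xi(n) = o(n^{-1/2}), then
\[
  \operatorname{diam}_{L_2}\!\big( \mathcal{M}_n \big)
  \;\xrightarrow{\;P\;}\; 0
  \qquad \text{as } n \to \infty .
\]
```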


Some Properties of Empirical Risk Minimization Over Donsker Classes

We study properties of algorithms which minimize (or almost-minimize) empirical error over a Donsker class of functions. We show that the L2-diameter of the set of almost-minimizers converges to zero in probability. Therefore, as the number of samples grows, it becomes increasingly unlikely that adding a point (or a number of points) to the training set will result in a large jump (in L2 distance) t...
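The shrinking-diameter phenomenon is easy to see numerically. A minimal Python sketch under assumed toy choices (a one-parameter linear class, squared loss, tolerance ξ(n) = n^{-0.6} = o(n^{-1/2}); the function name and all constants are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(-1.0, 1.0, 2001)  # parameter grid for the class {f_theta(x) = theta * x}

def almost_minimizer_diameter(n):
    # data from y = 0.3 x + noise; empirical risk is P_n (y - theta x)^2
    x = rng.normal(size=n)
    y = 0.3 * x + rng.normal(size=n)
    # evaluate the empirical risk on the whole grid via sufficient statistics
    risks = (y**2).mean() - 2 * grid * (x * y).mean() + grid**2 * (x**2).mean()
    xi_n = n ** -0.6  # tolerance o(n^{-1/2})
    almost = grid[risks <= risks.min() + xi_n]  # set of almost-minimizers
    return almost.max() - almost.min()          # diameter of that set

for n in (100, 1_000, 10_000, 100_000):
    print(f"n={n:>7}  diameter ~ {almost_minimizer_diameter(n):.4f}")
```

The printed diameters decrease as n grows, matching the statement that the set of almost-minimizers collapses in probability.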


On Adaptivity Of BlockShrink Wavelet Estimator Over Besov Spaces

Cai (1996b) proposed a wavelet method, BlockShrink, for estimating regression functions of unknown smoothness from noisy data by thresholding empirical wavelet coefficients in groups rather than individually. BlockShrink utilizes the information about neighboring wavelet coefficients and thus increases the estimation accuracy of the wavelet coefficients. In the present paper, we offer insights int...
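A sketch of group-wise thresholding in the spirit of BlockShrink, using PyWavelets; the block length of about log n, the threshold constant lam, and the helper name block_shrink are illustrative choices of mine, not Cai's exact calibration:

```python
import numpy as np
import pywt  # PyWavelets

def block_shrink(y, wavelet="db4", sigma=None, lam=4.505):
    """Keep or kill empirical wavelet coefficients in blocks, by block energy."""
    n = len(y)
    coeffs = pywt.wavedec(y, wavelet)
    if sigma is None:
        # standard MAD estimate of the noise level from the finest scale
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    L = max(1, int(np.log(n)))  # block length (illustrative choice)
    out = [coeffs[0]]           # coarse coefficients are left untouched
    for d in coeffs[1:]:
        d = d.copy()
        for start in range(0, len(d), L):
            blk = d[start:start + L]
            # kill the whole block if its energy falls below the threshold
            if (blk ** 2).sum() <= lam * len(blk) * sigma ** 2:
                d[start:start + L] = 0.0
        out.append(d)
    return pywt.waverec(out, wavelet)[:n]

# usage: denoise a Doppler-like test signal
t = np.linspace(0.0, 1.0, 1024)
f = np.sqrt(t * (1 - t)) * np.sin(2.1 * np.pi / (t + 0.05))
y = f + 0.1 * np.random.default_rng(1).normal(size=t.size)
fhat = block_shrink(y)
print("RMSE noisy:   ", np.sqrt(((y - f) ** 2).mean()))
print("RMSE denoised:", np.sqrt(((fhat - f) ** 2).mean()))
```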


mixup: BEYOND EMPIRICAL RISK MINIMIZATION

Large deep neural networks are powerful, but exhibit undesirable behaviors such as memorization and sensitivity to adversarial examples. In this work, we propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels. By doing so, mixup regularizes the neural network to favor simple lin...
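The construction is simple enough to state in code. A minimal numpy sketch of the mixup rule as described in the abstract (λ drawn from Beta(α, α), pairing by a random permutation of the batch; the function name is mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup_batch(x, y, alpha=0.2):
    """Virtual examples as convex combinations of random pairs of
    inputs and of their one-hot labels, with lambda ~ Beta(alpha, alpha)."""
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))  # random pairing partner for each example
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y + (1.0 - lam) * y[perm]
    return x_mix, y_mix

# usage: a batch of 4 two-feature inputs with one-hot labels over 3 classes
x = rng.normal(size=(4, 2)).astype(np.float32)
y = np.eye(3, dtype=np.float32)[rng.integers(0, 3, size=4)]
x_mix, y_mix = mixup_batch(x, y)
print(x_mix)
print(y_mix)
```

Training on (x_mix, y_mix) instead of (x, y) is what pushes the network toward linear behavior between training examples.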



Journal

Journal title: Electronic Journal of Statistics

Year: 2009

ISSN: 1935-7524

DOI: 10.1214/08-ejs316