Berry–Esseen bounds for multivariate nonlinear statistics with applications to M-estimators and stochastic gradient descent algorithms

Authors

Abstract

We establish a Berry–Esseen bound for general multivariate nonlinear statistics by developing a new multivariate-type randomized concentration inequality. The bound is best possible for many known statistics. As applications, Berry–Esseen bounds for M-estimators and averaged stochastic gradient descent algorithms are obtained.
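For orientation, a bound of this type controls the distance between the law of a standardized nonlinear statistic and a multivariate normal, uniformly over convex sets. A generic sketch of the shape of such a bound (illustrative only; the paper's statement carries explicit constants and remainder terms for the nonlinear part) is

    \sup_{A \in \mathcal{A}} \bigl| \mathbb{P}(T_n \in A) - \mathbb{P}(Z \in A) \bigr| \;\le\; \frac{C}{\sqrt{n}},

where T_n is the standardized statistic, Z \sim N(0, I_d), and \mathcal{A} is the class of convex subsets of \mathbb{R}^d. The n^{-1/2} rate is what "best possible for many known statistics" refers to.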

Similar Articles

Convergence Analysis of Gradient Descent Stochastic Algorithms

This paper proves convergence of a sample-path based stochastic gradient-descent algorithm for optimizing expected-value performance measures in discrete event systems. The algorithm uses increasing precision at successive iterations, and it moves against the direction of a generalized gradient of the computed sample performance function. Two convergence results are established: one, for the ca...
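As a rough illustration of the "increasing precision" idea described above, here is a minimal sketch in which the per-iteration sample size grows so the gradient estimates become more accurate over time; the interface, sample-size schedule, and step-size schedule are assumptions for illustration, not the paper's algorithm:

    import numpy as np

    def sample_path_sgd(grad_estimate, theta0, n_iters=200, lr=0.5):
        # grad_estimate(theta, m) should return an average of m i.i.d.
        # noisy gradients of the expected-value objective at theta.
        theta = np.asarray(theta0, dtype=float)
        for k in range(1, n_iters + 1):
            m = k * k                      # increasing precision: larger samples later
            g = grad_estimate(theta, m)    # Monte Carlo estimate of the gradient
            theta = theta - (lr / k) * g   # diminishing step, move against the gradient
        return theta

    # Toy usage: minimize E[(theta - X)^2]/2 with X ~ N(1, 1); iterates move toward theta = 1.
    rng = np.random.default_rng(0)
    grad = lambda th, m: np.mean(th - rng.normal(1.0, 1.0, size=m))
    print(sample_path_sgd(grad, theta0=0.0))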

Intrinsic Geometry of Stochastic Gradient Descent Algorithms

We consider the intrinsic geometry of stochastic gradient descent (SG) algorithms. We show how to derive SG algorithms that fully respect an underlying geometry which can be induced by either prior knowledge in the form of a preferential structure or a generative model via the Fisher information metric. We show that using the geometrically motivated update and the “correct” loss function, the i...
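A minimal sketch of the kind of geometry-respecting update in question, where the stochastic gradient is preconditioned by an estimated Fisher information matrix; the damping term and the interface are assumptions for illustration:

    import numpy as np

    def natural_gradient_step(theta, grad, fisher, lr=0.1, damping=1e-6):
        # Natural-gradient update: theta <- theta - lr * F^{-1} grad.
        # With F = I this reduces to the plain SGD step.
        F = fisher + damping * np.eye(len(theta))  # damp for numerical invertibility
        return theta - lr * np.linalg.solve(F, grad)

Taking F to be the Fisher information makes the step approximately invariant under smooth reparameterizations, which is the sense in which such an update respects the intrinsic geometry.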

Descent Representations and Multivariate Statistics

Combinatorial identities on Weyl groups of types A and B are derived from special bases of the corresponding coinvariant algebras. Using the Garsia-Stanton descent basis of the coinvariant algebra of type A we give a new construction of the Solomon descent representations. An extension of the descent basis to type B, using new multivariate statistics on the group, yields a refinement of the des...
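For concreteness, the basic type-A descent statistic underlying the Garsia-Stanton basis can be computed directly; a small illustrative snippet, not part of the paper:

    def descent_set(perm):
        # Des(sigma) = { i : sigma(i) > sigma(i+1) }, positions 1-indexed,
        # for a permutation given in one-line notation.
        return {i + 1 for i in range(len(perm) - 1) if perm[i] > perm[i + 1]}

    def major_index(perm):
        # maj(sigma) = sum of the descent positions, a classical companion statistic.
        return sum(descent_set(perm))

    print(descent_set([3, 1, 4, 2]))  # {1, 3}
    print(major_index([3, 1, 4, 2]))  # 4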

Stochastic Gradient Descent with GPGPU

We show how to optimize a Support Vector Machine and a predictor for Collaborative Filtering with Stochastic Gradient Descent on the GPU, achieving speedups of 1.66x to 6x over a CPU-based implementation. The reference implementations are the Support Vector Machine by Bottou and the BRISMF predictor from the Netflix Prize winning team. Our main idea is to create a hash function of ...
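The per-rating update being parallelized is the standard matrix-factorization SGD step used by BRISMF-style predictors; this is a CPU sketch only, and the paper's GPU hashing scheme is not reproduced here:

    import numpy as np

    def mf_sgd_step(P, Q, u, i, r, lr=0.01, reg=0.02):
        # One SGD update on user factors P[u] and item factors Q[i]
        # for an observed rating r = r_{ui}.
        pu, qi = P[u].copy(), Q[i].copy()
        e = r - pu @ qi                     # prediction error
        P[u] += lr * (e * qi - reg * pu)    # gradient step on user factors
        Q[i] += lr * (e * pu - reg * qi)    # gradient step on item factors

When many such updates run concurrently on a GPU, threads touching the same row of P or Q conflict, which is presumably the kind of collision the paper's hash function is designed to manage.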

Generalization Bounds for Randomized Learning with Application to Stochastic Gradient Descent

Randomized algorithms are central to modern machine learning. In the presence of massive datasets, researchers often turn to stochastic optimization to solve learning problems. Of particular interest is stochastic gradient descent (SGD), a first-order method that approximates the learning objective and gradient by a random point estimate. A classical question in learning theory is, if a randomi...
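The "random point estimate" referred to above is the defining feature of SGD: each step uses the gradient at one randomly drawn example in place of the full-data gradient. A generic sketch with a hypothetical grad_fn, not the paper's construction:

    import numpy as np

    def sgd(data, grad_fn, theta0, lr=0.01, epochs=5, seed=0):
        # grad_fn(theta, x) returns the loss gradient at a single example x;
        # a uniformly random draw gives an unbiased estimate of the full gradient.
        rng = np.random.default_rng(seed)
        theta = np.asarray(theta0, dtype=float)
        for _ in range(epochs):
            for j in rng.permutation(len(data)):
                theta = theta - lr * grad_fn(theta, data[j])
        return theta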

Journal

Journal title: Bernoulli

Year: 2022

ISSN: 1573-9759, 1350-7265

DOI: https://doi.org/10.3150/21-bej1336