Randomized Algorithms 2017 A – Lecture 7: Dimension Reduction in ℓ2 *

Author

  • Robert Krauthgamer

Abstract

Using the main lemma: Let L = G/√k, and recall we defined y_i = Lx_i. For every i < j, apply the lemma to x_i − x_j; then with probability at least 1 − 2/n³,

∥y_i − y_j∥ = ∥L(x_i − x_j)∥ = ∥G(x_i − x_j)∥/√k ∈ (1 ± ε)∥x_i − x_j∥.

* These notes summarize the material covered in class, usually skipping proofs, details, examples and so forth, and possibly adding some remarks or pointers. The exercises are for self-practice and need not be handed in. In the interest of brevity, most references and credits were omitted.
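The map above can be sketched in a few lines of NumPy. This is a minimal illustration of the Gaussian random projection L = G/√k, not the notes' own code; the constant 8 in the choice of k and all variable names beyond L, G, k, x, y are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def jl_map(X, k, rng=rng):
    """Project the rows of X down to k dimensions via L = G / sqrt(k),
    where G has i.i.d. standard Gaussian entries."""
    d = X.shape[1]
    G = rng.standard_normal((k, d))
    L = G / np.sqrt(k)
    return X @ L.T

# n points in d dimensions; k is taken on the order of log(n) / eps^2
# (the constant 8 here is an arbitrary illustrative choice)
n, d, eps = 500, 1000, 0.5
k = int(np.ceil(8 * np.log(n) / eps**2))
X = rng.standard_normal((n, d))
Y = jl_map(X, k)

# distortion of one pairwise distance: should lie in (1 ± eps)
i, j = 0, 1
orig = np.linalg.norm(X[i] - X[j])
proj = np.linalg.norm(Y[i] - Y[j])
ratio = proj / orig
```

Applying the lemma to all n(n − 1)/2 pairs and taking a union bound gives the claimed guarantee for every pair simultaneously with probability at least 1 − 1/n.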


Similar resources

Data Sparse Matrix Computation - Lecture 11

2 Randomized algorithms
2.1 Randomized low-rank factorization
2.2 How to find such a Q
2.3 How to construct Q with randomness
2.4 An adaptive randomized range finder algorithm
2.5 Example of implementation of the adaptive range approximation method ...


Valiant Metric Embeddings, Dimension Reduction

In the previous lecture notes, we saw that any metric (X, d) with |X| = n can be embedded into R^{O(log² n)} under the ℓ1 metric (in fact, the same embedding works for any ℓp metric), with distortion O(log n). Here, we describe an extremely useful approach for reducing the dimensionality of a Euclidean (ℓ2) metric, while incurring very little distortion. Such dimension reduction is useful for a nu...


On the Suboptimality of Proximal Gradient Descent for $\ell^{0}$ Sparse Approximation

We study the proximal gradient descent (PGD) method for the ℓ0 sparse approximation problem, as well as its accelerated optimization with randomized algorithms, in this paper. We first offer a theoretical analysis of PGD showing the bounded gap between the sub-optimal solution found by PGD and the globally optimal solution for the ℓ0 sparse approximation problem under conditions weaker than the Restricted Isometry...


Developing a Filter-Wrapper Feature Selection Method and its Application in Dimension Reduction of Gene Expression

Nowadays, the increasing volume of data and number of attributes in datasets has reduced the accuracy of learning algorithms and increased their computational complexity. One dimensionality reduction approach is feature selection, which is done through filtering and wrapping. Wrapper methods are more accurate than filter methods, while filter methods run faster and carry a lower computational burden. With ...


Adaptive Randomized Dimension Reduction on Massive Data

The scalability of statistical estimators is of increasing importance in modern applications. One approach to implementing scalable algorithms is to compress data into a low dimensional latent space using dimension reduction methods. In this paper we develop an approach for dimension reduction that exploits the assumption of low rank structure in high dimensional data to gain both computational...



Publication date: 2016