Guaranteed Rank Minimization via Singular Value Projection: Supplementary Material
Abstract
We first give a proof of Lemma 2.1, which bounds the error of the $(t+1)$-st iterate, $\psi(X^{t+1})$, in terms of the error incurred by the $t$-th iterate and the optimal solution.

Proof of Lemma 2.1. Recall that $\psi(X) = \frac{1}{2}\|A(X) - b\|_2^2$. Since $\psi(\cdot)$ is a quadratic function, we have
$$
\psi(X^{t+1}) - \psi(X^t) = \langle \nabla\psi(X^t),\, X^{t+1} - X^t \rangle + \frac{1}{2}\|A(X^{t+1} - X^t)\|_2^2
\;\le\; \langle A^T(A(X^t) - b),\, X^{t+1} - X^t \rangle + \frac{1}{2}(1 + \delta_{2k})\,\|X^{t+1} - X^t\|_F^2, \tag{0.1}
$$
where the inequality follows from RIP applied to the matrix $X^{t+1} - X^t$, which has rank at most $2k$.

Let $Y^{t+1} = X^t - \frac{1}{1+\delta_{2k}}\,A^T(A(X^t) - b)$ and
$$
f_t(X) = \langle A^T(A(X^t) - b),\, X - X^t \rangle + \frac{1}{2}(1 + \delta_{2k})\,\|X - X^t\|_F^2.
$$
Completing the square,
$$
f_t(X) = \frac{1}{2}(1+\delta_{2k})\left[\|X - X^t\|_F^2 + 2\left\langle \frac{A^T(A(X^t) - b)}{1+\delta_{2k}},\, X - X^t \right\rangle\right]
= \frac{1}{2}(1+\delta_{2k})\,\|X - Y^{t+1}\|_F^2 - \frac{1}{2(1+\delta_{2k})}\,\|A^T(A(X^t) - b)\|_F^2.
$$
Thus, by definition, $P_k(Y^{t+1}) = X^{t+1}$ is the minimizer of $f_t(X)$ over all matrices $X \in C(k)$ (of rank at most $k$). In particular, $f_t(X^{t+1}) \le f_t(X^*)$ and
$$
\psi(X^{t+1}) - \psi(X^t) \;\le\; f_t(X^{t+1}) \;\le\; f_t(X^*)
= \langle A^T(A(X^t) - b),\, X^* - X^t \rangle + \frac{1}{2}(1+\delta_{2k})\,\|X^* - X^t\|_F^2
$$
$$
\le\; \langle A^T(A(X^t) - b),\, X^* - X^t \rangle + \frac{1}{2}\cdot\frac{1+\delta_{2k}}{1-\delta_{2k}}\,\|A(X^* - X^t)\|_2^2 \tag{0.2}
$$
$$
=\; \psi(X^*) - \psi(X^t) + \frac{\delta_{2k}}{1-\delta_{2k}}\,\|A(X^* - X^t)\|_2^2,
$$
where inequality (0.2) follows from RIP applied to $X^* - X^t$. We now prove Theorem …
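The update analyzed above is a gradient step on $\psi$ with step size $1/(1+\delta_{2k})$, followed by the rank-$k$ projection $P_k$ computed via truncated SVD. A minimal sketch in Python, assuming the measurement operator is given as a pair of callables `A`/`At` (the function names and the matrix-completion usage below are illustrative assumptions, not from the paper):

```python
import numpy as np

def rank_k_projection(Y, k):
    """P_k(Y): best rank-k approximation of Y via truncated SVD (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

def svp_step(X_t, A, At, b, k, delta_2k=0.0):
    """One SVP iteration: X^{t+1} = P_k(Y^{t+1}),
    where Y^{t+1} = X^t - (1/(1+delta_2k)) * A^T(A(X^t) - b)."""
    step = 1.0 / (1.0 + delta_2k)
    Y_next = X_t - step * At(A(X_t) - b)   # gradient step on psi
    return rank_k_projection(Y_next, k)     # project back onto rank-k matrices
```

For example, on a matrix-completion instance one would take `A` to sample the observed entries and `At` to scatter a residual vector back onto those positions (the adjoint), then iterate `svp_step` from the zero matrix.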