Relative Error Tensor Low Rank Approximation

Authors

  • Zhao Song
  • David P. Woodruff
  • Peilin Zhong
Abstract

We consider relative error low rank approximation of tensors with respect to the Frobenius norm. Namely, given an order-q tensor A ∈ ℝ^{∏_{i=1}^q n_i}, output a rank-k tensor B for which ‖A − B‖F ≤ (1 + ε)·OPT, where OPT = inf_{rank-k A′} ‖A − A′‖F. Despite much success in obtaining relative error low rank approximations for matrices, no such results were known for tensors. One structural issue is that there may be no rank-k tensor A_k achieving the above infimum. Another, computational, issue is that an efficient relative error low rank approximation algorithm for tensors would allow one to compute the rank of a tensor, which is NP-hard. We bypass these two issues via (1) bicriteria and (2) parameterized complexity solutions:

1. We give an algorithm which outputs a rank-k′ tensor B, with k′ = O((k/ε)^{q−1}), for which ‖A − B‖F ≤ (1 + ε)·OPT in nnz(A) + n · poly(k/ε) time in the real RAM model, whenever either A_k exists or OPT > 0. Here nnz(A) denotes the number of non-zero entries in A. If both A_k does not exist and OPT = 0, then B instead satisfies ‖A − B‖F < γ, where γ is any positive, arbitrarily small function of n.

2. We give an algorithm for any δ > 0 which outputs a rank-k tensor B for which ‖A − B‖F ≤ (1 + ε)·OPT and runs in (nnz(A) + n · poly(k/ε) + exp(k/ε)) · n^δ time in the unit cost RAM model, whenever OPT > 2^{−O(n^δ)} and there is a rank-k tensor B∗ = ∑_{i=1}^k u_i ⊗ v_i ⊗ w_i for which ‖A − B∗‖F ≤ (1 + ε/2)·OPT and ‖u_i‖_2, ‖v_i‖_2, ‖w_i‖_2 ≤ 2^{O(n^δ)}. If OPT ≤ 2^{−Ω(n^δ)}, then B instead satisfies ‖A − B‖F ≤ 2^{−Ω(n^δ)}.

Our first result is polynomial time, and in fact input sparsity time, in n, k, and 1/ε, for any k ≥ 1 and any 0 < ε < 1, while our second result is fixed parameter tractable in k and 1/ε. For outputting a rank-k tensor, or even a bicriteria solution with rank Ck for a certain constant C > 1, we show a 2^{Ω(k^{1−o(1)})} time lower bound under the Exponential Time Hypothesis.

Our results are based on an "iterative existential argument", and give the first relative error low rank approximations for tensors for a large number of error measures for which nothing was known. In particular, we give the first relative error approximation algorithms on tensors for: column, row, and tube subset selection; entrywise ℓp-low rank approximation for 1 ≤ p < 2; low rank approximation with respect to the sum of Euclidean norms of faces or tubes; weighted low rank approximation; and low rank approximation in distributed and streaming models. We also obtain several new results for matrices, such as nnz(A)-time CUR decompositions, improving the previous nnz(A) log n-time CUR decompositions, which may be of independent interest.

∗ Work done while visiting IBM Almaden, and supported in part by a UTCS TAship (CS361 Spring 17, Introduction to Computer Security).
† Supported in part by the Simons Foundation and NSF CCF-1617955.

arXiv:1704.08246v1 [cs.DS] 26 Apr 2017
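To make the relative-error objective concrete, here is a minimal NumPy sketch. It is not the paper's sketching-based algorithm, just a plain HOSVD-style baseline (unfold and hosvd_truncate are my own helpers) that truncates each mode of an order-3 tensor to rank k and reports ‖A − B‖F/‖A‖F. It is bicriteria only in spirit: mode-wise truncation bounds the multilinear rank by k, but the CP rank of B can exceed k.

```python
# HOSVD-style baseline (NOT the paper's algorithm): truncate each mode of an
# order-3 tensor to rank k and measure the relative Frobenius error.
import numpy as np

def unfold(A, mode):
    """Matricize tensor A along `mode` (rows indexed by that mode)."""
    return np.moveaxis(A, mode, 0).reshape(A.shape[mode], -1)

def hosvd_truncate(A, k):
    """Project each mode of A onto its top-k left singular vectors."""
    B = A
    for mode in range(A.ndim):
        U, _, _ = np.linalg.svd(unfold(A, mode), full_matrices=False)
        Uk = U[:, :k]                       # top-k mode subspace of A
        B = np.moveaxis(                    # project B's mode onto span(Uk)
            np.tensordot(Uk @ Uk.T, np.moveaxis(B, mode, 0), axes=([1], [0])),
            0, mode)
    return B

rng = np.random.default_rng(0)
n, k = 30, 5
# Planted rank-k tensor plus noise, so OPT is small but nonzero.
A = np.einsum('ir,jr,kr->ijk', *(rng.standard_normal((n, k)) for _ in range(3)))
A += 1e-3 * rng.standard_normal((n, n, n))
B = hosvd_truncate(A, k)
print(np.linalg.norm(A - B) / np.linalg.norm(A))   # relative Frobenius error
```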


Similar articles

Existence and Computation of a Low Kronecker-Rank Approximant to the Solution of a Tensor System with Tensor Right-Hand Side

In this paper we construct an approximation to the solution x of a linear system of equations Ax = b of tensor product structure as it typically arises for finite element and finite difference discretisations of partial differential operators on tensor grids. For a right-hand side b of tensor product structure we can prove that the solution x can be approximated by a sum of O(log(ε)²) tensor pr...
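As a toy illustration of the tensor-product structure this result exploits (my own example, not the article's approximation scheme): when both A and b factor exactly, the large Kronecker system decouples into small per-factor solves.

```python
# If A = A1 (x) A2 and b = b1 (x) b2, then x = (A1^{-1} b1) (x) (A2^{-1} b2),
# so the n1*n2-sized system never has to be assembled.
import numpy as np

rng = np.random.default_rng(1)
n1, n2 = 50, 60
A1 = rng.standard_normal((n1, n1)) + n1 * np.eye(n1)   # well-conditioned factors
A2 = rng.standard_normal((n2, n2)) + n2 * np.eye(n2)
b1, b2 = rng.standard_normal(n1), rng.standard_normal(n2)

x = np.kron(np.linalg.solve(A1, b1), np.linalg.solve(A2, b2))  # factorized solve

# Verify against the assembled Kronecker system (feasible only at small size).
residual = np.kron(A1, A2) @ x - np.kron(b1, b2)
print(np.linalg.norm(residual))   # ~1e-12: same solution, far cheaper per mode
```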


Beyond Low Rank: A Data-Adaptive Tensor Completion Method

Low rank tensor representation underpins much of recent progress in tensor completion. In real applications, however, this approach is confronted with two challenging problems, namely (1) tensor rank determination; (2) handling real tensor data which only approximately fulfils the low-rank requirement. To address these two issues, we develop a data-adaptive tensor completion model which explici...


Efficient low-rank approximation of the stochastic Galerkin matrix in tensor formats

In this article we describe an efficient approximation of the stochastic Galerkin matrix which stems from a stationary diffusion equation. The uncertain permeability coefficient is assumed to be a log-normal random field with given covariance and mean functions. The approximation is done in the canonical tensor format and then compared numerically with the tensor train and hierarchical tensor f...
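As a rough sketch of what the canonical tensor format buys here (the helper apply_canonical is hypothetical, not from the article): an operator stored as a short sum of Kronecker products can be applied to a vector without ever assembling the full matrix.

```python
# Canonical operator format: K = sum_r kron(A_r, B_r) is stored as the factor
# lists alone, and K @ vec(X) is computed as sum_r A_r X B_r^T.
import numpy as np

def apply_canonical(A_factors, B_factors, X):
    """Apply K = sum_r kron(A_r, B_r) to vec(X), returned in matrix form."""
    return sum(Ar @ X @ Br.T for Ar, Br in zip(A_factors, B_factors))

rng = np.random.default_rng(2)
n, m, R = 40, 30, 3
A_factors = [rng.standard_normal((n, n)) for _ in range(R)]
B_factors = [rng.standard_normal((m, m)) for _ in range(R)]
X = rng.standard_normal((n, m))

Y = apply_canonical(A_factors, B_factors, X)

# Check against the assembled operator, using (A (x) B) vec(X) = vec(A X B^T)
# with vec taken row-major (C order), matching X.ravel().
K = sum(np.kron(Ar, Br) for Ar, Br in zip(A_factors, B_factors))
print(np.linalg.norm(K @ X.ravel() - Y.ravel()))   # ~1e-12
```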


A Constructive Algorithm for Decomposing a Tensor into a Finite Sum of Orthonormal Rank-1 Terms

We propose a novel and constructive algorithm that decomposes an arbitrary tensor into a finite sum of orthonormal rank-1 outer factors. The algorithm, named TTr1SVD, works by converting the tensor into a rank-1 tensor train (TT) series via singular value decomposition (SVD). TTr1SVD naturally generalizes the SVD to the tensor regime and delivers elegant notions of tensor rank and err...
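Below is a rough reconstruction of the TTr1 idea from this abstract alone (the function ttr1_terms is my naming, not the authors' code): SVD the first unfolding, then SVD each reshaped right singular vector, producing a finite sum of rank-1 outer products that reassembles the tensor exactly.

```python
# Recursive rank-1 decomposition of an order-3 tensor via two levels of SVD.
import numpy as np

def ttr1_terms(A):
    """Decompose an order-3 tensor into rank-1 terms (weight, u, v, w)."""
    n1, n2, n3 = A.shape
    terms = []
    U, S, Vt = np.linalg.svd(A.reshape(n1, n2 * n3), full_matrices=False)
    for i in range(len(S)):
        # Each right singular vector is itself an n2 x n3 matrix; SVD it too.
        Ui, Si, Vti = np.linalg.svd(Vt[i].reshape(n2, n3), full_matrices=False)
        for j in range(len(Si)):
            terms.append((S[i] * Si[j], U[:, i], Ui[:, j], Vti[j]))
    return terms

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 5, 6))
B = sum(w * np.einsum('i,j,k->ijk', u, v, t) for w, u, v, t in ttr1_terms(A))
print(np.linalg.norm(A - B))   # ~1e-13: exact reconstruction
```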


Discretized Dynamical Low-Rank Approximation in the Presence of Small Singular Values

Low-rank approximations to large time-dependent matrices and tensors are the subject of this paper. These matrices and tensors are either given explicitly or are the unknown solutions of matrix and tensor differential equations. Based on splitting the orthogonal projection onto the tangent space of the low-rank manifold, novel time integrators for obtaining approximations by low-rank matrices a...
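For flavor, here is an assumed sketch of one step of the standard matrix projector-splitting (KSL) integrator in the Lubich-Oseledets style, which this line of work builds on; it is my reconstruction, not code from the paper. The substeps update the factors of Y = U S Vᵀ directly and never invert S, which is why tiny singular values cause no blow-up.

```python
# One first-order projector-splitting (KSL) step for dY/dt = F(t).
import numpy as np

def ksl_step(U, S, Vt, dA):
    """Advance the rank-r factorization Y = U @ S @ Vt by the increment dA."""
    V = Vt.T
    K = U @ S + dA @ V               # K-step: update U*S with V frozen
    U1, S_hat = np.linalg.qr(K)
    S_tilde = S_hat - U1.T @ dA @ V  # S-step: runs backwards in time
    L = V @ S_tilde.T + dA.T @ U1    # L-step: update V*S^T with U1 frozen
    V1, S1t = np.linalg.qr(L)
    return U1, S1t.T, V1.T

# Toy use: a rank-r matrix changed by an increment within the same factor spaces.
rng = np.random.default_rng(4)
n, r = 40, 4
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
S = np.diag([1.0, 1e-2, 1e-4, 1e-8])     # tiny singular values on purpose
Y0 = U @ S @ V.T
dA = 0.1 * (U @ rng.standard_normal((r, r)) @ V.T)
U1, S1, V1t = ksl_step(U, S, V.T, dA)
print(np.linalg.norm(U1 @ S1 @ V1t - (Y0 + dA)))   # ~1e-15: exact for rank-r increments
```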



Journal:
  • CoRR

Volume: abs/1704.08246  Issue: —

Pages: —

Publication date: 2017