Search results for: matrix norm
Number of results: 402509
Here, we will provide a spectral norm bound for the error of the approximation constructed by the BasicMatrixMultiplication algorithm. Recall that, given as input an m × n matrix A and an n × p matrix B, this algorithm randomly samples c columns of A and the corresponding rows of B to construct an m × c matrix C and a c × p matrix R such that CR ≈ AB, in the sense that some matrix norm ‖AB − CR‖ is...
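A minimal sketch of this kind of sampling-based approximate multiplication, assuming the common choice of sampling probabilities proportional to ‖A(:,i)‖·‖B(i,:)‖ (the snippet does not state which probabilities the algorithm uses); the function name and the 1/√(c·p_i) rescaling are illustrative, not the paper's exact routine:

```python
import numpy as np

def basic_matrix_multiplication(A, B, c, rng=None):
    """Randomized sketch of A @ B by sampling c column/row pairs.

    Sampling probabilities proportional to ||A[:, i]|| * ||B[i, :]|| are
    assumed here (a common choice); the snippet does not specify them.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[1]
    p = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = p / p.sum()
    idx = rng.choice(n, size=c, replace=True, p=p)
    # Rescale the sampled columns/rows so that E[C @ R] = A @ B.
    C = A[:, idx] / np.sqrt(c * p[idx])            # m x c
    R = B[idx, :] / np.sqrt(c * p[idx])[:, None]   # c x p
    return C, R

# Compare the sketch with the exact product in spectral norm.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
B = rng.standard_normal((200, 30))
C, R = basic_matrix_multiplication(A, B, c=80, rng=rng)
err = np.linalg.norm(A @ B - C @ R, 2) / np.linalg.norm(A @ B, 2)
print(f"relative spectral-norm error: {err:.3f}")
```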
Nomenclature: ⊙ Hadamard product; ‖·‖2, ‖·‖F spectral norm, Frobenius norm; 0_{l×m}, 0_l the l × m and l × l zero matrices; I_l the l × l identity matrix; 1_{l×m} the l × m ones matrix
The k-support norm has successfully been applied to sparse vector prediction problems. We observe that it belongs to a wider class of norms, which we call the box-norms. Within this framework we derive an efficient algorithm to compute the proximity operator of the squared norm, improving upon the original method for the k-support norm. We extend the norms from the vector to the matrix setting ...
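The box-norm prox algorithm announced in this abstract cannot be reconstructed from the snippet; as a small illustration of the k-support norm family, here is the standard closed form of its dual norm, the l2 norm of the k largest-magnitude entries:

```python
import numpy as np

def k_support_dual_norm(u, k):
    """Dual of the k-support norm: the l2 norm of the k largest |u_i|.

    This closed form is standard; the box-norm proximity algorithm from
    the paper is not reproduced here.
    """
    top_k = np.sort(np.abs(u))[-k:]
    return np.linalg.norm(top_k)

u = np.array([3.0, -1.0, 0.5, 4.0, -2.0])
print(k_support_dual_norm(u, k=2))  # sqrt(4^2 + 3^2) = 5.0
```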
Recently, the l2,1 matrix norm has been widely applied in many areas such as computer vision, pattern recognition, and biological studies. As an extension of the l1 vector norm, the mixed l2,1 matrix norm is often used to find jointly sparse solutions. Moreover, an efficient iterative algorithm has been designed to solve l2,1-norm-involved minimizations. Computational studies have shown th...
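A short sketch of the norm itself, assuming the row-wise convention common in joint-sparsity work (some papers sum over columns instead):

```python
import numpy as np

def l21_norm(X):
    """Mixed l2,1 norm: sum of the Euclidean norms of the rows of X.

    Penalizing it encourages whole rows of X to be zero (joint sparsity).
    """
    return np.linalg.norm(X, axis=1).sum()

X = np.array([[3.0, 4.0],
              [0.0, 0.0],
              [1.0, 0.0]])
print(l21_norm(X))  # 5.0 + 0.0 + 1.0 = 6.0
```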
In this talk we deal with more precise estimates for the matrix versions of the Young, Heinz, and Hölder inequalities. First we give an improvement of the matrix Heinz inequality for the case of the Hilbert-Schmidt norm. Then, we refine matrix Young-type inequalities for the case of the Hilbert-Schmidt norm, which hold under certain assumptions on the positive semidefinite matrices appearing therein. Fin...
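As a hedged numerical illustration (not the refinements announced in the talk), the classical Heinz inequality ‖A^ν X B^{1-ν} + A^{1-ν} X B^ν‖_F ≤ ‖AX + XB‖_F for positive semidefinite A, B can be checked in the Hilbert-Schmidt (Frobenius) norm as follows:

```python
import numpy as np

def psd_power(M, t):
    """M**t for a symmetric positive semidefinite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    w = np.clip(w, 0.0, None)
    return (V * w**t) @ V.T

rng = np.random.default_rng(1)
n = 5
G1, G2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A, B = G1 @ G1.T, G2 @ G2.T          # random positive semidefinite matrices
X = rng.standard_normal((n, n))

rhs = np.linalg.norm(A @ X + X @ B, "fro")
for nu in np.linspace(0.0, 1.0, 11):
    lhs = np.linalg.norm(psd_power(A, nu) @ X @ psd_power(B, 1 - nu)
                         + psd_power(A, 1 - nu) @ X @ psd_power(B, nu), "fro")
    assert lhs <= rhs + 1e-9, (nu, lhs, rhs)
print("Heinz inequality holds numerically for all tested nu")
```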
In this paper, the fuzzy dual matrix system AX + B = CX + D, in which A, B, C, D, and X are LR fuzzy matrices, is studied. First, we solve the 1-cut system in order to find the core of the LR fuzzy solution; then, to obtain the spreads of the LR fuzzy solution, we distinguish several cases. The spreads are obtained by using multiplication, a quasi norm, and a minimization problem with a special objective funct...
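A sketch of the first step only, under the assumption that LR fuzzy arithmetic is crisp at the core, so the 1-cut of AX + B = CX + D reduces to the linear system (A1 - C1)X1 = D1 - B1; the matrices below are illustrative, and the spread computation from the paper is not reproduced:

```python
import numpy as np

# Cores (1-cuts) of the LR fuzzy matrices A, B, C, D; illustrative values only.
A1 = np.array([[4.0, 1.0], [2.0, 5.0]])
C1 = np.array([[1.0, 0.0], [0.0, 2.0]])
B1 = np.array([[1.0], [0.0]])
D1 = np.array([[7.0], [9.0]])

# At the core, AX + B = CX + D reduces to the crisp system (A1 - C1) X1 = D1 - B1.
X1 = np.linalg.solve(A1 - C1, D1 - B1)
print(X1)   # core of the LR fuzzy solution
```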
In this paper, we consider the l0 norm minimization problem with linear equality and nonnegativity constraints. By introducing the concept of a generalized Z-matrix for a rectangular matrix, we show that this l0 norm minimization with such measurement matrices and nonnegative observations can be exactly solved via the corresponding lp (0 < p ≤ 1) norm minimization. Moreover, the lower b...
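For the p = 1 case with nonnegative variables, the lp relaxation is just a linear program, since ‖x‖1 = Σ x_i when x ≥ 0. A small sketch follows; the generalized Z-matrix conditions of the paper are not checked here, and the matrix below is a generic random example:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
m, n, s = 10, 30, 3
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=s, replace=False)] = rng.uniform(1.0, 2.0, size=s)
b = A @ x_true

# With x >= 0, ||x||_1 = sum(x), so the l1 problem is the LP: min 1'x  s.t.  Ax = b, x >= 0.
res = linprog(c=np.ones(n), A_eq=A, b_eq=b, bounds=(0, None), method="highs")
x_hat = res.x
print("recovery error:", np.linalg.norm(x_hat - x_true))
```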
Dedicated to our friends Beresford and Velvel on the occasion of their sixtieth birthdays. ABSTRACT We show that a certain matrix norm ratio studied by Parlett has a supremum that is O(√n) when the chosen norm is the Frobenius norm, while it is O(log n) for the 2-norm. This ratio arises in Parlett's analysis of the Cholesky decomposition of an n by n matrix.
We show that matrix completion with trace-norm regularization can be significantly hurt when entries of the matrix are sampled non-uniformly, but that a properly weighted version of the trace-norm regularizer works well with non-uniform sampling. We show that the weighted trace-norm regularization indeed yields significant gains on the highly non-uniformly sampled Netflix dataset.
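A minimal sketch of a weighted trace norm in the sense of weighting by the row and column marginals of the sampling distribution, i.e. ‖diag(p_row)^{1/2} X diag(p_col)^{1/2}‖_*; the marginals below are illustrative:

```python
import numpy as np

def trace_norm(X):
    """Trace (nuclear) norm: sum of the singular values of X."""
    return np.linalg.norm(X, "nuc")

def weighted_trace_norm(X, p_row, p_col):
    """Weighted trace norm ||diag(p_row)^(1/2) X diag(p_col)^(1/2)||_*,
    with p_row, p_col the row/column marginals of the sampling distribution."""
    return trace_norm(np.sqrt(p_row)[:, None] * X * np.sqrt(p_col)[None, :])

rng = np.random.default_rng(3)
X = rng.standard_normal((6, 4)) @ rng.standard_normal((4, 5))  # rank <= 4
p_row = rng.dirichlet(np.ones(6))   # non-uniform row sampling marginals
p_col = rng.dirichlet(np.ones(5))   # non-uniform column sampling marginals
print(trace_norm(X), weighted_trace_norm(X, p_row, p_col))
```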
In this letter, we analyze a two-stage cluster-then-l1-optimization approach for sparse representation of a data matrix, which is also a promising approach for blind source separation (BSS) in which fewer sensors than sources are present. First, sparse representation (factorization) of a data matrix is discussed. For a given overcomplete basis matrix, the corresponding sparse solution (coeffi...
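A compact sketch of such a two-stage pipeline, assuming k-means (scikit-learn) for the clustering stage and a linear-programming formulation of the l1 stage; the data model and all parameter values are illustrative, not the letter's exact setup:

```python
import numpy as np
from scipy.optimize import linprog
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
m, n_src, T = 2, 4, 400          # 2 sensors, 4 sources (fewer sensors than sources)
A_true = rng.standard_normal((m, n_src))
A_true /= np.linalg.norm(A_true, axis=0)
S = rng.standard_normal((n_src, T)) * (rng.random((n_src, T)) < 0.15)  # sparse sources
X = A_true @ S                                                          # observed data

# Stage 1 (clustering): with very sparse sources, data columns concentrate along
# the mixing directions; estimate them by k-means on the normalized columns.
keep = np.linalg.norm(X, axis=0) > 1e-8
Xn = X[:, keep] / np.linalg.norm(X[:, keep], axis=0)
Xn *= np.sign(Xn[0] + 1e-12)                     # fold antipodal points together
A_est = KMeans(n_clusters=n_src, n_init=10, random_state=0).fit(Xn.T).cluster_centers_.T
A_est /= np.linalg.norm(A_est, axis=0)

# Stage 2 (l1 optimization): for one data vector x, solve min ||s||_1 s.t. A_est s = x
# as an LP with the split s = u - v, u, v >= 0.
x = X[:, keep][:, 0]
A_eq = np.hstack([A_est, -A_est])
res = linprog(c=np.ones(2 * n_src), A_eq=A_eq, b_eq=x, bounds=(0, None), method="highs")
s = res.x[:n_src] - res.x[n_src:]
print("sparse coefficients:", np.round(s, 3))
```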