Search results for: matrix majorization
Number of results: 365508. Filter results by year:
For vectors $X, Y \in \mathbb{R}^{n}$, we say $X$ is left matrix majorized by $Y$ and write $X \prec_{\ell} Y$ if $X = RY$ for some row stochastic matrix $R$. Also, we write $X \sim_{\ell} Y$ when $X \prec_{\ell} Y \prec_{\ell} X$. A linear operator $T\colon \mathbb{R}^{p} \to \mathbb{R}^{n}$ is said to be a linear preserver of a given relation $\prec$ if $X \prec Y$ on $\mathbb{R}^{p}$ implies that $TX \prec TY$ on $\mathb...
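For column vectors, $X = RY$ with $R$ row stochastic means every entry of $X$ is a convex combination of the entries of $Y$, so $X \prec_{\ell} Y$ holds exactly when each $x_i$ lies in $[\min Y, \max Y]$. A minimal sketch of this check, with an explicit construction of one such $R$ (an illustrative helper, not taken from the paper):

```python
import numpy as np

def left_majorization_matrix(X, Y):
    """Return a row stochastic R with X = R @ Y, or None if X is not
    left matrix majorized by Y.

    Each x_i = sum_j R[i, j] * y_j is a convex combination of the
    entries of Y, so X prec_ell Y iff min(Y) <= x_i <= max(Y) for all i.
    We place all weight on the argmin/argmax entries of Y.
    """
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    lo, hi = Y.min(), Y.max()
    if np.any(X < lo) or np.any(X > hi):
        return None
    j_lo, j_hi = int(Y.argmin()), int(Y.argmax())
    R = np.zeros((len(X), len(Y)))
    for i, x in enumerate(X):
        t = 0.0 if hi == lo else (x - lo) / (hi - lo)  # x = t*hi + (1-t)*lo
        R[i, j_hi] += t
        R[i, j_lo] += 1.0 - t
    return R

Y = np.array([1.0, 5.0, 3.0])
X = np.array([2.0, 4.0, 5.0])
R = left_majorization_matrix(X, Y)  # row stochastic, with R @ Y == X
```

The construction writes each $x_i$ as a two-point convex combination of $\max Y$ and $\min Y$; many other valid choices of $R$ exist.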
In this paper, we introduce a novel iterative algorithm for the phase-retrieval problem, where the measurements consist of only the magnitude of a linear function of the unknown signal, and the noise in the measurements follows a Poisson distribution. The proposed algorithm is based on the principle of majorization-minimization (MM); however, the application of MM here is very distinct from the way it has usually been used to solve optimization problems in the literature. More p...
Principal Component Analysis is a method for reducing the dimensionality of datasets while also limiting information loss. It accomplishes this by producing uncorrelated variables that maximize variance one after the other. The accepted criterion for evaluating a Principal Component's (PC) performance is λ_j/tr(S), where tr(S) denotes the trace of the covariance matrix S. The standard procedure to determine how many PCs should be ma...
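Since tr(S) equals the sum of the eigenvalues of S, the ratios λ_j/tr(S) are the per-component shares of total variance and sum to one. A minimal sketch of this criterion on toy data (the 90% retention cutoff is an illustrative choice, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))  # correlated toy data

S = np.cov(data, rowvar=False)                   # sample covariance matrix S
eigvals = np.sort(np.linalg.eigvalsh(S))[::-1]   # lambda_1 >= lambda_2 >= ...
explained = eigvals / np.trace(S)                # lambda_j / tr(S) per PC
cumulative = np.cumsum(explained)

# e.g. keep the smallest number of PCs explaining >= 90% of total variance
k = int(np.searchsorted(cumulative, 0.90) + 1)
```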
We study matrix inequalities involving partial traces for positive semidefinite block matrices. First of all, we present a new method to prove a celebrated result of Choi [Linear Algebra Appl. 516 (2017)]. Our method also allows us to obtain a generalization of another result [Linear Multilinear Algebra 66 (2018)]. Furthermore, we shall give an improvement on a recent result of Li, Liu and Huang [Operators and Matrices 15 (2021)]. In addition, we include, with some ...
Majorization-minimization algorithms consist of iteratively minimizing a majorizing surrogate of an objective function. Because of its simplicity and its wide applicability, this principle has been very popular in statistics and in signal processing. In this paper, we intend to make this principle scalable. We introduce a stochastic majorization-minimization scheme which is able to deal with la...
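A textbook illustration of this principle (not the stochastic scheme of the paper above): minimizing f(x) = Σ_i |x - a_i| by repeatedly minimizing a quadratic surrogate that majorizes f and touches it at the current iterate.

```python
import numpy as np

def mm_median(a, iters=200, eps=1e-9):
    """Minimize f(x) = sum_i |x - a_i| by majorization-minimization.

    At the current iterate x_t, each |x - a_i| is majorized by
    (x - a_i)^2 / (2|x_t - a_i|) + |x_t - a_i| / 2, a quadratic that
    touches it at x_t.  Minimizing the surrogate gives a weighted-mean
    update, and f(x) never increases along the iterates.
    """
    a = np.asarray(a, float)
    x = a.mean()
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(x - a), eps)  # surrogate curvatures
        x = np.sum(w * a) / np.sum(w)             # minimizer of the surrogate
    return x

x_star = mm_median([1.0, 2.0, 7.0, 9.0, 100.0])  # approaches the median, 7.0
```

The descent property follows because the surrogate upper-bounds f everywhere and equals it at x_t, so the surrogate's minimizer cannot increase f.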
The glmnet package by [1] is an extremely fast implementation of the standard coordinate descent algorithm for solving l1 penalized learning problems. In this paper, we consider a family of coordinate majorization descent algorithms for solving the l1 penalized learning problems by replacing each coordinate descent step with a coordinate-wise majorization descent operation. Numerical experiment...
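A minimal sketch of a coordinate-wise majorization update for the lasso, in the spirit of the family described above (illustrative only, not the glmnet or paper code; for squared loss the coordinate majorizer with curvature x_jᵀx_j/n is tight, so this coincides with exact coordinate descent; the majorized form is what carries over to other losses):

```python
import numpy as np

def lasso_cmd(X, y, lam, iters=500):
    """Coordinate majorization descent for
    (1/2n) ||y - X b||^2 + lam * ||b||_1.

    Coordinate j is updated by minimizing a quadratic majorizer with
    curvature gamma_j = x_j^T x_j / n, which reduces each step to a
    single soft-thresholding operation.
    """
    n, p = X.shape
    gamma = (X ** 2).sum(axis=0) / n          # per-coordinate curvatures
    beta = np.zeros(p)
    r = y - X @ beta                          # current residual
    for _ in range(iters):
        for j in range(p):
            g = X[:, j] @ r / n               # negative coordinate gradient
            z = beta[j] + g / gamma[j]
            b_new = np.sign(z) * max(abs(z) - lam / gamma[j], 0.0)  # soft-threshold
            r += X[:, j] * (beta[j] - b_new)  # incremental residual update
            beta[j] = b_new
    return beta

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
y = X @ np.array([2.0, 0.0, -1.0])            # noiseless toy data
beta_hat = lasso_cmd(X, y, lam=0.05)          # sparse estimate of [2, 0, -1]
```

Keeping the residual r up to date after each coordinate step makes every update O(n), which is the key to the speed of this family of solvers.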
To minimize the primal support vector machine (SVM) problem, we propose to use iterative majorization. To allow for nonlinearity of the predictors, we use (non)monotone spline transformations. An advantage over the usual kernel approach in the dual problem is that the variables can be easily interpreted. We illustrate this with an example from...
Abstract Multivariate regression techniques are commonly applied to explore the associations between large numbers of outcomes and predictors. In real-world applications, outcomes are often of mixed types, including continuous measurements, binary indicators, and counts, and observations may also be incomplete. Building upon recent advances in mixed-outcome modeling and sparse matrix factorization, a generalized co-sparse f...
Chart: number of search results per year