Search results for: row stochastic matrix

Number of results: 497993

This article proposes an optimal method for the approximate solution of stochastic Itô-Volterra integral equations via rationalized Haar functions and their stochastic operational matrix of integration. The stochastic Itô-Volterra integral equation is reduced to a system of linear equations. The scheme is applied to some examples. The results show the efficiency and accuracy of the method.
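
As a small illustration of one ingredient of this approach, the sketch below constructs the rationalized Haar functions on [0, 1) and fits a Haar expansion to a sample function at collocation midpoints. The helper rh, the collocation grid, and the test function sin(πt) are assumptions for illustration; the stochastic operational matrix of integration and the reduction of the Itô-Volterra equation to a linear system are not reproduced here.

```python
import numpy as np

def rh(r, t):
    """Rationalized Haar function RH_r on [0, 1); values in {1, -1, 0}."""
    if r == 0:
        return np.ones_like(t)
    j = int(np.floor(np.log2(r)))          # write r = 2**j + k with 0 <= k < 2**j
    k = r - 2 ** j
    left, mid, right = k / 2 ** j, (k + 0.5) / 2 ** j, (k + 1) / 2 ** j
    return np.where((t >= left) & (t < mid), 1.0,
                    np.where((t >= mid) & (t < right), -1.0, 0.0))

m = 8                                      # m = 2**J basis functions
t = (np.arange(m) + 0.5) / m               # collocation midpoints in [0, 1)
H = np.array([rh(r, t) for r in range(m)]) # H[r, i] = RH_r(t_i)

# Approximate f by a rationalized Haar expansion  f(t) ≈ sum_r c_r RH_r(t)
f = np.sin(np.pi * t)
c = np.linalg.solve(H.T, f)                # expansion coefficients
```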

2017
Stefano Cipolla, Carmine Di Fiore, Francesco Tudisco

Let S be a column stochastic matrix with at least one full row. Then S describes a PageRank-like random walk, since the computation of the Perron vector x of S can be tackled by solving a suitable M-matrix linear system Mx = y, where M = I − τA, A is a column stochastic matrix, and τ is a positive coefficient smaller than one. The PageRank centrality index on graphs is a relevant example where th...
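
A minimal sketch of the linear-system viewpoint mentioned above: with a toy column stochastic matrix A, a coefficient τ < 1, and a uniform preference vector v (all assumptions for illustration), the PageRank-style vector solves the M-matrix system (I − τA)x = (1 − τ)v.

```python
import numpy as np

# Toy column stochastic matrix (each column sums to 1) and coefficient tau < 1
A = np.array([[0.0, 0.5, 0.3],
              [0.5, 0.0, 0.7],
              [0.5, 0.5, 0.0]])
tau = 0.85
v = np.full(3, 1 / 3)                  # uniform preference/teleportation vector

# Fixed point x = tau*A*x + (1 - tau)*v, i.e. the M-matrix system M x = y
M = np.eye(3) - tau * A                # M = I - tau*A
x = np.linalg.solve(M, (1 - tau) * v)
x = x / x.sum()                        # normalize to a probability vector
```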

Journal: The American Mathematical Monthly 2014
Robert A. Beezer

Associated with any matrix, there are four fundamental subspaces: the column space, row space, (right) null space, and left null space. We describe a single computation that makes readily apparent bases for all four of these subspaces. Proofs of these results rely only on matrix algebra, not properties of dimension. A corollary is the equality of column rank and row rank. Bringing a matrix to r...
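
A sketch of the single computation described here, using SymPy and an arbitrary example matrix: row-reduce [A | I] to [B | J], split B and J at the rank r, and read off bases for the four subspaces (row space and null space from the nonzero rows of B, column space and left null space from the bottom rows of J). The example matrix is an assumption for illustration.

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [2, 4, 6],
               [1, 1, 1]])             # arbitrary example matrix
m, n = A.shape
r = A.rank()

# One computation: row-reduce the augmented matrix [A | I] to [B | J]
aug, _ = A.row_join(sp.eye(m)).rref()
B, J = aug[:, :n], aug[:, n:]
C, L = B[:r, :], J[r:, :]              # nonzero rows of B; bottom rows of J

row_space_basis  = [C[i, :] for i in range(r)]        # rows of C
null_space_basis = C.nullspace()                      # null space of C = null space of A
col_space_basis  = L.nullspace()                      # null space of L = column space of A
left_null_basis  = [L[i, :] for i in range(m - r)]    # rows of L
```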

Journal: IJHPCA 2016
Dahai Guo, William Gropp, Luke N. Olson

In this paper, we present a new sparse matrix data format that leads to improved memory coalescing and more efficient sparse matrix-vector multiplication (SpMV) for a wide range of problems on high throughput architectures such as a graphics processing unit (GPU). The sparse matrix structure is constructed by sorting the rows based on the row length (defined as the number of non-zero elements i...
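
The sketch below illustrates only the row-sorting step, on the CPU with SciPy, as a stand-in for the paper's GPU data format: rows are permuted by decreasing row length so that rows of similar length are stored together, SpMV is performed on the permuted matrix, and the permutation is undone on the result. The toy matrix is an assumption for illustration.

```python
import numpy as np
from scipy.sparse import random as sparse_random

# Toy CSR matrix standing in for a real problem
A = sparse_random(8, 8, density=0.3, format="csr", random_state=0)

row_lengths = np.diff(A.indptr)            # nonzeros per row
perm = np.argsort(-row_lengths, kind="stable")
A_sorted = A[perm]                         # rows of similar length now adjacent

x = np.ones(A.shape[1])
y_sorted = A_sorted @ x                    # SpMV on the permuted matrix
y = np.empty_like(y_sorted)
y[perm] = y_sorted                         # undo the row permutation
```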

2017
David Durfee, Kevin A. Lai, Saurabh Sawlani

We consider the ℓ1 minimization problem min_x ‖Ax − b‖1 in the overconstrained case, commonly known as the Least Absolute Deviations problem, where there are far more constraints than variables. More specifically, we have A ∈ ℝ^{n×d} for n ≫ d. Many important combinatorial problems, such as minimum cut and shortest path, can be formulated as ℓ1 regression problems [CMMP13]. We follow the general para...
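
Not the sampling-based paradigm the paper follows, but a baseline that makes the problem concrete: Least Absolute Deviations can be written as a linear program with auxiliary variables t ≥ |Ax − b| and handed to an off-the-shelf LP solver. The data and sizes below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, d = 200, 5                              # heavily overconstrained: n >> d
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + rng.laplace(scale=0.1, size=n)

# Variables z = [x (d entries), t (n entries)]; minimize sum(t) s.t. |Ax - b| <= t
c = np.concatenate([np.zeros(d), np.ones(n)])
A_ub = np.block([[ A, -np.eye(n)],         #  Ax - t <=  b
                 [-A, -np.eye(n)]])        # -Ax - t <= -b
b_ub = np.concatenate([b, -b])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * d + [(0, None)] * n)
x_lad = res.x[:d]                          # l1-regression solution
```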

2016
Xiaonan Hu

A letter matrix is an n-by-n matrix whose entries are n symbols, each appearing n times. The row (column) distribution of a letter matrix is an n-by-n nonnegative integer matrix that tells how many of each letter are in each row (column). A row distribution R and a column distribution C are compatible if there exists a letter matrix A whose row distribution is R and whose column distribution is ...
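
For concreteness, a short sketch computing the row and column distributions of a given letter matrix (the 3-by-3 example over {a, b, c} is an arbitrary assumption); the paper's compatibility question goes the other way, asking when a letter matrix exists for prescribed R and C.

```python
import numpy as np

# A 3-by-3 letter matrix over the symbols {a, b, c}, each appearing 3 times
A = np.array([["a", "b", "c"],
              ["b", "c", "a"],
              ["c", "a", "b"]])
symbols = ["a", "b", "c"]
n = len(symbols)

# R[i][k]: how many copies of symbol k appear in row i
R = np.array([[np.count_nonzero(A[i] == s) for s in symbols] for i in range(n)])
# C[j][k]: how many copies of symbol k appear in column j
C = np.array([[np.count_nonzero(A[:, j] == s) for s in symbols] for j in range(n)])
```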

Journal: Mathematical Researches 2022

The relation between the analytic hierarchy process (AHP) and data envelopment analysis (DEA) is a topic of interest to researchers in this branch of applied mathematics. In this paper, we propose a linear programming model that generates a weight (priority) vector from a pairwise comparison matrix. In this method, which is referred to as the E-DEAHP method, we consider each row of the pairwise...
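
The paper's E-DEAHP linear programming model is not reproduced here; as a point of reference, the sketch below derives a priority vector from a pairwise comparison matrix with the classical principal-eigenvector method of AHP. The comparison matrix is an illustrative assumption.

```python
import numpy as np

# Reciprocal pairwise comparison matrix: entry (i, j) ≈ w_i / w_j
P = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Classical AHP priority vector: principal right eigenvector, normalized to sum 1
vals, vecs = np.linalg.eig(P)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = np.abs(w) / np.abs(w).sum()
```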

2005
David J. Grabiner, Warren D. Smith, Sushanta Srivastava

The ith row of A represents a ±1-valued "square wave" function of j with half-period i. Thus xA, where x is a (row) N-vector, gives a picture of the waveform that is a linear combination of square waves with weights in x. Similarly yA finds weights so that a given waveform y is a weighted linear combination of these square waves. This matrix has a surprisingly simple inverse. More generally, we...
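
A sketch of one plausible reading of this construction (the exact indexing is an assumption): row i of A, for i = 1..N, is the ±1 square wave in j with half-period i starting at +1. Mixing square waves with weights x and recovering the weights of a given waveform then amount to multiplying by A and solving against A.

```python
import numpy as np

N = 8
# Assumed construction: A[i-1, j] = (-1)**(j // i), a square wave of half-period i
A = np.array([[(-1) ** (j // i) for j in range(N)] for i in range(1, N + 1)])

x = np.zeros(N)
x[0], x[3] = 1.0, 0.5                  # weights on two of the square waves
waveform = x @ A                       # picture of the mixed waveform

# Recover the weights of a given waveform (uses A's invertibility)
weights = np.linalg.solve(A.T, waveform)
```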

2012
A. BARRERAS

An efficient method for the computation to high relative accuracy of the LDU decomposition of an n × n row diagonally dominant M-matrix is presented, assuming that the off-diagonal entries and row sums are given. This method costs an additional O(n) elementary operations over the cost of Gaussian elimination, and leads to a lower triangular, column diagonally dominant matrix and an upper triang...
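
A hedged sketch of the kind of cancellation-free (GTH-like) elimination such accuracy results rely on, not the paper's algorithm: the diagonal pivots and the Schur-complement row sums are always rebuilt as sums of like-signed quantities from the given off-diagonal entries and row sums.

```python
import numpy as np

def ldu_from_offdiag_and_rowsums(offdiag, rowsums):
    """LDU of a row diagonally dominant M-matrix from its nonpositive
    off-diagonal entries and nonnegative row sums.  Pivots and updated row
    sums are formed without subtractive cancellation (illustrative sketch)."""
    n = len(rowsums)
    A = offdiag.astype(float)              # off-diagonal entries (diagonal ignored)
    s = rowsums.astype(float)              # row sums of the current Schur complement
    L, U, d = np.eye(n), np.eye(n), np.zeros(n)
    for k in range(n):
        d[k] = s[k] - A[k, k + 1:].sum()   # pivot = row sum minus (nonpositive) off-diagonals
        for i in range(k + 1, n):
            L[i, k] = A[i, k] / d[k]
            U[k, i] = A[k, i] / d[k]
            s[i] = s[i] - L[i, k] * s[k]   # row-sum update, also cancellation-free
            for j in range(k + 1, n):
                if i != j:
                    A[i, j] = A[i, j] - L[i, k] * A[k, j]
    return L, d, U

# Example: A = [[2, -1], [-1, 3]] given as off-diagonals plus row sums [1, 2]
L, d, U = ldu_from_offdiag_and_rowsums(np.array([[0.0, -1.0], [-1.0, 0.0]]),
                                       np.array([1.0, 2.0]))
# L @ np.diag(d) @ U reconstructs the original matrix
```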
