Search results for: empirical matrix

Number of results: 563469

2010
Ameet Talwalkar, Afshin Rostamizadeh

The Nyström method is an efficient technique to speed up large-scale learning applications by generating low-rank approximations. Crucial to the performance of this technique is the assumption that a matrix can be well approximated by working exclusively with a subset of its columns. In this work we relate this assumption to the concept of matrix coherence and connect matrix coherence to the pe...
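The column-sampling idea behind the Nyström method can be sketched as follows. This is a minimal illustration for a symmetric PSD matrix, not the authors' implementation; all variable names are ours:

```python
import numpy as np

def nystrom_approx(K, idx):
    """Low-rank Nystrom approximation of a PSD matrix K,
    built exclusively from the columns indexed by idx."""
    C = K[:, idx]            # sampled columns
    W = K[np.ix_(idx, idx)]  # intersection block
    # K is approximated as C @ pinv(W) @ C.T
    return C @ np.linalg.pinv(W) @ C.T

# usage: a random 100x100 PSD matrix, approximated from 20 columns
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 30))
K = A @ A.T
idx = rng.choice(100, size=20, replace=False)
K_hat = nystrom_approx(K, idx)
err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
```

How well this works depends on how representative the sampled columns are, which is exactly the coherence question the abstract raises.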

2002
Frank De Jong, Joost Driessen, Antoon Pelsser

Cap and swaption prices contain information on interest rate volatilities and correlations. In this paper, we examine whether this information in cap and swaption prices is consistent with realized movements of the interest rate term structure. To extract an option-implied interest rate covariance matrix from cap and swaption prices, we use Libor market models or discrete-tenor string models as...

Journal: Sociological Methods & Research, 2012
Ke-Hai Yuan, Fan Yang-Wallentin, Peter M. Bentler

Normal-distribution-based maximum likelihood (ML) and multiple imputation (MI) are the two major procedures for missing data analysis. This article compares the two procedures with respect to bias and efficiency of parameter estimates. It also compares formula-based standard errors (SEs) for each procedure against the corresponding empirical SEs. The results indicate that parameter estimates b...

2014
Hsiang-Fu Yu, Prateek Jain, Purushottam Kar, Inderjit S. Dhillon

The multi-label classification problem has generated significant interest in recent years. However, existing approaches do not adequately address two key challenges: (a) scaling up to problems with a large number (say millions) of labels, and (b) handling data with missing labels. In this paper, we directly address both these problems by studying the multi-label problem in a generic empirical r...

2017
Jean Honorio

Recall that in Theorem 2.1, we analyzed empirical risk minimization with a finite hypothesis class F, i.e., |F| < +∞. Here, we will prove results for possibly infinite hypothesis classes. Although the PAC-Bayes framework is far more general, we will concentrate on the prediction problem as before, i.e., (∀f ∈ F) f : X → Y. Also, note that Theorem 2.1 could have been stated in a more general fa...
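For reference, the empirical risk minimization principle that this excerpt builds on can be written in standard notation (our notation, not quoted from the paper) as:

```latex
\hat{f} \;=\; \operatorname*{argmin}_{f \in \mathcal{F}} \; \hat{R}_n(f),
\qquad
\hat{R}_n(f) \;=\; \frac{1}{n} \sum_{i=1}^{n} \ell\bigl(f(x_i), y_i\bigr),
```

where $\mathcal{F}$ is the hypothesis class, $f : \mathcal{X} \to \mathcal{Y}$, and $\ell$ is the loss. The finite-class analysis of Theorem 2.1 bounds the gap between $\hat{R}_n(\hat{f})$ and the true risk via a union bound over $|\mathcal{F}|$, which is exactly what fails when $\mathcal{F}$ is infinite.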

Journal: J. Inf. Sci. Eng., 2015
Yitian Xu

Twin support vector regression (TSVR), as an effective regression machine, solves a pair of smaller-sized quadratic programming problems (QPPs) rather than a single large one as in the classical support vector regression (SVR), which makes the learning speed of TSVR approximately 4 times faster than that of the SVR. However, the empirical risk minimization principle is implemented in TSVR, whic...

Journal: CoRR, 2017
Netanel Raviv, Itzhak Tamo, Rashish Tandon, Alexandros G. Dimakis

Gradient Descent, and its variants, are a popular method for solving empirical risk minimization problems in machine learning. However, if the size of the training set is large, a computational bottleneck is the computation of the gradient, and hence, it is common to distribute the training set among worker nodes. Doing this in a synchronous fashion faces yet another challenge of stragglers (i....
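The synchronous distribution step described here can be sketched in a few lines. This is a minimal single-process illustration of partitioning the training set and summing partial gradients (for a least-squares loss we chose for concreteness), not the paper's coding scheme for stragglers:

```python
import numpy as np

def full_gradient(w, X, y):
    """Gradient of the least-squares objective (1/2n) * ||Xw - y||^2."""
    return X.T @ (X @ w - y) / len(y)

def distributed_gradient(w, X, y, n_workers=4):
    """Synchronous scheme: partition the rows among workers,
    each computes a partial gradient, the master sums them."""
    parts = np.array_split(np.arange(len(y)), n_workers)
    partials = [X[p].T @ (X[p] @ w - y[p]) for p in parts]
    return sum(partials) / len(y)

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
y = rng.standard_normal(200)
w = rng.standard_normal(5)
g_full = full_gradient(w, X, y)
g_dist = distributed_gradient(w, X, y)
```

Because the partial gradients sum exactly to the full gradient, the master must wait for every worker; the straggler problem the abstract raises is precisely that one slow worker stalls the whole step.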

Journal: CoRR, 2016
Shan You, Chang Xu, Yunhe Wang, Chao Xu, Dacheng Tao

It is challenging to handle a large volume of labels in multi-label learning. However, existing approaches explicitly or implicitly assume that all the labels in the learning process are given, which could be easily violated in changing environments. In this paper, we define and study streaming label learning (SLL), i.e., labels arrive on the fly, to model newly arrived labels with the help ...

2015
Roy Frostig, Rong Ge, Sham M. Kakade, Aaron Sidford

We develop a family of accelerated stochastic algorithms that optimize sums of convex functions. Our algorithms improve upon the fastest running time for empirical risk minimization (ERM), and in particular linear least-squares regression, across a wide range of problem settings. To achieve this, we establish a framework, based on the classical proximal point algorithm, useful for accelerating ...
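The classical proximal point algorithm that this framework builds on takes, in standard form (our notation, not quoted from the paper), the iteration:

```latex
w_{k+1} \;=\; \operatorname*{argmin}_{w} \;
\Bigl\{ F(w) + \tfrac{\lambda}{2}\,\lVert w - w_k \rVert^{2} \Bigr\},
```

where $F$ is the (convex) ERM objective and $\lambda > 0$ controls the strength of the proximal term. The added quadratic makes each subproblem strongly convex even when $F$ is not, which is what lets fast stochastic solvers be applied to each subproblem approximately.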

2017
Jialei Wang, Lin Xiao

We consider empirical risk minimization of linear predictors with convex loss functions. Such problems can be reformulated as convex-concave saddle point problems, and thus are well suited to primal-dual first-order algorithms. However, primal-dual algorithms often require explicit strongly convex regularization in order to obtain fast linear convergence, and the required dual proximal mappi...
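The convex-concave reformulation mentioned here is standard and can be sketched as follows (our notation, not drawn from the paper): for data vectors $a_i$, losses $\phi_i$, and regularizer $g$,

```latex
\min_{w} \; \frac{1}{n}\sum_{i=1}^{n} \phi_i\bigl(a_i^{\top} w\bigr) + g(w)
\;=\;
\min_{w} \max_{\alpha} \; \frac{1}{n}\sum_{i=1}^{n}
\bigl( \alpha_i \, a_i^{\top} w - \phi_i^{*}(\alpha_i) \bigr) + g(w),
```

where $\phi_i^{*}$ denotes the convex conjugate of $\phi_i$. Primal-dual first-order methods alternate (proximal) updates on $w$ and $\alpha$, which is why the strong convexity of $g$ and the tractability of the dual proximal mapping matter for the convergence rate.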
