Search results for: loss minimization

Number of results: 475131

2013
Nagarajan Natarajan Inderjit S. Dhillon Pradeep Ravikumar Ambuj Tewari

In this paper, we theoretically study the problem of binary classification in the presence of random classification noise — the learner, instead of seeing the true labels, sees labels that have independently been flipped with some small probability. Moreover, random label noise is class-conditional — the flip probability depends on the class. We provide two approaches to suitably modify any giv...
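One of the paper's approaches builds an unbiased surrogate: the loss is evaluated on the observed (possibly flipped) label, reweighted so that its expectation over the noise equals the clean loss. A minimal sketch, assuming flip probabilities `rho_pos` (for true label +1) and `rho_neg` (for true label -1); function and variable names are illustrative:

```python
def corrected_loss(loss, t, y, rho_pos, rho_neg):
    """Noise-corrected surrogate for loss(t, y) under class-conditional
    label noise: +1 labels flip with prob rho_pos, -1 with rho_neg.
    In expectation over the noise it equals the clean loss."""
    # flip probability of the observed label y, and of its opposite
    rho_y = rho_pos if y == 1 else rho_neg
    rho_not_y = rho_neg if y == 1 else rho_pos
    return ((1 - rho_not_y) * loss(t, y) - rho_y * loss(t, -y)) / (
        1 - rho_pos - rho_neg
    )
```

Averaging the corrected loss over the noise distribution of the observed label recovers the clean loss exactly, which is what makes empirical risk minimization on noisy labels consistent.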


2008
Jin Yu

We extend the well-known BFGS quasi-Newton method and its limited-memory variant (LBFGS) to the optimization of nonsmooth convex objectives. This is done in a rigorous fashion by generalizing three components of BFGS to subdifferentials: The local quadratic model, the identification of a descent direction, and the Wolfe line search conditions. We apply the resulting subLBFGS algorithm to L2-reg...

2015
Richard Nock Giorgio Patrini Arik Friedman

The minimization of the logistic loss is a popular approach to batch supervised learning. Our paper starts from the surprising observation that, when fitting linear (or kernelized) classifiers, the minimization of the logistic loss is equivalent to the minimization of an exponential rado-loss computed (i) over transformed data that we call Rademacher observations (rados), and (ii) over the same...
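A Rademacher observation aggregates a subset of the signed examples into a single vector; a minimal sketch of the construction described in the abstract, assuming the definition pi_sigma = (1/2) * sum_i (sigma_i + y_i) x_i for a sign vector sigma (names are illustrative):

```python
import numpy as np

def rado(X, y, sigma):
    """Rademacher observation pi_sigma = 0.5 * sum_i (sigma_i + y_i) * x_i:
    (sigma_i + y_i)/2 is y_i where sigma_i == y_i and 0 otherwise, so this
    sums y_i * x_i over the examples where sigma agrees with the labels."""
    return 0.5 * ((sigma + y)[:, None] * X).sum(axis=0)
```

Taking sigma = y yields the sum of all signed examples, while sigma = -y yields the zero vector; the exponential rado-loss of the paper is then computed over a set of such vectors rather than over individual examples.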

Thesis: Ministry of Science, Research and Technology - Shahid Bahonar University of Kerman - Faculty of Agriculture, 1394

To investigate the effect of probiotic and different levels of asafoetida gum powder, compared with an antibiotic (avilamycin), on growth performance, relative weight of internal organs, intestinal microflora, intestinal villi morphology, meat quality, and antibody titer against sheep red blood cells in broiler chickens, an experiment was conducted in a completely randomized design with 6 treatments. The experimental treatments included: a basal diet without additives, a basal diet containing 100 mg per kg ...

2014
Maruthi Prasanna

In the present deregulated environment, optimal placement of Distributed Generation (DG) and shunt capacitors in the distribution network plays a vital role in distribution system planning. In this paper, an analytical approach for optimal placement of combined DG and capacitor units is determined with the objective of power loss reduction and voltage profile improvement. Firstly, the DG unit i...

2007
Kwangmoo Koh Seung-Jean Kim Stephen Boyd

Convex loss minimization with l1 regularization has been proposed as a promising method for feature selection in classification (e.g., l1-regularized logistic regression) and regression (e.g., l1-regularized least squares). In this paper we describe an efficient interior-point method for solving large-scale l1-regularized convex loss minimization problems that uses a preconditioned conjugate gr...
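The paper's solver is a specialized interior-point method with preconditioned conjugate gradients; as a lighter-weight illustration of the same kind of l1-regularized objective, here is a proximal-gradient (ISTA) sketch for l1-regularized least squares (all names are illustrative, not the paper's implementation):

```python
import numpy as np

def soft_threshold(v, thresh):
    """Proximal operator of thresh * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)

def ista_lasso(A, b, lam, n_iter=500):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by proximal gradient (ISTA).
    A simple sketch; large-scale problems call for methods like the
    interior-point approach described in the abstract."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

The soft-thresholding step is what produces exact zeros in the solution, which is why l1 regularization acts as feature selection.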

Journal: International Journal of Smart Electrical Engineering, 2012
S.A. Hashemi Zadeh O. Zeidabadi Nejad S. Hasani A.A. Gharaveisi Gh. Shahgholian

Distributed generations (DGs) are utilized to supply the active and reactive power in the transmission and distribution systems. These types of power sources have many benefits such as power quality enhancement, voltage deviation reduction, power loss reduction, load shedding reduction, reliability improvement, etc. In order to reach the above benefits, the optimal placement and sizing of DG is...

2015
Hong Wang Wei Xing Kaiser Asif Brian D. Ziebart

Multivariate loss functions are used to assess performance in many modern prediction tasks, including information retrieval and ranking applications. Convex approximations are typically optimized in their place to avoid NP-hard empirical risk minimization problems. We propose to approximate the training data instead of the loss function by posing multivariate prediction as an adversarial game b...

2010
Hamed Masnadi-Shirazi Nuno Vasconcelos

A new procedure for learning cost-sensitive SVM classifiers is proposed. The SVM hinge loss is extended to the cost-sensitive setting, and the cost-sensitive SVM is derived as the minimizer of the associated risk. The extension of the hinge loss draws on recent connections between risk minimization and probability elicitation. These connections are generalized to cost-sensitive classification, i...
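The simplest way to make the hinge loss cost-sensitive is to weight it by per-class misclassification costs; a minimal sketch (the paper's extension derived from probability elicitation also adjusts the margins, not just the weights, so this is only the weighted baseline):

```python
def cs_hinge(f, y, c_fp, c_fn):
    """Cost-weighted hinge loss for label y in {-1, +1} and score f:
    missing a positive (false negative) costs c_fn, missing a
    negative (false positive) costs c_fp."""
    cost = c_fn if y == 1 else c_fp
    return cost * max(0.0, 1.0 - y * f)
```

With c_fn > c_fp the empirical risk penalizes missed positives more heavily, shifting the learned decision boundary toward the negative class.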
