Search results for: loss minimization

Number of results: 475131

2011
Wojciech Kotlowski, Krzysztof Dembczynski, Eyke Hüllermeier

Minimization of the rank loss or, equivalently, maximization of the AUC in bipartite ranking calls for minimizing the number of disagreements between pairs of instances. Since the complexity of this problem is inherently quadratic in the number of training examples, it is tempting to ask how much is actually lost by minimizing a simple univariate loss function, as done by standard classificatio...
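The quadratic cost the abstract refers to can be made concrete with a minimal sketch: the empirical rank loss (one minus the AUC) counts misordered positive–negative pairs explicitly, which takes O(n_pos × n_neg) comparisons.

```python
# Minimal sketch: empirical rank loss (= 1 - AUC) by explicit pairwise
# comparison. The double loop over positive-negative pairs is what makes
# direct minimization quadratic in the number of training examples.

def rank_loss(scores, labels):
    """Fraction of (positive, negative) pairs ranked incorrectly; ties count 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    disagreements = 0.0
    for p in pos:
        for n in neg:
            if p < n:
                disagreements += 1.0
            elif p == n:
                disagreements += 0.5
    return disagreements / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.3, 0.1]
labels = [1, 0, 1, 0]
loss = rank_loss(scores, labels)  # one of the four pairs is misordered -> 0.25
auc = 1.0 - loss
```

A univariate surrogate, by contrast, touches each example once per pass, which is the trade-off the paper quantifies.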

2017
Di Wang, Minwei Ye, Jinhui Xu

In this paper we study the differentially private Empirical Risk Minimization (ERM) problem in different settings. For smooth (strongly) convex loss functions with or without (non)-smooth regularization, we give algorithms that achieve either optimal or near-optimal utility bounds with lower gradient complexity than previous work. For ERM with smooth convex loss function in high-dimensio...
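A common baseline in this area is gradient perturbation: add Gaussian noise to each gradient step. The sketch below is a generic illustration under that assumption, not the algorithms of Wang, Ye and Xu; in particular, `sigma` is left as a free parameter rather than calibrated to a specific (epsilon, delta) privacy budget.

```python
import numpy as np

# Hedged sketch of gradient-perturbation ERM: each gradient step is corrupted
# with Gaussian noise before the update. Here the loss is logistic regression;
# sigma is a free noise scale, NOT a calibrated privacy parameter.

def noisy_gd(X, y, steps=200, lr=0.1, sigma=0.01, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        margins = X @ w
        grad = X.T @ (1.0 / (1.0 + np.exp(-margins)) - y) / n  # logistic gradient
        w -= lr * (grad + sigma * rng.standard_normal(d))      # noisy step
    return w
```

The utility/gradient-complexity trade-off the abstract discusses is about how few such (noisy) gradient evaluations suffice for a given accuracy.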

Journal: :CoRR 2015
Dominik Csiba, Peter Richtárik

In this work we develop a new algorithm for regularized empirical risk minimization. Our method extends recent techniques of Shalev-Shwartz [02/2015], which enable a dual-free analysis of SDCA, to arbitrary mini-batching schemes. Moreover, our method is able to better utilize the information in the data defining the ERM problem. For convex loss functions, our complexity results match those of Q...
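The dual-free SDCA idea the abstract builds on can be sketched as follows: keep a pseudo-dual vector alpha_i per example, maintain the invariant w = (1/(lambda·n)) Σ alpha_i, and on each step shrink the residual grad_i(w) + alpha_i. This is a serial sketch of the Shalev-Shwartz-style update only; the paper's actual contribution (arbitrary mini-batching schemes) is not reproduced here.

```python
import numpy as np

# Hedged sketch of a serial dual-free SDCA update for L2-regularized
# squared loss. alpha stores one pseudo-dual vector per example; the two
# updates preserve the invariant w = alpha.sum(axis=0) / (lam * n).

def dual_free_sdca(X, y, lam=0.1, eta=0.05, epochs=300, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros((n, d))
    w = np.zeros(d)
    for _ in range(epochs * n):
        i = rng.integers(n)
        grad_i = (X[i] @ w - y[i]) * X[i]   # squared-loss gradient at example i
        residual = grad_i + alpha[i]
        alpha[i] -= eta * lam * n * residual
        w -= eta * residual                 # keeps w = alpha.sum(axis=0)/(lam*n)
    return w
```

At the fixed point the residuals vanish, so lambda·w + (1/n) Σ grad_i(w) = 0, i.e. w solves the regularized ERM problem.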

Journal: :Journal of Machine Learning Research 2010
Ming Yuan, Marten H. Wegkamp

In this paper, we investigate the problem of binary classification with a reject option, in which one can withhold the decision of classifying an observation at a cost lower than that of misclassification. Because the natural loss function is non-convex, so that empirical risk minimization easily becomes infeasible, the paper proposes minimizing convex risks based on surrogate convex loss functions...
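The decision rule itself is simple to state (it is Chow's rule): given an estimate p of P(y=1|x) and a rejection cost d < 1/2, predict only when confident enough. The sketch below illustrates the rule, not the paper's surrogate-loss learning procedure.

```python
# Hedged sketch of the plug-in reject rule (Chow's rule): with rejection
# cost d < 1/2, reject exactly when the estimated posterior p lies in the
# ambiguous band (d, 1 - d).

def classify_with_reject(p, d):
    """Return 1, 0, or the string 'reject'. p estimates P(y=1|x); d is the rejection cost."""
    if p >= 1 - d:
        return 1
    if p <= d:
        return 0
    return "reject"
```

As d grows toward 1/2 the ambiguous band shrinks and rejection becomes rarer, which matches the intuition that expensive rejections should be used sparingly.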

1998
Michiel C. van Wezel, Walter A. Kosters, Joost N. Kok

In this paper we study the problem of combining the outputs of the members of an ensemble of neural networks. We review the commonly used methods and derive a cost function from a maximum likelihood perspective, which can be minimized in order to obtain maximum likelihood weights. The solution is shown to be closely related to a well-known statistical method. The various combination m...
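One standard instance of this connection: under an additive Gaussian error assumption, maximizing the likelihood of the targets given a weighted combination of member outputs reduces to ordinary least squares on the members' predictions. The sketch below illustrates that flavour of result, not the paper's exact derivation.

```python
import numpy as np

# Hedged sketch: ML combination weights under a Gaussian noise assumption
# reduce to a least-squares fit of the targets on the member predictions.

def ml_combination_weights(member_preds, targets):
    """member_preds: (n_samples, n_members) array; returns least-squares weights."""
    w, *_ = np.linalg.lstsq(member_preds, targets, rcond=None)
    return w
```

The combined prediction is then `member_preds @ w`, a weighted average whose weights were fit rather than fixed uniformly.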

Journal: :Bulletin of Bryansk state technical university 2015

Journal: :TELKOMNIKA (Telecommunication Computing Electronics and Control) 2018

Journal: :Zisin (Journal of the Seismological Society of Japan. 2nd ser.) 1971

Journal: :IEEE Systems Journal 2021

DC microgrids are growing in popularity due to their advantages in terms of simplicity and energy efficiency while connecting dc sources and loads. In traditional hierarchical schemes, optimization and control are implemented at different time scales. This loose integration makes it hard to achieve real-time optimization. Even a slight disturbance can result in deviations of bus voltages and output currents from...

2007
Guillaume Lecué

Let F be a set of M classification procedures with values in [−1, 1]. Given a loss function, we want to construct a procedure which mimics, at the best possible rate, the best procedure in F. This fastest rate is called the optimal rate of aggregation. Considering a continuous scale of loss functions with various types of convexity, we prove that optimal rates of aggregation can be either ((logM)/n)...
