Search results for: loss minimization

Number of results: 475,131

2014
J. KALYAN KUMAR I. PRABHAKAR REDDY

This project presents optimal sizing and placement of a FACTS device, achieved via the bacterial foraging algorithm (BFA) search method. The Static Var Compensator (SVC) is one such FACTS device, used to improve the voltage profile and minimize losses. A precise SVC strategy offers real power loss minimization along with an increase of voltage...

2015
Andreas Doerr Nathan D. Ratliff Jeannette Bohg Marc Toussaint Stefan Schaal

Inverse Optimal Control (IOC) has strongly impacted the systems engineering process, enabling automated planner tuning through straightforward and intuitive demonstration. The most successful and established applications, though, have been in lower dimensional problems such as navigation planning where exact optimal planning or control is feasible. In higher dimensional systems, such as humanoi...

2010
David A. McAllester Tamir Hazan Joseph Keshet

In discriminative machine learning one is interested in training a system to optimize a certain desired measure of performance, or loss. In binary classification one typically tries to minimize the error rate. But in structured prediction each task often has its own measure of performance, such as the BLEU score in machine translation or the intersection-over-union score in PASCAL segmentation....

2012
Kevin Gimpel Noah A. Smith

This paper seeks to close the gap between training algorithms used in statistical machine translation and machine learning, specifically the framework of empirical risk minimization. We review well-known algorithms, arguing that they do not optimize the loss functions they are assumed to optimize when applied to machine translation. Instead, most have implicit connections to particular forms of...

2003
Periklis Andritsos Vassilios Tzerpos

The majority of the algorithms in the software clustering literature utilize structural information in order to decompose large software systems. Other approaches, such as using file names or ownership information, have also demonstrated merit. However, there is no intuitive way to combine information obtained from these two different types of techniques. In this paper, we present an approach th...

2015
Eyke Hüllermeier Weiwei Cheng

In standard supervised learning, each training instance is associated with an outcome from a corresponding output space (e.g., a class label in classification or a real number in regression). In the superset learning problem, the outcome is only characterized in terms of a superset—a subset of candidates that covers the true outcome but may also contain additional ones. Thus, superset learning ...

2010
Yury Audzevich Levente Bodrog Yoram Ofek Miklós Telek

Due to the growing demand on network resources and tight restrictions on power consumption, requirements for long-term scalability, cost, and performance arise alongside the deployment of novel switching architectures. The load-balancing switch proposed in [1,2] satisfies the above requirements thanks to a simple distributed control and good performance c...

2005
Ralf Schlüter T. Scharrenbach Volker Steinbiss Hermann Ney

In this work, fundamental properties of Bayes decision rule using general loss functions are derived analytically and are verified experimentally for automatic speech recognition. It is shown that, for maximum posterior probabilities larger than 1/2, Bayes decision rule with a metric loss function always decides on the posterior maximizing class independent of the specific choice of (metric) lo...

2015
Sashank J. Reddi Barnabás Póczos Alexander J. Smola

In this paper, we study the problem of empirical loss minimization with ℓ2-regularization in distributed settings with significant communication cost. Stochastic gradient descent (SGD) and its variants are popular techniques for solving these problems in large-scale applications. However, the communication cost of these techniques is usually high, thus leading to considerable performance degrad...

2009
Shai Shalev-Shwartz Ambuj Tewari

We describe and analyze two stochastic methods for ℓ1-regularized loss minimization problems, such as the Lasso. The first method updates the weight of a single feature at each iteration while the second method updates the entire weight vector but only uses a single training example at each iteration. In both methods, the choice of feature/example is uniformly at random. Our theoretical runtime...
