Search results for: minimum error

Number of results: 407086

2011
Artem Sokolov François Yvon

Modern Statistical Machine Translation (SMT) systems make their decisions based on multiple information sources, which assess various aspects of the match between a source sentence and its possible translation(s). Tuning an SMT system consists of finding the right balance between these sources so as to produce the best possible output, and is usually achieved through Minimum Error Rate Training ...
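A minimal sketch of the linear model combination that such tuning optimizes: each candidate translation carries a vector of feature scores, the decoder picks the candidate with the highest weighted score, and tuning searches for weights that minimize corpus-level error. The feature values, error counts, and the crude grid search below are illustrative stand-ins, not Och's exact line-search procedure.

```python
import numpy as np

# Each candidate translation has feature scores h (e.g. LM and TM log-probs)
# and an error count against the reference; all values are made up.
candidates = {
    "src sentence": [
        {"h": np.array([-2.0, -1.0]), "errors": 1},
        {"h": np.array([-1.0, -3.0]), "errors": 0},
    ]
}

def corpus_errors(w):
    """Total errors when the decoder picks argmax_c w·h(c) per sentence."""
    total = 0
    for cands in candidates.values():
        best = max(cands, key=lambda c: float(w @ c["h"]))
        total += best["errors"]
    return total

# Crude grid search over weight vectors in place of exact line search.
grid = [np.array([a, 1.0 - a]) for a in np.linspace(0.0, 1.0, 11)]
best_w = min(grid, key=corpus_errors)
```

With these toy values, weightings that favour the first feature select the error-free candidate, so the search finds a zero-error weight vector.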

1999
Gunnar Evermann

From the theory of Bayesian pattern recognition it is well known that the maximum a posteriori (MAP) decision criterion yields a recogniser with the minimum probability of assigning the incorrect label to a pattern, if the correct probability distributions are used. This MAP criterion is also routinely employed in automatic speech recognition systems. The problem addressed in this thesis is the fact t...
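The MAP rule the abstract describes can be sketched in a few lines: choose the class maximizing the posterior p(c|x) ∝ p(x|c)·p(c). The priors and likelihoods below are illustrative values, not from the thesis.

```python
import numpy as np

# MAP decision for a single observed pattern x over two classes.
priors = np.array([0.6, 0.4])        # p(c), assumed class priors
likelihoods = np.array([0.2, 0.5])   # p(x|c), assumed class-conditional values

posteriors = priors * likelihoods    # unnormalized p(c|x): [0.12, 0.20]
posteriors /= posteriors.sum()       # normalize (does not change the argmax)
map_label = int(np.argmax(posteriors))  # class 1: 0.20 > 0.12
```

When the true distributions are used, this rule minimizes the probability of labelling error, which is exactly the optimality claim the snippet restates.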

2009
George F. Foster Roland Kuhn

The most commonly used method for training feature weights in statistical machine translation (SMT) systems is Och's minimum error rate training (MERT) procedure. A well-known problem with Och's procedure is that it tends to be sensitive to small changes in the system, particularly when the number of features is large. In this paper, we quantify the stability of Och's procedure by supplying diff...

2012
Sheng Chen

The equalisation topic is well researched and a variety of solutions are available. The MAP sequence detector provides the lowest symbol error rate (SER) attainable, and the MLSE offers a near-optimal solution. However, these optimal techniques are not yet practical for high-level modulation schemes, due to their computational complexity. The linear equaliser and the linear-combiner DFE are practical sc...

2013
Michel Galley Chris Quirk Colin Cherry Kristina Toutanova

Minimum Error Rate Training (MERT) remains one of the preferred methods for tuning linear parameters in machine translation systems, yet it faces significant issues. First, MERT is an unregularized learner and is therefore prone to overfitting. Second, it is commonly used on a noisy, non-convex loss function that becomes more difficult to optimize as the number of parameters increases. To addre...

2000
George Saon Mukund Padmanabhan

We consider the problem of designing a linear transformation θ ∈ IR^{p×n}, of rank p ≤ n, which projects the features of a classifier x ∈ IR^n onto y = θx ∈ IR^p such as to achieve minimum Bayes error (or probability of misclassification). Two avenues will be explored: the first is to maximize the θ-average divergence between the class densities and the second is to minimize the union Bhattacharyya bound in the r...

Journal: CoRR, 2017
Badong Chen Lei Xing Nanning Zheng José Carlos Príncipe

Compared with traditional learning criteria, such as mean square error (MSE), the minimum error entropy (MEE) criterion is superior in nonlinear and non-Gaussian signal processing and machine learning. The argument of the logarithm in Renyi's entropy estimator, called the information potential (IP), is a popular MEE cost in information theoretic learning (ITL). The computational complexity of IP is...
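The information potential mentioned here is the kernel-based pairwise sum IP(e) = (1/N²) Σᵢ Σⱼ κ_σ(eᵢ − eⱼ) over error samples, whose O(N²) cost is what the abstract's complexity discussion refers to. A minimal sketch with a Gaussian kernel and illustrative error samples:

```python
import numpy as np

def information_potential(errors, sigma=1.0):
    """IP(e) = (1/N^2) * sum_ij kappa_sigma(e_i - e_j), Gaussian kernel.

    The full pairwise difference matrix makes the O(N^2) cost explicit.
    """
    e = np.asarray(errors, dtype=float)
    diffs = e[:, None] - e[None, :]                                  # N x N
    kernel = np.exp(-diffs**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    return kernel.mean()                                             # (1/N^2) sum

ip = information_potential([0.1, -0.2, 0.05, 0.3])
```

Maximizing IP is equivalent to minimizing Renyi's quadratic entropy −log IP, which concentrates the error distribution; identical errors therefore give a larger IP than spread-out ones.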

Journal: International Journal of Computational Geometry & Applications, 1996

Chart of the number of search results per year
