Search results for: training iteration

Number of results: 358779

2016
Nitish Shirish Keskar Albert S. Berahas

Recurrent Neural Networks (RNNs) are powerful models that achieve unparalleled performance on several pattern recognition problems. However, training of RNNs is a computationally difficult task owing to the well-known “vanishing/exploding” gradient problems. In recent years, several algorithms have been proposed for training RNNs. These algorithms either: exploit no (or limited) curvature infor...

Journal: Medical Image Analysis, 2008
Erik Dam P. Thomas Fletcher Stephen M. Pizer

We present a novel method for automatic shape model building from a collection of training shapes. The result is a shape model consisting of the mean model and the major modes of variation with a dense correspondence map between individual shapes. The framework consists of iterations where a medial shape representation is deformed into the training shapes followed by computation of the shape me...

2010
George Saon Hagen Soltau

We employ a variant of the popular Adaboost algorithm to train multiple acoustic models such that the aggregate system exhibits improved performance over the individual recognizers. Each model is trained sequentially on re-weighted versions of the training data. At each iteration, the weights are decreased for the frames that are correctly decoded by the current system. These weights are then m...
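
The re-weighting loop described above follows the generic boosting template. Below is a minimal sketch of that scheme, assuming hypothetical train_model and decode callables in place of a real acoustic-model toolkit; the down-weighting factor beta is likewise an illustrative choice, not the paper's.

```python
import numpy as np

def boosted_training(frames, labels, n_models, train_model, decode, beta=0.5):
    # Sequentially train acoustic models on re-weighted frames.
    # train_model(frames, labels, weights) -> model   (hypothetical)
    # decode(models, frames) -> predicted labels      (hypothetical)
    n = len(frames)
    weights = np.full(n, 1.0 / n)        # start from uniform frame weights
    models = []
    for _ in range(n_models):
        model = train_model(frames, labels, weights)   # weighted training pass
        models.append(model)
        correct = decode(models, frames) == labels     # frames the aggregate decodes correctly
        weights[correct] *= beta                       # decrease weights of correct frames
        weights /= weights.sum()                       # re-normalize to a distribution
    return models
```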

Journal: CoRR, 2012
Jia Zeng Zhi-Qiang Liu Xiao-Qin Cao

Latent Dirichlet allocation (LDA) is a widely used probabilistic topic modeling paradigm and has recently found many applications in computer vision and computational biology. In this paper, we propose a fast and accurate batch algorithm, active belief propagation (ABP), for training LDA. Usually, batch LDA algorithms require repeated scanning of the entire corpus and searching the complete topic s...

2018
Lisa Lee Emilio Parisotto Devendra Singh Ruslan Salakhutdinov

Our motivation is to scale value iteration to larger environments without a huge increase in computational demand, and fix the problems inherent to Value Iteration Networks (VIN) such as spatial invariance and unstable optimization. We show that VINs, and even extended VINs which improve some of their shortcomings, are empirically difficult to optimize, exhibiting instability during training an...

Journal: Journal of Linear and Topological Algebra (JLTA)
M. Amirfakhrian, Department of Mathematics, Islamic Azad University, Central Tehran Branch, P.O. Code 14168-94351, Iran; F. Mohammad, Department of Mathematics, Islamic Azad University, Central Tehran Branch, P.O. Code 14168-94351, Iran

In this paper, we present an inexact inverse subspace iteration method for computing a few eigenpairs of the generalized eigenvalue problem Ax = λBx [Q. Ye and P. Zhang, Inexact inverse subspace iteration for generalized eigenvalue problems, Linear Algebra and its Applications, 434 (2011) 1697-1715]. In particular, the linear convergence property of the inverse subspace iteration is preserved.
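
For readers unfamiliar with the method, here is a rough sketch of (inexact) inverse subspace iteration for Ax = λBx, where the inner linear systems are solved only approximately. The GMRES inner solver, the iteration counts, and the Rayleigh-Ritz extraction step are assumptions for illustration, not the algorithm of Ye and Zhang verbatim.

```python
import numpy as np
from scipy.sparse.linalg import gmres

def inexact_inverse_subspace_iteration(A, B, k, n_outer=50):
    # Approximate k eigenpairs of A x = lambda B x closest to zero.
    n = A.shape[0]
    X, _ = np.linalg.qr(np.random.randn(n, k))            # random orthonormal start block
    for _ in range(n_outer):
        rhs = B @ X
        # Inexact step: solve A Y = B X only roughly, with a few GMRES sweeps.
        Y = np.column_stack([gmres(A, rhs[:, j], maxiter=5)[0] for j in range(k)])
        X, _ = np.linalg.qr(Y)                             # re-orthonormalize the block
    # Rayleigh-Ritz extraction of approximate eigenpairs from the subspace.
    Am, Bm = X.T @ (A @ X), X.T @ (B @ X)
    evals, V = np.linalg.eig(np.linalg.solve(Bm, Am))
    return evals, X @ V
```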

2017
Max Pflueger

Value iteration networks are an approximation of the value iteration (VI) algorithm implemented with convolutional neural networks to make VI fully differentiable. In this work, we study these networks in the context of robot motion planning, with a focus on applications to planetary rovers. The key challenging task in learning-based motion planning is to learn a transformation from terrain obse...
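
The core observation behind VINs is that, on a grid, each value-iteration backup can be written as a convolution over the value map followed by a maximum over actions. The sketch below illustrates that reformulation with hand-coded, deterministic transition kernels; in an actual VIN the kernels are learned and the whole loop is differentiated through, so everything here is an illustrative assumption.

```python
import numpy as np
from scipy.signal import convolve2d

def grid_value_iteration(reward, n_iter=40, gamma=0.95):
    # Bellman backups on a 2-D grid written as convolutions plus a max over actions.
    # Each 3x3 one-hot kernel copies the value of one neighbouring cell.
    kernels = [np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]], float),
               np.array([[0, 0, 0], [0, 0, 0], [0, 1, 0]], float),
               np.array([[0, 0, 0], [1, 0, 0], [0, 0, 0]], float),
               np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]], float)]
    value = np.zeros_like(reward, dtype=float)
    for _ in range(n_iter):
        # One "Q-map" per action, then the value map is the per-cell maximum.
        q = np.stack([reward + gamma * convolve2d(value, k, mode="same")
                      for k in kernels])
        value = q.max(axis=0)
    return value
```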

Journal: Pattern Recognition, 2016
Weixin Yang Lianwen Jin Dacheng Tao Zecheng Xie Ziyong Feng

Inspired by the theory of Leitner’s learning box from the field of psychology, we propose DropSample, a new method for training deep convolutional neural networks (DCNNs), and apply it to large-scale online handwritten Chinese character recognition (HCCR). According to the principle of DropSample, each training sample is associated with a quota function that is dynamically adjusted on the basis...
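
The abstract is truncated before the quota update is specified, so the following is only a guess at how a DropSample-style quota could gate mini-batch sampling: the sampling probabilities, the quota decay rule, and the fit_batch/confidence interfaces are all hypothetical.

```python
import numpy as np

def dropsample_epoch(model, samples, labels, quotas, batch_size=128):
    # One training pass in which mini-batches are drawn in proportion to
    # per-sample quotas, and quotas shrink for confidently classified samples.
    # model.fit_batch and model.confidence are hypothetical interfaces.
    n = len(samples)
    for _ in range(max(1, n // batch_size)):
        probs = quotas / quotas.sum()                        # sampling distribution from quotas
        idx = np.random.choice(n, size=batch_size, p=probs)  # quota-weighted mini-batch
        model.fit_batch(samples[idx], labels[idx])
        conf = model.confidence(samples[idx], labels[idx])   # per-sample confidence in [0, 1]
        quotas[idx] = np.maximum(quotas[idx] * (1.0 - conf), 1e-8)  # assumed decay rule
    return quotas
```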

1999
Jerome H. Friedman

Gradient boosting constructs additive regression models by sequentially fitting a simple parameterized function (base learner) to current "pseudo"-residuals by least squares at each iteration. The pseudo-residuals are the gradient of the loss functional being minimized, with respect to the model values at each training data point, evaluated at the current step. It is shown that both the approxima...
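
The procedure lends itself to a compact sketch. The version below assumes squared-error loss (so the pseudo-residuals are simply y - F) and a shallow regression tree as the base learner; both choices are mine for illustration, not prescribed by the abstract.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gradient_boost_ls(X, y, n_iter=100, learning_rate=0.1, max_depth=3):
    # Gradient boosting for squared-error loss: at every iteration a shallow
    # regression tree (an assumed base learner) is fit by least squares to the
    # pseudo-residuals, i.e. the negative gradient of the loss with respect to
    # the current model values at each training point.
    f0 = y.mean()
    F = np.full(len(y), f0)                       # initial constant model
    trees = []
    for _ in range(n_iter):
        residuals = y - F                         # pseudo-residuals for 0.5*(y - F)^2
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residuals)
        F = F + learning_rate * tree.predict(X)   # shrunken additive update
        trees.append(tree)
    return trees, f0

def gb_predict(trees, f0, X, learning_rate=0.1):
    return f0 + learning_rate * sum(tree.predict(X) for tree in trees)
```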

Journal: Procesamiento del Lenguaje Natural, 2006
Fernando Enríquez José Antonio Troyano Jiménez Fermín L. Cruz F. Javier Ortega

The availability of an extensive tagged corpus is essential in many NLP tasks. The effort required to tag such a large number of phrases manually has encouraged many researchers, like us, to create automatic applications for this task. Our approach is a completely automatic method (optionally applying a minimum of manual effort) for enlarging an already existing corpus, so it acquires...

[Chart: number of search results per year]