Search results for: Training iteration
Number of results: 358,779. Filter results by year:
This paper proposes an implicit surface reconstruction algorithm based on Self-Organising Maps (SOMs). The SOM has the connectivity of a regular 3D grid, each node storing its signed distance from the surface. At each iteration of the basic algorithm, a new training set is created by sampling regularly along the normals of the input points. The main training iteration consists of a competitive ...
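The training-set construction described in this snippet (regular sampling along point normals, paired with signed distances) can be sketched as follows; the function name and parameters are illustrative, not from the paper:

```python
import numpy as np

def sample_along_normals(points, normals, k=5, max_d=0.1):
    """Build a (sample, signed distance) training set by sampling k
    positions regularly along each input point's unit normal, from
    -max_d to +max_d; the distance is 0 at the surface point itself."""
    ds = np.linspace(-max_d, max_d, k)                       # signed offsets
    samples = points[:, None, :] + ds[None, :, None] * normals[:, None, :]
    dists = np.broadcast_to(ds, samples.shape[:2])           # same offsets per point
    return samples.reshape(-1, 3), dists.reshape(-1)
```

Each SOM node storing a signed distance would then be trained competitively against these (sample, distance) pairs.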
It is well known that boosting-like algorithms, such as AdaBoost and many of its modifications, may over-fit the training data when the number of boosting iterations becomes large. Therefore, how to stop a boosting algorithm at an appropriate iteration has been a longstanding problem for the past decade (see Meir and Rätsch (2003)). Bühlmann and Yu (2005) apply model selection criteria to est...
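A common practical stand-in for the model-selection criteria mentioned above is validation-based early stopping: track the per-iteration validation error of the boosted ensemble and halt once it stops improving. A minimal sketch (the patience rule is an assumption, not from the cited papers):

```python
def stop_iteration(val_errors, patience=5):
    """Return the boosting round with the lowest validation error,
    halting the scan once `patience` rounds pass without improvement."""
    best, best_t = float("inf"), 0
    for t, e in enumerate(val_errors):
        if e < best:
            best, best_t = e, t          # new best round
        elif t - best_t >= patience:
            break                        # no improvement for `patience` rounds
    return best_t
```

The returned index is the iteration at which to truncate the ensemble.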
Since manual labelling of huge data sets is costly and time-consuming, we propose a framework for iterative confidence-based self-learning of classifiers which autonomously extends its knowledge gained based on a small amount of initial, manually labelled training samples towards increasingly different representatives of the regarded pattern classes. During each iteration, the labels of newly s...
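The iterative confidence-based self-learning loop described above can be sketched as follows; a nearest-centroid classifier and a distance-margin confidence measure stand in for the paper's (unspecified) classifier, and the threshold is illustrative:

```python
import numpy as np

def self_train(X_lab, y_lab, X_unl, threshold=2.0, max_rounds=10):
    """Each round: fit centroids on the labelled set, pseudo-label the
    unlabelled points whose margin between the two closest class
    centroids exceeds `threshold`, and absorb them into the training set."""
    X_lab, y_lab, pool = X_lab.copy(), y_lab.copy(), X_unl.copy()
    for _ in range(max_rounds):
        if len(pool) == 0:
            break
        classes = np.unique(y_lab)
        cents = np.stack([X_lab[y_lab == c].mean(axis=0) for c in classes])
        d = np.linalg.norm(pool[:, None, :] - cents[None, :, :], axis=2)
        order = np.sort(d, axis=1)
        margin = order[:, 1] - order[:, 0]   # confidence proxy
        conf = margin > threshold
        if not conf.any():
            break                            # nothing confident left
        pseudo = classes[np.argmin(d[conf], axis=1)]
        X_lab = np.vstack([X_lab, pool[conf]])
        y_lab = np.concatenate([y_lab, pseudo])
        pool = pool[~conf]
    return X_lab, y_lab
```

Points whose margin never exceeds the threshold are left unlabelled, which is the mechanism that keeps low-confidence pseudo-labels out of the training set.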
In this paper a new algorithm is proposed for fast discriminative training of hidden Markov models (HMMs) based on minimum classification error (MCE). The algorithm is able to train acoustic models in a few iterations, thus overcoming the slow training speed typical of discriminative training methods based on gradient descent. The algorithm tries to cancel the gradient of the objective funct...
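The MCE objective this snippet refers to is typically built from a misclassification measure that compares the correct class's score against a soft-max over competitors, smoothed by a sigmoid. A sketch of that standard criterion (the `eta` and `alpha` smoothing parameters are assumptions, not values from the paper):

```python
import numpy as np

def mce_loss(scores, label, eta=1.0, alpha=1.0):
    """Standard MCE smoothed error: d compares the correct-class score
    with a soft-max of the competing scores; sigmoid(alpha*d) maps the
    margin into a differentiable 0/1-style classification error."""
    g = scores[label]
    rest = np.delete(scores, label)                  # competitor scores
    d = -g + np.log(np.mean(np.exp(eta * rest))) / eta
    return 1.0 / (1.0 + np.exp(-alpha * d))
```

A confident correct classification gives a loss near 0; a confident error gives a loss near 1.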
Cerebellar Model Articulation Controller Neural Network is a computational model of cerebellum which acts as a lookup table. The advantages of CMAC are fast learning convergence, and capability of mapping nonlinear functions due to its local generalization of weight updating, single structure and easy processing. In the training phase, the disadvantage of some CMAC models is unstable phenomenon...
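The lookup-table behaviour and local weight updating described above can be sketched with a 1-D toy CMAC; tile counts, ranges, and learning rate are illustrative, not from the paper:

```python
import numpy as np

class CMAC1D:
    """Minimal 1-D CMAC: several offset tilings act as a lookup table;
    a query activates one cell per tiling, and training touches only
    those few cells, giving fast, local learning."""
    def __init__(self, n_tilings=8, n_tiles=32, lo=0.0, hi=1.0):
        self.n_tilings, self.n_tiles = n_tilings, n_tiles
        self.lo, self.width = lo, (hi - lo) / (n_tiles - 1)
        self.w = np.zeros((n_tilings, n_tiles + 1))

    def _cells(self, x):
        # each tiling is shifted by a fraction of one tile width
        offs = np.arange(self.n_tilings) / self.n_tilings
        idx = ((x - self.lo) / self.width + offs).astype(int)
        return np.clip(idx, 0, self.n_tiles)

    def predict(self, x):
        # output = sum of the one active weight in each tiling
        return self.w[np.arange(self.n_tilings), self._cells(x)].sum()

    def train(self, x, target, lr=0.5):
        # spread the error equally over the active cells only
        err = target - self.predict(x)
        self.w[np.arange(self.n_tilings), self._cells(x)] += lr * err / self.n_tilings
```

Because only overlapping cells near the query move, nearby inputs generalize to each other while distant ones are untouched.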
We consider empirical risk minimization for large-scale datasets. We introduce Ada Newton as an adaptive algorithm that uses Newton’s method with adaptive sample sizes. The main idea of Ada Newton is to increase the size of the training set by a factor larger than one in a way that the minimization variable for the current training set is in the local neighborhood of the optimal argument of the...
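The core loop described above, growing the sample and taking a Newton step so the previous iterate stays in the new problem's local neighborhood, can be sketched for regularized logistic-loss ERM; the growth factor, initial sample size, and regularizer here are illustrative, not the paper's settings:

```python
import numpy as np

def ada_newton(X, y, growth=2.0, n0=32, reg=1e-3):
    """Adaptive-sample-size Newton sketch: take one Newton step on the
    current subset, then enlarge the subset by `growth`, so each new
    subproblem starts from a warm, locally convergent iterate."""
    N, d = X.shape
    w = np.zeros(d)
    n = min(n0, N)
    while True:
        Xs, ys = X[:n], y[:n]
        p = 1.0 / (1.0 + np.exp(-(Xs @ w)))            # sigmoid predictions
        g = Xs.T @ (p - ys) / n + reg * w              # gradient of reg. logistic loss
        H = (Xs.T * (p * (1 - p))) @ Xs / n + reg * np.eye(d)  # Hessian
        w = w - np.linalg.solve(H, g)                  # one Newton step
        if n == N:
            break                                      # full batch reached
        n = min(int(growth * n), N)
    return w
```

With doubling, the full dataset is reached after O(log N) Newton steps, which is what makes the scheme attractive for large-scale ERM.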
Distributed deep learning systems respond effectively to the increasing demand for large-scale data processing in recent years. However, the significant investment required to build distributed systems with powerful computing nodes places a huge financial burden on developers and researchers. It would be useful to predict the precise benefit, i.e., how many times of speedup can be gained compared with training on a single machine (or few...
On-the-job training is one of the most effective tools for managers to cope with the changing organizational environment. It guarantees suitable services to customers, particularly in public service enterprises. If such training is goal-oriented, planned systematically, and tailored to the employees' job content, then not only could it increase employee and organizational performance, but also it c...