Search results for: training algorithms
Number of results: 629109
Gaussian processes (GPs) are important models in supervised machine learning. Training in Gaussian processes refers to selecting the covariance functions and the associated parameters in order to improve the outcome of predictions, the core of which amounts to evaluating the logarithm of the marginal likelihood (LML) of a given model. LML gives a concrete measure of the quality of prediction th...
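The LML the snippet refers to has a closed form: log p(y|X) = −½ yᵀK⁻¹y − ½ log|K| − (n/2) log 2π, with K the covariance matrix plus observation noise. A minimal sketch of evaluating it via a Cholesky factorisation, assuming a squared-exponential covariance on scalar inputs (the kernel choice and hyperparameter values here are illustrative, not from the paper):

```python
import math

def rbf_kernel(x1, x2, length_scale=1.0, signal_var=1.0):
    # Squared-exponential covariance between two scalar inputs.
    return signal_var * math.exp(-0.5 * ((x1 - x2) / length_scale) ** 2)

def cholesky(A):
    # Naive Cholesky factorisation A = L L^T for a small SPD matrix.
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def solve_lower(L, b):
    # Forward substitution: solve L x = b.
    n = len(b)
    x = [0.0] * n
    for i in range(n):
        x[i] = (b[i] - sum(L[i][k] * x[k] for k in range(i))) / L[i][i]
    return x

def log_marginal_likelihood(xs, ys, length_scale=1.0, signal_var=1.0, noise_var=0.1):
    # LML = -1/2 y^T K^{-1} y - 1/2 log|K| - n/2 log(2 pi),
    # with K = k(X, X) + noise_var * I, evaluated via K = L L^T.
    n = len(xs)
    K = [[rbf_kernel(xs[i], xs[j], length_scale, signal_var)
          + (noise_var if i == j else 0.0) for j in range(n)] for i in range(n)]
    L = cholesky(K)
    alpha = solve_lower(L, ys)   # alpha = L^{-1} y, so y^T K^{-1} y = |alpha|^2
    quad = sum(a * a for a in alpha)
    logdet = 2.0 * sum(math.log(L[i][i]) for i in range(n))  # log|K| = 2 sum log L_ii
    return -0.5 * quad - 0.5 * logdet - 0.5 * n * math.log(2.0 * math.pi)

xs = [0.0, 0.5, 1.0, 1.5]
ys = [0.0, 0.47, 0.84, 1.0]   # roughly sin(x), for illustration
print(log_marginal_likelihood(xs, ys))
```

"Training" in this setting means choosing `length_scale`, `signal_var`, and `noise_var` to maximise this quantity.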
New structure and training algorithms of the RBF-type neural network are proposed. An extra neuron layer is added to realize the principal component method. A novel training algorithm is designed for training each separate neuron in the hidden layer, which guarantees the efficiency and finiteness of the training procedure. Results were obtained for a variety of problems. In particular, the resu...
Training neural networks is a complex task of great importance in the supervised learning field of research. In this work we tackle this problem with five algorithms, and try to offer a set of results that could hopefully foster future comparisons by following a kind of standard evaluation of the results (the Prechelt approach). To achieve our goal of studying in the same paper population based...
Multilayered feedforward neural networks possess a number of properties which make them particularly suited to complex pattern classification problems. However, their application to some real-world problems has been hampered by the lack of a training algorithm which reliably finds a nearly globally optimal set of weights in a relatively short time. Genetic algorithms are a class of optimization p...
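The idea of the snippet above is that a genetic algorithm searches the weight space directly instead of following gradients. A toy sketch under assumed details (truncation selection, Gaussian mutation, a single threshold neuron learning logical AND rather than a multilayer network):

```python
import random

random.seed(0)

# A genetic algorithm evolves the weights of one threshold neuron
# so that it computes logical AND on binary inputs.
DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def fitness(w):
    # w = (w1, w2, bias); fitness = number of correctly classified patterns.
    correct = 0
    for (x1, x2), y in DATA:
        out = 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0
        correct += (out == y)
    return correct

def mutate(w, scale=0.5):
    # Gaussian perturbation of every weight.
    return tuple(wi + random.gauss(0.0, scale) for wi in w)

def evolve(pop_size=20, generations=30):
    pop = [tuple(random.uniform(-1, 1) for _ in range(3)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]   # truncation selection: keep the best half
        children = [mutate(random.choice(parents)) for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Because fitness needs only forward passes, the same loop applies unchanged to multilayer networks whose error surface defeats gradient methods, which is the case the abstract targets.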
Systems for text retrieval, routing, categorization and other IR tasks rely heavily on linear classifiers. We propose that two machine learning algorithms, the Widrow-Hoff and EG algorithms, be used in training linear text classifiers. In contrast to most IR methods, theoretical analysis provides performance guarantees and guidance on parameter settings for these algorithms. Experimental data is p...
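The Widrow-Hoff update mentioned above is the LMS rule w ← w + η(y − w·x)x applied per document. A minimal sketch on a hand-made bag-of-words dataset (the vocabulary, documents, and learning rate are illustrative, not from the paper):

```python
# Widrow-Hoff (LMS) training of a linear text classifier.
VOCAB = ["ball", "goal", "vote", "election"]

def vec(doc):
    # Bag-of-words term counts over the fixed vocabulary.
    words = doc.split()
    return [words.count(t) for t in VOCAB]

# Label +1 = sports, -1 = politics.
DOCS = [("ball goal goal", 1), ("goal ball", 1),
        ("vote election", -1), ("election vote vote", -1)]

def train_lms(docs, lr=0.1, epochs=50):
    w = [0.0] * len(VOCAB)
    for _ in range(epochs):
        for doc, y in docs:
            x = vec(doc)
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = y - pred   # squared-loss error signal
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]  # Widrow-Hoff update
    return w

w = train_lms(DOCS)
scores = [sum(wi * xi for wi, xi in zip(w, vec(d))) for d, _ in DOCS]
print([round(s, 2) for s in scores])
```

EG (exponentiated gradient) differs only in the update, multiplying weights by exponentials of the gradient instead of adding it, which favours sparse solutions on large vocabularies.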
Stochastic neighbor embedding (SNE) and related nonlinear manifold learning algorithms achieve high-quality low-dimensional representations of similarity data, but are notoriously slow to train. We propose a generic formulation of embedding algorithms that includes SNE and other existing algorithms, and study their relation with spectral methods and graph Laplacians. This allows us to define se...
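The link to spectral methods that the snippet mentions runs through the graph Laplacian L = D − W and the identity yᵀLy = ½ Σᵢⱼ wᵢⱼ(yᵢ − yⱼ)², the quadratic form spectral embeddings minimise. A small numeric check on a toy similarity graph (the affinities are made up for illustration):

```python
# Graph Laplacian L = D - W and the quadratic-form identity behind
# spectral embedding methods.
W = [[0.0, 1.0, 0.2],
     [1.0, 0.0, 0.5],
     [0.2, 0.5, 0.0]]          # symmetric affinities, zero diagonal
n = len(W)
D = [sum(row) for row in W]    # degrees
L = [[(D[i] if i == j else 0.0) - W[i][j] for j in range(n)] for i in range(n)]

y = [0.3, -1.0, 2.0]           # an arbitrary 1-D embedding of the 3 points
quad = sum(y[i] * L[i][j] * y[j] for i in range(n) for j in range(n))
pairwise = 0.5 * sum(W[i][j] * (y[i] - y[j]) ** 2
                     for i in range(n) for j in range(n))
print(round(quad, 6), round(pairwise, 6))   # the two values agree
```

Minimising this form places strongly similar points (large wᵢⱼ) close together, which is also what SNE's probabilistic objective encourages.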
BACKGROUND Generating good training datasets is essential for machine learning-based nuclei detection methods. However, creating exhaustive nuclei contour annotations, to derive optimal training data from, is often infeasible. METHODS We compared different approaches for training nuclei detection methods solely based on nucleus center markers. Such markers contain less accurate information, e...
One of the machine learning tasks is supervised learning. In supervised learning we infer a function from labeled training data. The goal of supervised learning algorithms is to learn a good hypothesis that minimizes the sum of the errors. A wide range of supervised algorithms is available, such as decision trees, SVM, and KNN methods. In this paper we focus on decision tree algorithms. When we ...
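The core step a decision tree learner repeats recursively is finding the split that most reduces impurity. A minimal sketch of that step for one feature, assuming the common Gini criterion (the data and threshold search are illustrative):

```python
def gini(labels):
    # Gini impurity 1 - sum_c p_c^2 of a label multiset.
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_split(xs, ys):
    # Try every observed value as a threshold; return the one minimising
    # the size-weighted Gini impurity of the two children.
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best[1]:
            best = (t, score)
    return best

xs = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
ys = ["a", "a", "a", "b", "b", "b"]
print(best_split(xs, ys))   # → (3.0, 0.0): splitting at 3.0 separates the classes
```

A full tree learner applies `best_split` to each node's subset of the data until the leaves are pure or a depth limit is reached.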
Although a large number of new image classification algorithms have been developed, they are rarely tested with the same classification task. In this research, with the same Landsat Thematic Mapper (TM) data set and the same classification scheme over Guangzhou City, China, we tested two unsupervised and 13 supervised classification algorithms, including a number of machine learning algorithms ...