Search results for: training algorithms

Number of results: 629,109

2007
Bill B. Wang, R. I. McKay, Hussein A. Abbass, Michael Barlow

We present a novel method employing a hierarchical domain ontology structure to select features representing documents. All raw words in the training documents are mapped to concepts in a domain ontology. Based on these concepts, a concept hierarchy is established for the training document space, using is-a relationships defined in the domain ontology. An optimum concept set may be obtained by ...
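As a rough illustration of the mapping step described above, here is a minimal Python sketch using a toy is-a ontology; the dictionaries `IS_A` and `WORD_TO_CONCEPT`, and the one-level generalization, are illustrative assumptions, not the paper's actual ontology or concept-selection procedure:

```python
# Toy is-a ontology: child concept -> parent concept (illustrative only).
IS_A = {
    "spaniel": "dog",
    "terrier": "dog",
    "dog": "animal",
    "cat": "animal",
}

# Toy lexicon: raw word -> concept (illustrative only).
WORD_TO_CONCEPT = {
    "puppy": "dog",
    "kitten": "cat",
    "spaniels": "spaniel",
}

def map_document(words):
    """Map raw words to ontology concepts, dropping unmapped words."""
    return [WORD_TO_CONCEPT[w] for w in words if w in WORD_TO_CONCEPT]

def generalize(concept, levels=1):
    """Climb the is-a hierarchy to a more general concept."""
    for _ in range(levels):
        concept = IS_A.get(concept, concept)
    return concept

doc = ["puppy", "kitten", "spaniels", "runs"]
mapped = map_document(doc)                 # ['dog', 'cat', 'spaniel']
print([generalize(c) for c in mapped])     # ['animal', 'animal', 'dog']
```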

This paper investigates extractive speech summarization using different machine learning algorithms. The task of speech summarization is to extract important and salient segments from speech so that speech files can be accessed, searched, extracted, and browsed more easily and at lower cost. In this paper, a new method for speech summarization without using automatic speech recognitio...
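The ASR-free method itself is not detailed in this snippet. As a generic illustration of extractive selection, the following sketch ranks segments by a precomputed salience score (a hypothetical feature such as acoustic or prosodic prominence) and keeps the top k in their original order:

```python
def summarize(segments, scores, k=3):
    """Return the k highest-scoring segments in original order."""
    ranked = sorted(range(len(segments)), key=lambda i: scores[i], reverse=True)
    keep = sorted(ranked[:k])          # restore chronological order
    return [segments[i] for i in keep]

segments = ["seg0", "seg1", "seg2", "seg3", "seg4"]
scores = [0.2, 0.9, 0.1, 0.7, 0.5]     # hypothetical salience scores
print(summarize(segments, scores, k=2))  # ['seg1', 'seg3']
```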

2013
Ahmed A. Abusnaina, Rosni Abdullah

Training an artificial neural network (ANN) is an optimization task, since the goal is to find optimal neuron weights through an iterative training process. Traditional training algorithms have drawbacks such as entrapment in local minima and slow convergence. Therefore, evolutionary algorithms are used to train neural networks and overcome these issues. This research tackles the ANN tra...
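As a minimal sketch of the evolutionary approach, the code below evolves the flat weight vector of a tiny 2-2-1 network on XOR with a simple mutation-and-selection loop; the population size, mutation scale, and network shape are illustrative choices, not the paper's specific algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)   # XOR targets

def forward(w, x):
    # Unpack a flat 9-element weight vector into a 2-2-1 network.
    W1 = w[:4].reshape(2, 2); b1 = w[4:6]
    W2 = w[6:8];              b2 = w[8]
    h = np.tanh(x @ W1 + b1)
    return np.tanh(h @ W2 + b2)

def fitness(w):
    pred = np.array([forward(w, x) for x in X])
    return -np.mean((pred - y) ** 2)       # negative MSE: higher is better

pop = rng.normal(size=(30, 9))             # population of weight vectors
for gen in range(200):
    fit = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(fit)[-10:]]   # keep the 10 fittest
    children = parents[rng.integers(0, 10, size=20)] \
        + rng.normal(scale=0.3, size=(20, 9))  # mutated offspring
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(w) for w in pop])]
print([round(float(forward(best, x)), 2) for x in X])
```

Because selection and mutation never use gradients, this kind of search is not attracted to the local minima that trap gradient-based training, at the cost of many more fitness evaluations.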

1997
Nicolaos B. Karayiannis, Glenn Weiqun Mi

This paper proposes a framework for constructing and training radial basis function (RBF) neural networks. The proposed growing radial basis function (GRBF) network begins with a small number of prototypes, which determine the locations of radial basis functions. In the process of training, the GRBF network grows by splitting one of the prototypes at each growing cycle. Two splitting criteria a...
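A rough sketch of one growing cycle might look as follows; the error-mass splitting criterion used here is a stand-in, since the paper's two actual splitting criteria are truncated in this snippet:

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_activations(X, prototypes, width=1.0):
    """Gaussian RBF activations of each sample at each prototype."""
    d2 = ((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def split_worst_prototype(X, residuals, prototypes, eps=0.05):
    """Replace the highest-error prototype with two perturbed copies."""
    phi = rbf_activations(X, prototypes)
    # Error mass attributed to each prototype (illustrative criterion).
    err = (phi * np.abs(residuals)[:, None]).sum(axis=0)
    worst = int(np.argmax(err))
    p = prototypes[worst]
    pair = np.stack([p + eps * rng.normal(size=p.shape),
                     p - eps * rng.normal(size=p.shape)])
    return np.vstack([np.delete(prototypes, worst, axis=0), pair])

X = rng.normal(size=(50, 2))
residuals = rng.normal(size=50)            # stand-in for current fit errors
protos = rng.normal(size=(3, 2))
protos = split_worst_prototype(X, residuals, protos)
print(protos.shape)  # (4, 2): the network has grown by one prototype
```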

Journal: CoRR, 2012
Jia Zeng, Zhi-Qiang Liu, Xiao-Qin Cao

Latent Dirichlet allocation (LDA) is a widely used probabilistic topic modeling paradigm that has recently found many applications in computer vision and computational biology. In this paper, we propose a fast and accurate batch algorithm, active belief propagation (ABP), for training LDA. Batch LDA algorithms usually require repeatedly scanning the entire corpus and searching the complete topic s...
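The sketch below only illustrates the active-scheduling idea, i.e., spending each sweep on the documents whose messages changed most instead of scanning everything; the random stand-in update is not Zeng et al.'s belief-propagation step:

```python
import numpy as np

rng = np.random.default_rng(2)
n_docs, n_topics = 100, 10
theta = rng.dirichlet(np.ones(n_topics), size=n_docs)  # doc-topic messages
residual = np.full(n_docs, np.inf)   # change since last update (inf = never updated)

for sweep in range(5):
    # Update only the 20% of documents with the largest residuals.
    active = np.argsort(residual)[-n_docs // 5:]
    for d in active:
        new = rng.dirichlet(np.ones(n_topics))  # stand-in for a real BP update
        residual[d] = np.abs(new - theta[d]).sum()
        theta[d] = new

print("updated", len(active), "of", n_docs, "documents in the last sweep")
```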

Journal: Journal of Machine Learning Research, 2010
Pedro A. Forero, Alfonso Cano, Georgios B. Giannakis

This paper develops algorithms to train support vector machines when training data are distributed across different nodes, and their communication to a centralized processing unit is prohibited due to, for example, communication complexity, scalability, or privacy reasons. To accomplish this goal, the centralized linear SVM problem is cast as a set of decentralized convex optimization subproble...
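As a toy illustration of the distributed setting, the sketch below has each node take local subgradient steps on a hinge-loss linear SVM over its private shard and then average its weight vector with the others; plain averaging is an assumption made here for brevity, whereas the paper solves decentralized convex subproblems:

```python
import numpy as np

rng = np.random.default_rng(3)

def local_step(w, X, y, lam=0.01, lr=0.1):
    """One subgradient step on the regularized hinge-loss objective."""
    margins = y * (X @ w)
    viol = margins < 1
    if viol.any():
        grad = lam * w - (y[viol][:, None] * X[viol]).mean(axis=0)
    else:
        grad = lam * w
    return w - lr * grad

# Three nodes, each holding a private shard of linearly separable data.
shards = []
for _ in range(3):
    Xn = rng.normal(size=(40, 2))
    yn = np.sign(Xn @ np.array([2.0, -1.0]))
    shards.append((Xn, yn))

ws = [np.zeros(2) for _ in shards]
for it in range(100):
    ws = [local_step(w, X, y) for w, (X, y) in zip(ws, shards)]
    mean_w = np.mean(ws, axis=0)             # consensus round
    ws = [0.5 * w + 0.5 * mean_w for w in ws]

print([w.round(2) for w in ws])  # nodes converge to a common separator
```

Note that only the 2-dimensional weight vectors ever leave a node; the raw training shards stay private, which is the point of the decentralized formulation.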

Journal: CoRR, 2013
Alexandros Ntoulas, Omar Alonso, Vasileios Kandylas

As the number of applications that use machine learning algorithms increases, the need for labeled data useful for training such algorithms intensifies. Getting labels typically involves employing humans to do the annotation, which directly translates to training and working costs. Crowdsourcing platforms have made labeling cheaper and faster, but they still involve significant costs, especiall...

Journal: Journal of Machine Learning Research, 2005
Günther Eibl, Karl Peter Pfeiffer

AdaBoost.M2 is a boosting algorithm designed for multiclass problems with weak base classifiers. The algorithm is designed to minimize a very loose bound on the training error. We propose two alternative boosting algorithms which also minimize bounds on performance measures. These performance measures are not as strongly connected to the expected error as the training error, but the derived bou...
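For reference, the core reweighting loop of binary AdaBoost is sketched below; AdaBoost.M2 generalizes this to multiclass problems via a pseudo-loss, which is not reproduced here:

```python
import numpy as np

def adaboost(X, y, stumps, rounds=10):
    """y in {-1,+1}; stumps is a list of weak classifiers f(X) -> {-1,+1}."""
    n = len(y)
    D = np.full(n, 1 / n)                 # example weights
    ensemble = []
    for _ in range(rounds):
        # Pick the weak learner with the smallest weighted error.
        errs = [(D * (f(X) != y)).sum() for f in stumps]
        t = int(np.argmin(errs))
        eps = max(errs[t], 1e-12)
        if eps >= 0.5:
            break                          # no learner better than chance
        alpha = 0.5 * np.log((1 - eps) / eps)
        D *= np.exp(-alpha * y * stumps[t](X))   # upweight mistakes
        D /= D.sum()
        ensemble.append((alpha, stumps[t]))
    return lambda X: np.sign(sum(a * f(X) for a, f in ensemble))

X = np.array([[0.], [1.], [2.], [3.]])
y = np.array([-1, -1, 1, 1])
stumps = [lambda X, th=th: np.where(X[:, 0] > th, 1, -1)
          for th in (0.5, 1.5, 2.5)]
clf = adaboost(X, y, stumps)
print(clf(X))  # [-1 -1  1  1]
```

The training-error bound the abstract refers to comes from the fact that each round multiplies the normalizer of D by 2*sqrt(eps*(1-eps)) < 1, so the bound shrinks geometrically as long as every weak learner beats chance.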

2012
Ruud Koolen, Emiel Krahmer, Mariët Theune

One important subtask of Referring Expression Generation (REG) algorithms is to select the attributes in a definite description for a given object. In this paper, we study how much training data is required for algorithms to do this properly. We compare two REG algorithms in terms of their performance: the classic Incremental Algorithm and the more recent Graph algorithm. Both rely on a notion ...
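The classic Incremental Algorithm mentioned above can be sketched as follows: attributes are tried in a fixed preference order and kept whenever they rule out at least one remaining distractor (the toy domain below is hypothetical):

```python
def incremental_algorithm(target, distractors, preference_order):
    """Dale & Reiter's Incremental Algorithm for attribute selection."""
    description = {}
    remaining = list(distractors)
    for attr in preference_order:
        value = target[attr]
        ruled_out = [d for d in remaining if d.get(attr) != value]
        if ruled_out:                      # attribute has discriminatory power
            description[attr] = value
            remaining = [d for d in remaining if d.get(attr) == value]
        if not remaining:                  # target uniquely identified
            break
    return description  # may be non-distinguishing if attributes run out

target = {"type": "chair", "color": "red", "size": "large"}
distractors = [
    {"type": "chair", "color": "blue", "size": "large"},
    {"type": "table", "color": "red", "size": "small"},
]
print(incremental_algorithm(target, distractors, ["type", "color", "size"]))
# {'type': 'chair', 'color': 'red'} -> "the red chair"
```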

2011
Sudhir Kumar Sharma, Pravin Chandra

The generalization capability and training time of conventional neural networks depend on their architecture. In conventional neural networks the architecture must be defined before training, whereas constructive neural network (CoNN) algorithms construct the network architecture during the training process. This paper presents an overview of CoNN algorithms that construct feedforwar...
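A minimal constructive-training sketch follows: hidden units are grown one at a time and the output layer is refit by least squares, stopping when growth no longer helps. The random-unit scheme and plateau threshold are illustrative assumptions, not a specific CoNN algorithm from the survey:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0])                    # toy regression target

H = np.empty((200, 0))       # hidden-layer outputs, one column per unit
prev_err = np.inf
for n_units in range(1, 21):
    w = rng.normal(size=(1,)); b = rng.normal()
    H = np.column_stack([H, np.tanh(X @ w + b)])   # add one hidden unit
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # refit output weights
    err = np.mean((H @ beta - y) ** 2)
    if prev_err - err < 1e-4:                      # growth no longer helps
        break
    prev_err = err

print(f"stopped at {H.shape[1]} hidden units, mse={err:.4f}")
```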
