Search results for: batch and online learning

Number of results: 16,981,315

2009
Chih-Chieh Cheng, Fei Sha, Lawrence K. Saul

We propose an online learning algorithm for large margin training of continuous density hidden Markov models. The online algorithm updates the model parameters incrementally after the decoding of each training utterance. For large margin training, the algorithm attempts to separate the log-likelihoods of correct and incorrect transcriptions by an amount proportional to their Hamming distance. W...
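
To make the flavour of such an incremental large-margin update concrete (this is not the authors' CD-HMM formulation), the sketch below uses a toy linear sequence scorer and a fixed candidate list in place of real decoding; `rho`, the candidate set, and all function names are assumptions:

```python
import numpy as np

def hamming(y_true, y_pred):
    """Number of positions where two label sequences disagree."""
    return sum(a != b for a, b in zip(y_true, y_pred))

def score(W, X, y):
    """Toy per-frame linear score of labeling y for feature sequence X."""
    return sum(W[label] @ x for x, label in zip(X, y))

def online_large_margin_step(W, X, y_true, candidates, lr=0.1, rho=1.0):
    """One incremental update: find the most-violating candidate labeling and
    push the correct labeling's score above it by rho * Hamming distance."""
    worst, worst_violation = None, 0.0
    for y_cand in candidates:
        if list(y_cand) == list(y_true):
            continue
        violation = (score(W, X, y_cand) + rho * hamming(y_true, y_cand)
                     - score(W, X, y_true))
        if violation > worst_violation:
            worst, worst_violation = y_cand, violation
    if worst is None:            # margin already satisfied, no update
        return W
    # perceptron-style correction toward the true labeling
    for x, yt, yp in zip(X, y_true, worst):
        W[yt] += lr * x
        W[yp] -= lr * x
    return W

# toy usage: 3 labels, 4-dim frame features, a 5-frame "utterance"
rng = np.random.default_rng(0)
W = np.zeros((3, 4))
X = rng.normal(size=(5, 4))
y_true = [0, 1, 1, 2, 0]
candidates = [[0, 1, 1, 2, 0], [1, 1, 1, 2, 0], [0, 0, 0, 2, 2]]
W = online_large_margin_step(W, X, y_true, candidates)
```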

2003
Shai Shalev-Shwartz, Koby Crammer, Ofer Dekel, Yoram Singer

We present a unified view for online classification, regression, and uniclass problems. This view leads to a single algorithmic framework for the three problems. We prove worst case loss bounds for various algorithms for both the realizable case and the non-realizable case. A conversion of our main online algorithm to the setting of batch learning is also discussed. The end result is new algori...
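
A minimal sketch of one member of this family, assuming a passive-aggressive-style (PA-I) update for binary classification only; the unified view in the paper also covers regression and uniclass, and `pa_update` with its aggressiveness parameter `C` is an illustrative name:

```python
import numpy as np

def pa_update(w, x, y, C=1.0):
    """One passive-aggressive-style round for binary classification.
    y in {-1, +1}: suffer the hinge loss, then make the smallest update
    that corrects the violation, with the step capped by C."""
    loss = max(0.0, 1.0 - y * (w @ x))        # hinge loss on this round
    if loss == 0.0:
        return w                               # passive: margin already met
    tau = min(C, loss / (x @ x))               # PA-I step size
    return w + tau * y * x                     # aggressive: fix the mistake

# toy online run against a hidden linear concept
rng = np.random.default_rng(1)
w = np.zeros(5)
for _ in range(100):
    x = rng.normal(size=5)
    y = 1.0 if x[0] + 0.5 * x[1] > 0 else -1.0
    w = pa_update(w, x, y)
```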

Journal: CoRR, 2017
Chandan Gautam, Aruna Tiwari, Sundaram Suresh, Kapil Ahuja

This paper presents an online learning approach with a regularized kernel-based one-class extreme learning machine (ELM) classifier, referred to as “online RK-OC-ELM”. The baseline kernel hyperplane model considers the whole data in a single chunk with a regularized ELM approach for offline learning in the case of one-class classification (OCC). Further, the basic hyperplane model is adapted in an online fashio...
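
A rough sketch of the online flavour of such a one-class learner, assuming a random-feature ELM with a recursive least-squares update rather than the exact kernel RK-OC-ELM formulation; the class name and hyperparameters below are illustrative:

```python
import numpy as np

class OnlineOneClassELM:
    """Sketch: a random hidden layer maps inputs to features, and regularized
    recursive least squares fits output weights so that target-class samples
    map to 1. Illustrative only, not the paper's kernel formulation."""

    def __init__(self, dim, hidden=50, reg=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.Win = rng.normal(size=(hidden, dim))
        self.b = rng.normal(size=hidden)
        self.P = np.eye(hidden) / reg      # inverse regularized covariance
        self.beta = np.zeros(hidden)       # output weights

    def _features(self, x):
        return np.tanh(self.Win @ x + self.b)

    def partial_fit(self, x):
        """Recursive least-squares update toward target output 1."""
        h = self._features(x)
        Ph = self.P @ h
        k = Ph / (1.0 + h @ Ph)            # gain vector
        self.P -= np.outer(k, Ph)
        self.beta += k * (1.0 - h @ self.beta)

    def score(self, x):
        """Closer to 1 means more typical of the target class."""
        return self._features(x) @ self.beta

# toy usage: learn the target class online, then score a novel point
rng = np.random.default_rng(2)
model = OnlineOneClassELM(dim=3)
for _ in range(200):
    model.partial_fit(rng.normal(loc=0.0, size=3))   # target-class stream
print(model.score(rng.normal(loc=0.0, size=3)))       # typically close to 1
print(model.score(rng.normal(loc=5.0, size=3)))       # often deviates for an outlier
```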

Journal: Journal of Machine Learning Research, 2008
Michael Collins, Amir Globerson, Terry Koo, Xavier Carreras, Peter L. Bartlett

Log-linear and maximum-margin models are two commonly-used methods in supervised machine learning, and are frequently used in structured prediction problems. Efficient learning of parameters in these models is therefore an important problem, and becomes a key factor when learning from very large data sets. This paper describes exponentiated gradient (EG) algorithms for training such models, whe...
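
For intuition, here is the generic exponentiated-gradient update on the probability simplex, not the structured dual formulation developed in the paper; the toy problem and step size are assumptions:

```python
import numpy as np

def eg_step(w, grad, eta=0.1):
    """Generic exponentiated-gradient step: multiplicative update followed
    by renormalization, so w stays on the probability simplex."""
    w = w * np.exp(-eta * grad)
    return w / w.sum()

# toy usage: minimize ||X w - y||^2 subject to w >= 0 and sum(w) = 1
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 4))
w_true = np.array([0.6, 0.3, 0.1, 0.0])
y = X @ w_true
w = np.full(4, 0.25)                       # start at the uniform distribution
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
    w = eg_step(w, grad, eta=0.5)
print(np.round(w, 3))                       # approaches w_true
```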

Journal: CoRR, 2015
Chencheng Li, Pan Zhou

Online learning has long been in the spotlight of the machine learning community. To handle the massive data of the big data era, no single learner can finish this heavy task efficiently. Hence, in this paper, we propose a novel distributed online learning algorithm to solve the problem. Compared to a typical centralized online learner, the distributed learners optimize their own lea...
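
A minimal sketch of the general idea, assuming decentralized online gradient descent in which each node takes a local step on its own stream and then averages with its neighbours via a doubly stochastic mixing matrix; the loss, mixing weights, and function names are illustrative, not the paper's protocol:

```python
import numpy as np

def distributed_online_round(models, streams, mix, lr=0.05):
    """One round: each node takes a local gradient step on its own (x, y)
    example, then mixes its weights with its neighbours' via `mix`."""
    new = []
    for w, (x, y) in zip(models, streams):
        grad = 2 * (w @ x - y) * x          # squared-loss gradient
        new.append(w - lr * grad)
    return mix @ np.stack(new)               # consensus / gossip step

# toy usage: 3 learners, each seeing its own data stream
rng = np.random.default_rng(4)
dim, n_nodes = 5, 3
w_star = rng.normal(size=dim)
models = np.zeros((n_nodes, dim))
mix = np.array([[0.50, 0.25, 0.25],
                [0.25, 0.50, 0.25],
                [0.25, 0.25, 0.50]])         # doubly stochastic mixing weights
for _ in range(1000):
    streams = []
    for _ in range(n_nodes):
        x = rng.normal(size=dim)
        streams.append((x, w_star @ x))
    models = distributed_online_round(models, streams, mix)
print(np.round(models - w_star, 2))          # all nodes approach w_star
```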

2007
Nathan D. Ratliff, J. Andrew Bagnell, Martin A. Zinkevich

Promising approaches to structured learning problems have recently been developed in the maximum margin framework. Unfortunately, algorithms that are computationally and memory efficient enough to solve large scale problems have lagged behind. We propose using simple subgradient-based techniques for optimizing a regularized risk formulation of these problems in both online and batch settings, a...
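
A small sketch of a subgradient step on a regularized max-margin risk, using the multiclass hinge loss as a simple stand-in for the structured losses discussed in the paper; names and step sizes are illustrative:

```python
import numpy as np

def subgradient_step(W, x, y, lam=0.01, lr=0.1):
    """One (stochastic) subgradient step on the regularized multiclass hinge
    risk  lam/2 ||W||^2 + max_j [ W_j.x - W_y.x + 1[j != y] ]."""
    scores = W @ x
    margins = scores - scores[y] + 1.0
    margins[y] = 0.0                         # no loss for the true class
    j = int(np.argmax(margins))              # loss-augmented "inference"
    G = lam * W                              # subgradient of the regularizer
    if margins[j] > 0.0:
        G[j] += x                            # subgradient of the hinge term
        G[y] -= x
    return W - lr * G

# toy usage in the online setting: 3 classes, 4 features
rng = np.random.default_rng(5)
W = np.zeros((3, 4))
prototypes = rng.normal(size=(3, 4))
for t in range(2000):
    y = int(rng.integers(3))
    x = prototypes[y] + 0.3 * rng.normal(size=4)
    W = subgradient_step(W, x, y, lr=1.0 / (1 + 0.01 * t))
```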

2010
Ofer Dekel, Claudio Gentile, Karthik Sridharan

We present a new online learning algorithm in the selective sampling framework, where labels must be actively queried before they are revealed. We prove bounds on the regret of our algorithm and on the number of labels it queries when faced with an adaptive adversarial strategy of generating the instances. Our bounds both generalize and strictly improve over previous bounds in similar settings....
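
As an illustration of the selective sampling setting (not the authors' algorithm or its analyzed query rule), the sketch below queries a label only when the current margin is small, with an assumed threshold schedule `b / sqrt(t + 1)`:

```python
import numpy as np

def selective_sampling_round(w, x, get_label, t, b=1.0, lr=0.1):
    """Query the label only when the current prediction is uncertain
    (small |margin|); otherwise predict and move on without a label."""
    margin = w @ x
    query = abs(margin) <= b / np.sqrt(t + 1)   # uncertainty threshold shrinks
    if not query:
        return w, False
    y = get_label()                              # label is revealed only here
    if y * margin < 1.0:                         # hinge-style margin check
        w = w + lr * y * x
    return w, True

# toy usage: count how many labels are actually queried
rng = np.random.default_rng(6)
w, queried = np.zeros(5), 0
for t in range(2000):
    x = rng.normal(size=5)
    y = 1.0 if x @ np.array([1, 1, 0, 0, 0]) > 0 else -1.0
    w, asked = selective_sampling_round(w, x, lambda: y, t)
    queried += asked
print(queried, "labels queried out of 2000 rounds")
```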

Journal: CoRR, 2015
Dayong Wang, Pengcheng Wu, Peilin Zhao, Steven C. H. Hoi

The amount of data in our society has been exploding in today's era of big data. In this paper, we address several open challenges of big data stream classification, including high volume, high velocity, high dimensionality, high sparsity, and high class-imbalance. Many existing studies in the data mining literature solve data stream classification tasks in a batch learning setting, which suffers...
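
A toy sketch of handling two of these challenges, sparsity and class imbalance, in a single online round: examples are sparse feature dicts and mistakes on the rare class carry a higher cost. The function name and `pos_weight` value are assumptions, not the framework proposed in the paper:

```python
import random
from collections import defaultdict

def cost_sensitive_sparse_update(w, x, y, lr=0.1, pos_weight=5.0):
    """One online round for a sparse, class-imbalanced stream: x is a
    {feature: value} dict (high-dimensional but sparse), and mistakes on
    the rare positive class are weighted more heavily."""
    margin = y * sum(w[f] * v for f, v in x.items())
    if margin >= 1.0:
        return w                                # no hinge-loss violation
    cost = pos_weight if y > 0 else 1.0         # class-imbalance weighting
    for f, v in x.items():
        w[f] += lr * cost * y * v               # touch only nonzero features
    return w

# toy usage: ~1 positive per 10 negatives, 10_000 possible features
random.seed(7)
w = defaultdict(float)
for _ in range(5000):
    y = 1 if random.random() < 0.1 else -1
    x = {random.randrange(10_000): 1.0 for _ in range(20)}
    if y > 0:
        x[0] = 1.0                              # feature 0 marks positives (toy signal)
    w = cost_sensitive_sparse_update(w, x, y)
print(round(w[0], 2))                            # weight on the informative feature
```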

2011
Shai Shalev-Shwartz

In this lecture we describe a different model of learning which is called online learning. Online learning takes place in a sequence of consecutive rounds. To demonstrate the online learning model, consider again the papaya tasting problem. On each online round, the learner first receives an instance (the learner buys a papaya and knows its shape and color, which form the instance). Then, the l...
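
The round structure described here can be written down directly; the sketch below is a generic protocol loop with a toy learner and "papaya" environment, all names being illustrative:

```python
def online_learning_protocol(learner, environment, rounds):
    """The abstract online-learning loop: on each round the learner receives
    an instance, predicts, the true label is then revealed, and the learner
    suffers a loss and may update itself."""
    total_loss = 0.0
    for t in range(rounds):
        x = environment.instance(t)       # e.g. the papaya's shape and color
        y_hat = learner.predict(x)
        y = environment.label(t, x)       # revealed only after the prediction
        total_loss += float(y_hat != y)   # 0-1 loss on this round
        learner.update(x, y)
    return total_loss

class MajorityLearner:
    """Toy learner: predicts whichever label it has seen most often so far."""
    def __init__(self):
        self.counts = {True: 0, False: 0}
    def predict(self, x):
        return self.counts[True] >= self.counts[False]
    def update(self, x, y):
        self.counts[y] += 1

class PapayaStand:
    """Toy environment: a papaya is tasty iff it is soft (a boolean feature)."""
    def instance(self, t):
        return {"soft": t % 3 != 0, "color": "green" if t % 2 else "yellow"}
    def label(self, t, x):
        return x["soft"]

print(online_learning_protocol(MajorityLearner(), PapayaStand(), rounds=30))
```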

2017
Khanh Nguyen

Max-margin and kernel methods are dominant approaches for solving many tasks in machine learning. However, the paramount question is how to solve the model selection problem in these methods. This becomes urgent in the online learning context. Grid search is a common approach, but it turns out to be highly problematic in real-world applications. Our approach is to view max-margin and kernel methods under a ...

Chart: number of search results per year
