Search results for: correct
Number of results: 120,721
The PAC-learning model is distribution-independent in the sense that the learner must reach a learning goal with a limited number of labeled random examples without any prior knowledge of the underlying domain distribution. In order to achieve this, one needs generalization error bounds that are valid uniformly for every domain distribution. These bounds are (almost) tight in the sense that the...
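The distribution-free bounds this abstract refers to can be illustrated, for a finite class $H$ and $m$ i.i.d. examples, by the classical Hoeffding-plus-union-bound form (a standard textbook statement, not a result claimed by this particular paper):

```latex
\Pr\!\left[\,\forall h \in H:\;
\mathrm{err}_D(h) \;\le\; \widehat{\mathrm{err}}_S(h)
+ \sqrt{\frac{\ln|H| + \ln(1/\delta)}{2m}}\,\right] \;\ge\; 1-\delta
```

Because the right-hand side depends only on $|H|$, $m$, and $\delta$, the guarantee holds uniformly over every domain distribution $D$, which is exactly the distribution-independence the abstract describes.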
One can learn any hypothesis class H with O(log |H|) labeled examples. Alas, learning with so few examples requires saving the examples in memory, and this requires |X| memory states, where X is the set of all labeled examples. This motivates the question of how many labeled examples are needed in case the memory is bounded. Previous work showed, using techniques such as linear algebra and Fou...
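The O(log |H|) claim can be sketched with the textbook consistent-learner argument: keep every hypothesis that agrees with the sample, and the version space shrinks fast. The finite class below (threshold functions on a small domain) and the target are illustrative assumptions; note that the learner stores the entire sample, which is precisely the memory cost the abstract questions.

```python
import math
import random

# Hypothetical finite class: thresholds h_t(x) = 1 iff x >= t on domain {0,...,99}.
# A consistent learner needs only m = O((ln|H| + ln(1/delta)) / eps) samples.
random.seed(0)
DOMAIN = list(range(100))
H = list(range(101))                 # candidate thresholds, so |H| = 101
target_t = 37                        # hypothetical target concept

def h(t, x):
    return int(x >= t)

eps, delta = 0.1, 0.05
m = math.ceil((math.log(len(H)) + math.log(1 / delta)) / eps)   # ~77 samples

# The learner must memorize the whole sample -- the abstract's memory bottleneck.
sample = [(x, h(target_t, x)) for x in random.choices(DOMAIN, k=m)]
version_space = [t for t in H if all(h(t, x) == y for x, y in sample)]
print(m, len(version_space))
```

Every surviving hypothesis has true error at most eps with probability at least 1 - delta under the sampling distribution; the bounded-memory question is whether the same sample count suffices when the learner cannot store the sample.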
We introduce a framework for class noise, in which most of the known class noise models for the PAC setting can be formulated. Within this framework, we study properties of noise models that enable learning of concept classes of finite VC-dimension with the Empirical Risk Minimization (ERM) strategy. We introduce simple noise models for which classical ERM is not successful. Aiming at a more ge...
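The simplest instance of such a class-noise model is random classification noise: each label is flipped independently with rate eta < 1/2, and ERM minimizes error on the noisy sample. The threshold class, target, and parameters below are illustrative assumptions, not the paper's models.

```python
import random

# Random-classification-noise (RCN) sketch: labels flip with probability eta.
# With enough data, the target still has the lowest *noisy* empirical error,
# so plain ERM succeeds for this simple noise model.
random.seed(1)
H = list(range(11))                  # thresholds on the domain {0,...,9}
target_t, eta, m = 4, 0.2, 5000     # illustrative assumptions

def h(t, x):
    return int(x >= t)

sample = []
for _ in range(m):
    x = random.randrange(10)
    y = h(target_t, x)
    if random.random() < eta:        # class noise flips the observed label
        y = 1 - y
    sample.append((x, y))

def emp_err(t):
    return sum(h(t, x) != y for x, y in sample) / m

erm_t = min(H, key=emp_err)          # ERM over the finite class
print(erm_t, round(emp_err(erm_t), 3))
```

For harder noise models of the kind the abstract mentions, this argument breaks down and classical ERM can fail, which is what motivates the paper's more general strategies.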
This paper surveys some recent theoretical results on the efficiency of machine learning algorithms. The main tool described is the notion of Probably Approximately Correct (PAC) learning, introduced by Valiant. We define this learning model and then look at some of the results obtained in it. We then consider some criticisms of the PAC model and the extensions proposed to address these criti...
Abstract This paper shows that one cannot learn the probability of rare events without imposing further structural assumptions. The event of interest is that of obtaining an outcome outside the coverage of an i.i.d. sample from a discrete distribution. The probability of this event is referred to as the “missing mass”. The impossibility result can then be stated as: the missing mass is not dist...
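The classical point estimate of the missing mass is the Good-Turing estimator, N1/n, where N1 counts the symbols seen exactly once. A minimal sketch (the abstract's impossibility result says no such estimator can be accurate for every discrete distribution without further assumptions):

```python
from collections import Counter

def good_turing_missing_mass(sample):
    """Good-Turing estimate of the probability mass of unseen symbols."""
    counts = Counter(sample)
    n1 = sum(1 for c in counts.values() if c == 1)   # number of singletons
    return n1 / len(sample)

# "abracadabra" has two singletons (c, d) among 11 draws -> estimate 2/11
print(good_turing_missing_mass(list("abracadabra")))
```

The estimator is purely empirical, which is exactly why a distribution-free accuracy guarantee for it is the natural target of the impossibility result.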
Lévy processes play an important role in the stochastic process theory. However, since samples are non-i.i.d., statistical learning results based on the i.i.d. scenarios cannot be utilized to study the risk bounds for Lévy processes. In this paper, we present risk bounds for non-i.i.d. samples drawn from Lévy processes in the PAC-learning framework. In particular, by using a concentration inequ...
Learning quickly when irrelevant attributes abound: a new linear-threshold algorithm.
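This title is Littlestone's Winnow: a multiplicative-weight linear-threshold learner whose mistake bound grows only logarithmically in the number of irrelevant attributes. A minimal sketch for a monotone disjunction (the target, domain size, and parameters are illustrative assumptions):

```python
import itertools

n = 8
theta, alpha = n / 2, 2.0            # threshold and update factor (assumptions)

def target(x):                       # hypothetical target: x0 OR x1
    return int(x[0] or x[1])

w = [1.0] * n
full = list(itertools.product([0, 1], repeat=n))
while True:                          # cycle until a mistake-free pass;
    mistakes = 0                     # Winnow's mistake bound guarantees termination
    for x in full:
        pred = int(sum(wi * xi for wi, xi in zip(w, x)) >= theta)
        y = target(x)
        if pred == 1 and y == 0:     # false positive: demote active weights
            w = [wi / alpha if xi else wi for wi, xi in zip(w, x)]
            mistakes += 1
        elif pred == 0 and y == 1:   # false negative: promote active weights
            w = [wi * alpha if xi else wi for wi, xi in zip(w, x)]
            mistakes += 1
    if mistakes == 0:
        break
```

The multiplicative updates are what drive the weights of the n - 2 irrelevant attributes down quickly, giving the logarithmic dependence on n that the title advertises.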
It is best to learn a large theory in small pieces. An approach called "layered learning" starts by learning an approximately correct theory. The errors of this approximation are then used to construct a second-order "correcting" theory, which will again be only approximately correct. The process is iterated until some desired level of overall theory accuracy is met. The main advantage of this ...
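A numeric analogue of this iterate-and-correct scheme (an illustrative sketch under assumed target and weak learner, not the paper's algorithm): fit a crude model, fit a correcting model to its residual errors, and repeat, with each correcting layer itself only approximately correct.

```python
xs = [i / 10 for i in range(11)]

def f(x):                                   # hypothetical target function
    return 3 * x * x + 2 * x + 1

def fit_stump(xs, rs):
    """Weak learner: one threshold split, predicting the residual mean per side."""
    best = None
    for s in xs:
        left = [r for x, r in zip(xs, rs) if x <= s]
        right = [r for x, r in zip(xs, rs) if x > s]
        lm = sum(left) / len(left) if left else 0.0
        rm = sum(right) / len(right) if right else 0.0
        err = sum((r - (lm if x <= s else rm)) ** 2 for x, r in zip(xs, rs))
        if best is None or err < best[0]:
            best = (err, s, lm, rm)
    return best[1:]

layers = []
residual = [f(x) for x in xs]
for _ in range(20):                         # each layer corrects the previous errors
    s, lm, rm = fit_stump(xs, residual)
    layers.append((s, lm, rm))
    residual = [r - (lm if x <= s else rm) for x, r in zip(xs, residual)]

def predict(x):                             # the stacked "theory": sum of all layers
    return sum(lm if x <= s else rm for s, lm, rm in layers)

print(round(predict(0.5), 3), f(0.5))
```

Each pass strictly reduces the residual error until the desired accuracy is reached, mirroring the iteration the abstract describes.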
This theoretical paper is concerned with a rigorous non-asymptotic analysis of relational learning applied to a single network. Under suitable and intuitive conditions on features and clique dependencies over the network, we present the first probably approximately correct (PAC) bound for maximum likelihood estimation (MLE). To our best knowledge, this is the first sample complexity result of t...
Science ultimately seeks to reliably predict aspects of the future; but, how is this even possible in light of the logical paradox that making a prediction may cause the world to evolve in a manner that defeats it? We show how learning can naturally resolve this conundrum. The problem is studied within a causal or temporal version of the Probably Approximately Correct semantics, extended so tha...