Similar Resources
Weighing Hypotheses: Incremental Learning from Noisy Data
Incremental learning from noisy data presents dual challenges: that of evaluating multiple hypotheses incrementally and that of distinguishing errors due to noise from errors due to faulty hypotheses. This problem is critical in such areas of machine learning as concept learning, inductive programming, and sequence prediction. I develop a general, quantitative method for weighing the merits of ...
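The abstract is truncated, so the paper's actual method is not shown here. As a loose illustration of "weighing hypotheses" on a noisy stream, the sketch below uses a generic multiplicative-weights scheme (a standard textbook technique, not the cited paper's method): every misprediction decays a hypothesis's weight, so noise penalizes all hypotheses occasionally, while a systematically faulty hypothesis decays much faster.

```python
# Illustrative only: multiplicative-weights scoring of competing
# hypotheses on a noisy data stream. NOT the cited paper's method.
import random

def weigh_hypotheses(hypotheses, stream, eta=0.1):
    """Decay a hypothesis's weight on each misprediction; return
    normalized weights as the relative merit of each hypothesis."""
    weights = [1.0] * len(hypotheses)
    for x, y in stream:                      # (input, possibly noisy label)
        for i, h in enumerate(hypotheses):
            if h(x) != y:                    # mistake: multiplicative penalty
                weights[i] *= (1.0 - eta)
    total = sum(weights)
    return [w / total for w in weights]

# Example: parity concept with 10% label noise, one correct and one
# faulty hypothesis (both hypothetical stand-ins).
random.seed(0)
h_even = lambda x: x % 2 == 0            # correct concept
h_small = lambda x: x < 50               # faulty concept
data = [(x, (x % 2 == 0) ^ (random.random() < 0.1)) for x in range(200)]
print(weigh_hypotheses([h_even, h_small], data))
```

On this stream the faulty hypothesis errs on roughly half the examples, so its normalized weight collapses toward zero while the correct one retains nearly all the mass despite the noise.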
Incremental Learning from Positive Data
The present paper deals with a systematic study of incremental learning algorithms. The general scenario is as follows. Let c be any concept; then every infinite sequence of elements exhausting c is called a positive presentation of c. An algorithmic learner successively takes as input one element of a positive presentation as well as its previously made hypothesis at a time, and outputs a new hypothesis ...
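A minimal sketch of the learning protocol this abstract describes: the learner consumes one element of a positive presentation at a time together with its previous hypothesis, and emits a new hypothesis. The integer-interval learner below is an illustrative stand-in, not taken from the cited paper.

```python
# Sketch of the incremental-learning protocol: hypothesis_{t+1} is a
# function only of (hypothesis_t, next positive example).

def interval_learner(prev_hypothesis, element):
    """Hypotheses are closed integer intervals (lo, hi); each positive
    example widens the interval just enough to cover it."""
    if prev_hypothesis is None:
        return (element, element)
    lo, hi = prev_hypothesis
    return (min(lo, element), max(hi, element))

def run(learner, positive_presentation):
    h = None
    for x in positive_presentation:   # one element at a time
        h = learner(h, x)
    return h

print(run(interval_learner, [5, 3, 9, 7, 4]))   # -> (3, 9)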
Iterative Concept Learning from Noisy Data
In the present paper, we study iterative learning of indexable concept classes from noisy data. We distinguish between learning from positive data only and learning from positive and negative data; synonymously, learning from text and informant, respectively. Following [20], a noisy text (a noisy informant) for some target concept contains every correct data item infinitely often while in addition ...
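The noise model stated in the abstract (correct items recur infinitely often, noise items only finitely often) suggests a simple frequency-filtering idea, sketched below. This is a generic illustration of that separation, not the paper's algorithm; note also that a strictly iterative learner would have to fold the occurrence counts into its hypothesis, whereas the Counter here is kept separate for clarity.

```python
# Sketch: in a noisy positive presentation, counting occurrences
# eventually separates correct items (which recur forever) from
# noise items (which occur only finitely often). Illustrative only.
from collections import Counter

def noisy_text_learner(presentation, threshold=3):
    counts = Counter()
    hypothesis = set()
    for x in presentation:
        counts[x] += 1
        if counts[x] >= threshold:   # seen often enough: treat as correct
            hypothesis.add(x)
    return hypothesis

# Items 1 and 2 recur (correct); 99 appears once (noise).
print(noisy_text_learner([1, 99, 2, 1, 2, 1, 2, 1, 2]))  # -> {1, 2}
```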
Incremental Bayes learning with prior evolution for tracking nonstationary noise statistics from noisy speech data
In this paper, a new approach to sequential estimation of the time-varying prior parameters of nonstationary noise is presented using the log-spectral or cepstral data of the corrupted noisy speech. Incremental Bayes learning is developed to provide a basis for noise prior evolution, recursively updating the noise prior statistics (mean and variance) using the approximate Gaussian posterior comp...
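The recursive mean-and-variance update the abstract mentions has a conjugate Normal-Normal core, sketched below. The paper's actual algorithm operates on corrupted speech with an approximate posterior; this sketch assumes clean, hypothetical noise observations with known observation variance and shows only the closed-form Bayes step.

```python
# Hedged sketch of one recursive Bayes step for noise statistics:
# Normal prior N(mu, var) on the noise mean, Normal likelihood with
# known variance obs_var. The posterior becomes the next frame's prior.

def update_noise_prior(mu, var, obs, obs_var):
    k = var / (var + obs_var)          # Kalman-style gain
    mu_new = mu + k * (obs - mu)       # posterior mean
    var_new = (1.0 - k) * var          # posterior variance (shrinks)
    return mu_new, var_new

mu, var = 0.0, 10.0                    # vague initial prior
for frame in [2.1, 1.8, 2.4, 2.0]:     # hypothetical noise observations
    mu, var = update_noise_prior(mu, var, frame, obs_var=1.0)
print(mu, var)
```

To track nonstationary noise, one would typically inflate the prior variance between frames (e.g., var += q for some process-noise constant q) so that older evidence decays rather than accumulating forever.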
Learning GP-trees from Noisy Data
We discuss the problem of model selection in Genetic Programming using the framework provided by Statistical Learning Theory, i.e., Vapnik-Chervonenkis (VC) theory. We present empirical comparisons between classical statistical methods (AIC, BIC) for model selection and the Structural Risk Minimization method (based on VC theory) for symbolic regression problems. Empirical comparisons of different ...
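The classical scores the abstract compares have standard closed forms under a Gaussian error assumption, sketched below (lower is better). The SRM alternative replaces the parameter-count penalty with a VC-dimension-based bound; the cited paper's experimental setup is not reproduced here, and the numbers in the example are hypothetical.

```python
# Standard AIC/BIC for a regression model with n samples, k free
# parameters, and Gaussian errors (log-likelihood up to a constant).
import math

def aic_bic(residual_sum_squares, n, k):
    log_lik = -0.5 * n * math.log(residual_sum_squares / n)
    aic = 2 * k - 2 * log_lik
    bic = k * math.log(n) - 2 * log_lik
    return aic, bic

# Compare a 2-parameter and a 5-parameter model on 100 points: the
# larger model fits slightly better but pays a larger penalty.
print(aic_bic(40.0, n=100, k=2))
print(aic_bic(38.5, n=100, k=5))
```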
Journal
Journal title: Machine Learning
Year: 1986
ISSN: 0885-6125, 1573-0565
DOI: 10.1007/bf00116895