Search results for: average error

Number of results: 608296

Journal: IEEE Trans. Signal Processing 2001
Timothy N. Davidson

In this paper, a large and flexible set of computationally efficient algorithms is developed for the design of waveforms for pulse amplitude modulation that provide robust performance in the presence of uncertainties in the channel and noise models. Performance is measured either by a sensitivity function for threshold detection or by the mean square error of the data estimate. For uncertaintie...
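
The robust-design idea here is minimax: choose the waveform that performs best under the worst admissible channel or noise model. A toy sketch of that selection rule, where the candidate designs, channel gains, and error model are all hypothetical stand-ins (not the paper's algorithm):

```python
# Toy minimax selection: pick the design whose worst-case mean square
# error over a finite set of admissible channel models is smallest.
candidates = [0.2, 0.5, 0.8]      # hypothetical waveform parameters
channels = [0.9, 1.0, 1.1]        # hypothetical admissible channel gains

def mse(design, gain):
    # Stand-in for the mean square error of the data estimate under a
    # given design/channel pair; illustrative only.
    return (design * gain - 0.5) ** 2 + 0.01 / design

robust_design = min(candidates,
                    key=lambda d: max(mse(d, g) for g in channels))
print(robust_design)  # the design with the best worst-case MSE
```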

1999
Jianfeng Feng

Abstract. A novel approach to estimating the worst-case generalisation error of the simple perceptron is introduced. It is well known that the generalisation error of the simple perceptron is of the form d/t with an unknown constant d which depends only on the dimension of the inputs, where t is the number of learned examples. Based upon extreme value theory in statistics we obtain an exact f...

Journal: J. UCS 1997
Walter Krämer

Rigorous a priori error bounds for floating-point computations are derived. We will show that using interval tools in combination with function and operator overloading such bounds can be computed on a computer automatically in a very convenient way. The bounds are of worst case type. They hold uniformly for the specified domain of input values. That means, whenever the floating-point computation is...
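
The combination described here (intervals plus operator overloading) can be illustrated in a few lines. A minimal sketch, not Krämer's library: a toy interval type whose overloaded operators round endpoints outward by one ulp, so ordinary-looking expressions automatically carry rigorous worst-case enclosures.

```python
import math

class Interval:
    """Closed interval [lo, hi] guaranteed to enclose the exact result."""

    def __init__(self, lo, hi=None):
        self.lo = lo
        self.hi = lo if hi is None else hi

    def __add__(self, other):
        # Round endpoints outward by one ulp so the enclosure survives
        # floating-point rounding of the endpoint arithmetic itself.
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(math.nextafter(min(p), -math.inf),
                        math.nextafter(max(p), math.inf))

    def __repr__(self):
        return f"[{self.lo!r}, {self.hi!r}]"

# Inputs become degenerate intervals; thanks to operator overloading the
# expression reads like ordinary arithmetic but yields rigorous bounds.
x = Interval(0.1)                 # encloses the double nearest to 0.1
y = (x + x) * Interval(3.0)
print(y)                          # rigorous enclosure of the exact result
```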

2015
Anqi Liu, Lev Reyzin, Brian D. Ziebart

Existing approaches to active learning are generally optimistic about their certainty with respect to data shift between labeled and unlabeled data. They assume that unknown datapoint labels follow the inductive biases of the active learner. As a result, the most useful datapoint labels—ones that refute current inductive biases—are rarely solicited. We propose a shift-pessimistic approach to a...

Journal: IEEE Trans. Information Theory 1988
Patrick Stevens

Abstract—The BCH algorithm can be extended to correct more errors than indicated by the BCH bound. In the first step of the decoding procedure, we correct a number of errors, corresponding to a particular case of the Hartmann-Tzeng bound. In the second step we aim at full error correction. A measure for the worst-case number of field elements of an extension field GF(2^m) that must be tested fo...
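
The "field elements that must be tested" refers to root searches over GF(2^m): error positions are located by exhaustively evaluating an error-locator polynomial at nonzero field elements (a Chien-search-style test). A minimal sketch over GF(2^4), with an illustrative locator polynomial that is not taken from the paper:

```python
# GF(2^4) with primitive polynomial x^4 + x + 1 (0b10011).
def gf_mul(a, b, poly=0b10011, m=4):
    """Carry-less multiply with reduction modulo the field polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << m):
            a ^= poly
    return r

def poly_eval(coeffs, x):
    """Evaluate a polynomial (lowest-order coefficient first) at x."""
    acc, power = 0, 1
    for c in coeffs:
        acc ^= gf_mul(c, power)
        power = gf_mul(power, x)
    return acc

# Hypothetical error-locator polynomial x^2 + 6x + 8, whose roots encode
# error positions. In the worst case all 2^m - 1 = 15 nonzero elements
# must be tested -- the kind of quantity the abstract bounds.
locator = [8, 6, 1]
roots = [x for x in range(1, 16) if poly_eval(locator, x) == 0]
print(roots)  # [2, 4]
```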

Journal: J. Complexity 2014
Josef Dick, Peter Kritzer, Friedrich Pillichshammer, Henryk Wozniakowski

We study multivariate L2-approximation for a weighted Korobov space of analytic periodic functions for which the Fourier coefficients decay exponentially fast. The weights are defined, in particular, in terms of two sequences a = {a_j} and b = {b_j} of positive real numbers bounded away from zero. We study the minimal worst-case error e^{L2-app,Λ}(n, s) of all algorithms that use n information evalu...
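
For reference, the minimal worst-case error in this setting is the usual information-based complexity quantity, stated generically here (the paper's exact normalization and notation may differ):

```latex
e^{L_2\text{-}\mathrm{app},\Lambda}(n,s)
  \;=\; \inf_{A_n}\; \sup_{\|f\|_{H_s} \le 1}
        \bigl\| f - A_n(f) \bigr\|_{L_2([0,1]^s)},
```

where the infimum ranges over all algorithms A_n using n information evaluations from the class Λ, and H_s is the weighted Korobov space of s-variate functions.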

2007
Tijl De Bie, John Shawe-Taylor

The multiple hypothesis testing (MHT) problem has long been tackled by controlling the family-wise error rate (FWER), which is the probability that any of the hypotheses tested is unjustly rejected. The best known method to achieve FWER control is the Bonferroni correction, but more powerful techniques such as step-up and step-down methods exist. A particular challenge to be dealt with in MHT p...
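
As a concrete instance of the FWER-control methods named here, a minimal sketch of the Bonferroni correction alongside Holm's step-down procedure (the classic step-down refinement; the p-values are illustrative):

```python
def bonferroni(pvals, alpha=0.05):
    """Reject H_i iff p_i <= alpha / m; controls FWER at level alpha."""
    m = len(pvals)
    return [p <= alpha / m for p in pvals]

def holm_step_down(pvals, alpha=0.05):
    """Holm's step-down: compare the k-th smallest p-value against
    alpha / (m - k + 1), stopping at the first failure."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break
    return reject

pvals = [0.001, 0.015, 0.03, 0.2]
print(bonferroni(pvals))      # [True, False, False, False]
print(holm_step_down(pvals))  # [True, True, False, False] -- more powerful
```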

2011
Richard A. Ashley, Douglas M. Patterson

Jiang and Tian (2010) have estimated an ARFIMA model for stock return volatility. We argue that this result does not imply actual 'long memory' in such time series, as any kind of instability in the population mean yields apparent fractional integration as a statistical artifact. Alternative high-pass filters for studying stock market volatility data are suggested.
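
The artifact the authors describe is easy to reproduce: a short-memory series with a single shift in its population mean exhibits sample autocorrelations that decay very slowly, mimicking fractional integration. A minimal simulation sketch (illustrative, not the authors' code):

```python
import random

random.seed(0)
n = 2000
# White noise with a level shift in the population mean halfway through.
x = [random.gauss(0.0 if t < n // 2 else 1.0, 1.0) for t in range(n)]

mean = sum(x) / n
var = sum((v - mean) ** 2 for v in x) / n

def acf(lag):
    """Sample autocorrelation at the given lag."""
    c = sum((x[t] - mean) * (x[t + lag] - mean) for t in range(n - lag)) / n
    return c / var

# For pure white noise these would be near zero at every lag; the mean
# shift keeps them sizable far out -- the statistical artifact at issue.
for lag in (1, 10, 50, 100):
    print(lag, round(acf(lag), 3))
```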

Journal: Foundations of Computational Mathematics 2005
Ernesto De Vito, Andrea Caponnetto, Lorenzo Rosasco

We investigate the problem of model selection for learning algorithms depending on a continuous parameter. We propose a model selection procedure based on a worst case analysis and a data-independent choice of the parameter. For the regularized least-squares algorithm we bound the generalization error of the solution by a quantity depending on a few known constants and we show that the corresponding mo...
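
The regularized least-squares algorithm in question admits a closed-form solution (ridge regression), with the regularization parameter playing the role of the continuous parameter being selected. A minimal sketch, where the data and the candidate grid are illustrative:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Solve min_w ||Xw - y||^2 + lam * n * ||w||^2 in closed form."""
    n, d = X.shape
    return np.linalg.solve(X.T @ X + lam * n * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w_true = np.array([1.0, -2.0, 0.0, 0.5, 3.0])
y = X @ w_true + 0.1 * rng.normal(size=100)

# A data-independent grid of candidate parameters, in the spirit of a
# worst-case (rather than data-driven) choice.
for lam in (1e-3, 1e-2, 1e-1):
    w = ridge_fit(X, y, lam)
    print(lam, round(float(np.mean((X @ w - y) ** 2)), 5))
```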

Journal: CoRR 2018
Mohamed Hajaj, Duncan Fyfe Gillies

Deep convolutional networks have proved very successful with big datasets such as the 1000-class ImageNet. Results show that the error rate increases slowly as the size of the dataset increases. Experiments presented here may explain why these networks are very effective in solving big recognition problems. If the big task is made up of multiple smaller tasks, then the results show the ability ...

[Chart: number of search results per year]