Twice Universal Linear Prediction of Individual Sequences - Information Theory, 1998. Proceedings. 1998 IEEE International Symposium on
Author
Abstract
We present a “twice universal” linear prediction algorithm, universal over both the unknown parameters and the model orders, in which the sequentially accumulated square prediction error is as good as that of any linear predictor of order up to some M, for any individual sequence. The extra loss comprises a parameter “redundancy” term proportional to (p/2) n^{-1} ln(n), and a model-order “redundancy” term proportional to n^{-1} ln(M), where p is the model order we compare against and n is the data length. The computational complexity of the algorithm is about that of a recursive least squares (RLS) linear predictor of order M.
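The structure described in the abstract can be illustrated with a short sketch: run RLS predictors of every order p = 1..M in parallel, and mix their outputs with exponential weights driven by each predictor's accumulated square loss. This is only a minimal illustration of the general idea; the function name, the weighting constant c, and the RLS initialization delta are illustrative choices, not taken from the paper.

```python
import numpy as np

def twice_universal_predict(x, M=4, c=1.0, delta=100.0):
    """Sketch: exponentially weighted mixture of RLS predictors of orders
    1..M under square loss. Hyperparameters c and delta are assumptions."""
    n = len(x)
    # Per-order RLS state: coefficients theta_p and inverse-correlation P_p.
    theta = [np.zeros(p) for p in range(1, M + 1)]
    P = [np.eye(p) * delta for p in range(1, M + 1)]
    loss = np.zeros(M)   # accumulated square loss of each order-p predictor
    preds = np.zeros(n)
    for t in range(n):
        # Each order-p predictor forecasts x[t] from the last p samples.
        xhat = np.zeros(M)
        for p in range(1, M + 1):
            u = np.array([x[t - i] if t - i >= 0 else 0.0
                          for i in range(1, p + 1)])
            xhat[p - 1] = theta[p - 1] @ u
        # Exponential weighting over model orders (the "second" universality);
        # subtracting loss.min() keeps the exponentials from underflowing.
        w = np.exp(-(loss - loss.min()) / (2.0 * c))
        w /= w.sum()
        preds[t] = w @ xhat
        # Observe x[t]; update per-order losses and each RLS predictor.
        loss += (xhat - x[t]) ** 2
        for p in range(1, M + 1):
            u = np.array([x[t - i] if t - i >= 0 else 0.0
                          for i in range(1, p + 1)])
            k = P[p - 1] @ u / (1.0 + u @ P[p - 1] @ u)
            theta[p - 1] = theta[p - 1] + k * (x[t] - theta[p - 1] @ u)
            P[p - 1] = P[p - 1] - np.outer(k, u @ P[p - 1])
    return preds
```

Running all M predictors costs roughly M times a single RLS pass in this naive form; the paper's point is that a lattice-style implementation brings the total cost down to about that of one order-M RLS predictor.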
Similar papers
Twice Universal Linear Prediction of Individual Sequences
We present a linear prediction algorithm which is “twice universal,” over parameters and model orders, for individual sequences under the square-error loss function. The sequentially accumulated mean-square prediction error is as good as any linear predictor of order up to some M. Following an approach taken in many prediction problems we transform the linear prediction problem into a sequentia...
Universal Data Compression and Linear Prediction
The relationship between prediction and data compression can be extended to universal prediction schemes and universal data compression. Recent work shows that minimizing the sequential squared prediction error for individual sequences can be achieved using the same strategies which minimize the sequential codelength for data compression of individual sequences. Defining a “probability” as an ex...
Finite memory universal predictability of binary sequences - Information Theory, 2003. Proceedings. IEEE International Symposium on
The problem of predicting the next outcome of an individual binary sequence, under the constraint that the universal predictor has a finite memory, is explored. The loss function considered is the regular prediction (0-1, or Hamming distance) loss, and the main reference class is the set of constant predictors. We analyze the performance of deterministic time-invariant K-state universal predictors...
Universal Prediction
This paper consists of an overview on universal prediction from an information-theoretic perspective. Special attention is given to the notion of probability assignment under the self-information loss function, which is directly related to the theory of universal data compression. Both the probabilistic setting and the deterministic setting of the universal prediction problem are described with...
Optimal Sequential Vector Quantization of Markov Sources - Information Theory, 1998. Proceedings. 1998 IEEE International Symposium on
The problem of optimal sequential vector quantization of Markov sources is cast as a stochastic control problem with partial observations and constraints, leading to useful existence results for optimal codes and their characterizations.