Most algorithmic methods proposed for processing massive data streams examine each new item, perform a small number of operations per item, and maintain only a small amount of memory, while still supporting the required analyses. In many situations, however, the per-item update speed is critical and not every item can be examined in detail. In practice, this has been addressed by sampling only a subset of ...
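As a concrete illustration of this idea (not necessarily the specific scheme referred to above), the sketch below uses reservoir sampling, a standard technique that maintains a uniform random sample of fixed size from a stream of unknown length while spending only constant expected work per item and storing only the sampled items; the function name and parameters are illustrative.

```python
import random

def reservoir_sample(stream, k, rng=random):
    """Keep a uniform random sample of k items from a stream of unknown length.

    Each item is touched once with O(1) expected work, and only the k sampled
    items are kept in memory, matching the per-item cost and space constraints
    described above.
    """
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            # Fill the reservoir with the first k items.
            reservoir.append(item)
        else:
            # Pick a position j uniformly in [0, i]; with probability k/(i+1)
            # the new item replaces an existing sample.
            j = rng.randint(0, i)
            if j < k:
                reservoir[j] = item
    return reservoir

# Example: sample 10 items from a stream of one million integers.
sample = reservoir_sample(range(1_000_000), k=10)
print(sample)
```

Downstream analyses are then run on the retained sample rather than on every item, trading some accuracy for a much lower per-item processing cost.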