Halving Algorithm

The Halving Algorithm is perhaps the simplest algorithm for On-line Prediction. Its introduction is credited to Barzdin and Freivald (1972), Angluin (1988), and Littlestone (1988); the name was suggested by Littlestone (1988).

Suppose the sequence of observations is known to belong to a finite class of {$K$} sequences, which will be called experts. The following algorithm (the Halving Algorithm) makes at most {$\log_2 K$} errors when predicting this sequence. Predict with the majority of the {$K$} experts in the class until you make a mistake. At least half of the experts will also have made a mistake; remove all mistaken experts from the class. Predict with the majority of the remaining experts until you make a mistake, and so on. Since each mistake removes at least half of the remaining experts and the class always contains the true sequence, at most {$\log_2 K$} mistakes can occur.
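
A minimal Python sketch of this procedure for binary outcomes is given below; representing each expert as a fixed 0/1 sequence, and names such as halving_run, are illustrative choices rather than part of the original description.

    def halving_run(experts, outcomes):
        """Run the Halving Algorithm; return the number of mistakes it makes.

        experts  -- list of 0/1 sequences, one per expert; one of them is
                    assumed to predict the outcomes perfectly
        outcomes -- the observed 0/1 sequence
        """
        alive = set(range(len(experts)))          # experts still in the class
        mistakes = 0
        for t, y in enumerate(outcomes):
            # Predict with the majority vote of the remaining experts.
            votes_for_one = sum(experts[i][t] for i in alive)
            prediction = 1 if 2 * votes_for_one > len(alive) else 0
            if prediction != y:
                mistakes += 1
                # At least half of the remaining experts were also wrong:
                # remove every expert whose prediction was incorrect.
                alive = {i for i in alive if experts[i][t] == y}
        return mistakes

    # Example: three experts, the second of which is perfect.
    experts = [[0, 0, 1, 1, 0],
               [1, 0, 1, 0, 1],
               [1, 1, 1, 0, 0]]
    outcomes = [1, 0, 1, 0, 1]
    print(halving_run(experts, outcomes))         # 1 mistake <= log2(3)

Ties in the majority vote are broken towards 0 here; any fixed tie-breaking rule preserves the mistake bound, since a mistake still eliminates at least half of the remaining experts.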

The Halving Algorithm assumes that one of the experts is perfect ("knows" the sequence to be observed). The Weighted Majority Algorithm extends the Halving Algorithm beyond this case.
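
For comparison, the sketch below shows the standard multiplicative-weights form of the Weighted Majority Algorithm, in which mistaken experts are down-weighted by a factor {$\beta$} rather than removed; the value {$\beta = 0.5$} and the function name are illustrative assumptions, not taken from this article.

    def weighted_majority_run(experts, outcomes, beta=0.5):
        """Run Weighted Majority; return the number of mistakes it makes."""
        weights = [1.0] * len(experts)
        mistakes = 0
        for t, y in enumerate(outcomes):
            # Predict with the weighted majority vote of all experts.
            weight_on_one = sum(w for w, e in zip(weights, experts) if e[t] == 1)
            prediction = 1 if 2 * weight_on_one > sum(weights) else 0
            if prediction != y:
                mistakes += 1
            # Down-weight every expert that was wrong on this round, so the
            # algorithm tolerates the case where no expert is perfect.
            weights = [w * beta if e[t] != y else w
                       for w, e in zip(weights, experts)]
        return mistakes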

Bibliography

  • Dana Angluin. Queries and concept learning. Machine Learning, 2:319–342, 1988.
  • J. Barzdin and R. Freivald. On the prediction of general recursive functions. Soviet Mathematics Doklady, 13:1224–1228, 1972.
  • Nick Littlestone. Learning quickly when irrelevant attributes abound: a new linear-threshold algorithm. Machine Learning, 2:285–318, 1988.