Validity is a desideratum of on-line prediction that is usually attained automatically by conformal predictors and their modifications. For conformal predictors in the batch prediction protocol, validity means having the right coverage probability. In the on-line prediction protocol, validity additionally includes the independence of errors. Inductive conformal predictors enjoy the same validity properties as conformal predictors. For a nice illustration of the property of validity, see Figure 1 in Wasserman (2011); see also Figure 11.1 in Vovk (2013).

The notion of validity in conformal prediction is non-asymptotic: the coverage probability should be exactly right. This is in the spirit of Fisher's first and most important book Statistical Methods for Research Workers, whose preface says:

Little experience is sufficient to show that the traditional machinery of statistical processes is wholly unsuited to the needs of practical research. Not only does it take a cannon to shoot a sparrow, but it misses the sparrow! The elaborate mechanism built on the theory of infinitely large samples is not accurate enough for simple laboratory data. Only by systematically tackling small sample problems on their merits does it seem possible to apply accurate tests to practical data. Such at least has been the aim of this book.

(Fisher 1925; quoted by Hald 1998, Section 28.11).

The frequentist nature of the property of validity

Conformal predictors have the right coverage probability, and this property is often referred to as frequentist (as opposed to stronger Bayesian properties of validity, which require stronger assumptions). Shafer (2017, Section 5.2) argues that this terminology is not justified. In the case of conformal prediction under the on-line prediction protocol, however, the property of validity is genuinely frequentist, since the errors at different trials are made independently. The right frequencies show up as straight lines on the usual validity plots: see, e.g., OCM WP 1, Figure 3, and OCM WP 2, Figure 2.
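This behaviour is easy to reproduce in a small simulation. The sketch below (an illustration, not taken from the text above: the one-dimensional Gaussian data stream and the conformity score |z − mean| are assumed choices) runs a conformal predictor in the on-line protocol at significance level ε and records, at each trial, whether the realized observation falls outside the level-ε prediction set, i.e., whether its conformal p-value is at most ε. Under exchangeability the error probability at each trial is at most ε, so the empirical error rate settles near ε and the cumulative error count grows as a straight line of slope roughly ε.

```python
import random

random.seed(0)
eps = 0.1          # significance level (chosen for illustration)
N = 2000           # number of on-line trials
zs = []            # observations seen so far
errors = []        # error indicator at each trial

for n in range(N):
    z = random.gauss(0.0, 1.0)   # exchangeable (here i.i.d.) observations
    zs.append(z)
    # Conformity score: distance to the mean of all observations so far
    # (an illustrative choice; validity holds for any conformity measure).
    m = sum(zs) / len(zs)
    scores = [abs(x - m) for x in zs]
    # Conformal p-value of the new observation among all observations
    p = sum(1 for a in scores if a >= scores[-1]) / len(scores)
    # An error occurs when the new observation is excluded from the
    # level-eps prediction set, i.e., its p-value is at most eps.
    errors.append(p <= eps)

err_rate = sum(errors) / len(errors)
print(f"empirical error rate: {err_rate:.3f} (target at most {eps})")
```

Because the errors at different trials are independent in the on-line protocol, plotting the cumulative sum of `errors` against the trial number reproduces the straight lines seen on the usual validity plots.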


  • Ronald A. Fisher (1925). Statistical Methods for Research Workers. Oliver and Boyd, Edinburgh.
  • Anders Hald (1998). A History of Mathematical Statistics from 1750 to 1930. Wiley, New York.
  • Glenn Shafer (2017). Bayesian, Fiducial, Frequentist. Game-theoretic probability project, Working Paper 50.
  • Vladimir Vovk (2013). Kernel Ridge Regression. In: Empirical Inference: Festschrift in Honor of Vladimir N. Vapnik (ed. by Bernhard Schölkopf, Zhiyuan Luo, and Vladimir Vovk), Chapter 11. Springer, Berlin.
  • Larry Wasserman (2011). Frasian Inference. Statistical Science 26:322–325.