# Venn Predictor

*Venn Predictor* (or *Venn Machine*) is a multiprobability classification system. The term *multiprobability* means that we announce several probability distributions for the new label rather than a single one.

Venn Predictor divides the old examples into categories, puts the current example into one of the categories, and then uses the frequencies of labels in the chosen category as probabilities for the current object's label.

We observe a sequence of examples, where each example is a pair (object, label). Objects $x_n \in \mathbf{X}$ are drawn from a measurable space $\mathbf{X}$ called the *object space*, and labels $y_n \in \mathbf{Y}$ are drawn from a measurable space $\mathbf{Y}$ called the *label space*. Their Cartesian product $\mathbf{Z} = \mathbf{X} \times \mathbf{Y}$ is called the *example space*.

Venn Predictor deals with the case of classification, i.e., the case when $\mathbf{Y}$ is finite. Let us consider a general multiprobability prediction protocol. At each step of the protocol, Reality announces $x_n \in \mathbf{X}$, Predictor announces $P_n \subseteq \mathbf{P(Y)}$ (where $\mathbf{P(Y)}$ is the set of all probability distributions on $\mathbf{Y}$), and then Reality announces $y_n \in \mathbf{Y}$.
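As an illustration, this protocol can be sketched as a simple interaction loop. The toy label space `LABELS` and the `uniform` predictor below are hypothetical placeholders chosen for the sketch, not part of the definition.

```python
# A minimal sketch of the multiprobability prediction protocol.
# LABELS plays the role of the finite label space Y (illustrative assumption).
LABELS = ["a", "b"]

def run_protocol(stream, predictor):
    """At each step: Reality announces x_n, Predictor announces a set P_n
    of probability distributions on Y, then Reality announces y_n."""
    history = []        # examples (x_1, y_1), ..., (x_{n-1}, y_{n-1}) seen so far
    announcements = []
    for x_n, y_n in stream:
        P_n = predictor(history, x_n)   # Predictor announces P_n, a subset of P(Y)
        announcements.append(P_n)
        history.append((x_n, y_n))      # Reality announces the label y_n
    return announcements

# A trivial predictor announcing a single uniform distribution each step:
def uniform(history, x):
    return [{y: 1.0 / len(LABELS) for y in LABELS}]

print(run_protocol([(0.3, "a"), (0.7, "b")], uniform))
# → [[{'a': 0.5, 'b': 0.5}], [{'a': 0.5, 'b': 0.5}]]
```

A Venn Predictor is a particular, non-trivial choice of `predictor` in this loop, defined below.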

To define Venn Predictor formally, we need to introduce the concept of a *taxonomy*. This is a sequence $A_n$, $n = 1,2,\ldots$, where each $A_n$ is a measurable finite partition of the set $\mathbf{Z}^{n-1}\times\mathbf{Z}$. We write $A_n(\omega)$ for the element of the partition $A_n$ that contains $\omega \in \mathbf{Z}^{n-1}\times\mathbf{Z}$. A Venn Predictor can be defined for every taxonomy $\left\{A_n\right\}_{n \in \mathbb{N}}$.

Suppose we have observed examples $(x_1,y_1),(x_2,y_2),\ldots,(x_{n-1},y_{n-1})$ and a new object $x_n$; we denote $z_i := (x_i, y_i)$. Consider the case when the current object has label $y$, and write (for now) $z_n = (x_n,y)$. At each step of the protocol, Venn Predictor divides the examples $z_1, \ldots, z_n$ into categories, assigning $z_i$ and $z_j$ to the same category if and only if $A_n(\{z_1,\ldots,z_{i-1},z_{i+1},\ldots,z_n\},z_i) = A_n(\{z_1,\ldots,z_{j-1},z_{j+1},\ldots,z_n\},z_j)$, where $\{z_1,\ldots,z_n\}$ denotes a multiset (a bag), i.e., a collection in which each element has a multiplicity: a natural number indicating how many times it occurs.
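To make the category assignment concrete, here is a hedged sketch in Python. The one-dimensional objects and the nearest-neighbour taxonomy `nn_taxonomy` are illustrative assumptions, not part of the definition; any measurable finite partition would do.

```python
def nn_taxonomy(bag, z):
    """An example taxonomy (an assumption for illustration): the taxonomy
    value of z = (x, y), given the bag of the remaining examples, is the
    label of the example in the bag whose object is nearest to x."""
    x, _ = z
    nearest_x, nearest_y = min(bag, key=lambda w: abs(w[0] - x))
    return nearest_y

def categories(examples, taxonomy):
    """Group z_1, ..., z_n into categories: z_i and z_j fall into the same
    category iff the taxonomy assigns them the same value, each z_i being
    judged against the bag of the other n - 1 examples."""
    cats = {}
    for i, z in enumerate(examples):
        bag = examples[:i] + examples[i + 1:]   # the bag without z_i
        cats.setdefault(taxonomy(bag, z), []).append(z)
    return cats

examples = [(0.1, "a"), (0.2, "a"), (0.8, "b"), (0.9, "b")]
print(categories(examples, nn_taxonomy))
# → {'a': [(0.1, 'a'), (0.2, 'a')], 'b': [(0.8, 'b'), (0.9, 'b')]}
```

Note that the taxonomy receives the bag of the *other* examples, matching the definition: the order of $z_1,\ldots,z_{i-1},z_{i+1},\ldots,z_n$ must not matter.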

The category $T$ containing $z_n=(x_n,y)$ is nonempty (it contains at least this one element). Let $p_y$ be the empirical probability distribution of the labels in this category: $p_y(y') := \frac{\left|\{(x^*,y^*) \in T : y^* = y'\}\right|}{|T|}$; this is a probability distribution on $\mathbf{Y}$. The Venn Predictor determined by the taxonomy $\left\{A_n\right\}_{n \in \mathbb{N}}$ is the multiprobability predictor $P_n := \{p_y : y \in \mathbf{Y}\}$. The set $P_n$ consists of between one and $|\mathbf{Y}|$ distinct probability distributions on $\mathbf{Y}$.
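Putting the pieces together, the following sketch computes $P_n$ for a new object: for each candidate label $y$ it completes $z_n = (x_n, y)$, finds the category $T$ containing $z_n$, and returns the empirical label distribution on $T$. The toy taxonomy (which looks only at whether the object exceeds a threshold) and the data are assumptions made for illustration.

```python
def venn_predict(train, x_new, labels, taxonomy):
    """Return P_n = {p_y : y in Y} as a list of dicts, one per candidate label."""
    P_n = []
    for y in labels:
        examples = train + [(x_new, y)]          # complete z_n = (x_new, y)
        def tax_value(i):
            bag = examples[:i] + examples[i + 1:]   # the bag without z_i
            return taxonomy(bag, examples[i])
        target = tax_value(len(examples) - 1)       # taxonomy value of z_n
        # The category T: all examples sharing z_n's taxonomy value
        T = [examples[i] for i in range(len(examples)) if tax_value(i) == target]
        # Empirical distribution p_y of labels within T
        p_y = {l: sum(1 for _, lab in T if lab == l) / len(T) for l in labels}
        P_n.append(p_y)
    return P_n

# A toy taxonomy depending only on the object (an illustrative assumption):
threshold_taxonomy = lambda bag, z: z[0] < 0.5

train = [(0.1, "a"), (0.2, "a"), (0.8, "b"), (0.9, "b")]
print(venn_predict(train, 0.15, ["a", "b"], threshold_taxonomy))
# → [{'a': 1.0, 'b': 0.0}, {'a': 0.6666666666666666, 'b': 0.3333333333333333}]
```

The two announced distributions differ because completing $z_n$ with different labels changes the label counts inside the category, which is exactly the multiprobability aspect of the prediction.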

There are many Venn Predictors, one for each taxonomy. Some of them perform better than others on particular datasets.