Simplicity, Complexity, Unexpectedness, Cognition, Probability, Information

by Jean-Louis Dessalles
(created 31 December 2008)
(updated July 2015)

# Conceptual Complexity

*Unexpectedness requires simple concepts*

The complexity of a situation or event is the length of the minimal description that makes it unique. The window on the right may be felt as simple (it is a window, quite a simple thing, isn’t it?).
No, it is not! Kolmogorov complexity in general, and in the particular context of Simplicity Theory, deals with minimal descriptions of specific objects or events, not of classes. For instance, the complexity of 131072 is neither the complexity of the class of 6-digit numbers nor that of the class of even numbers. It is the minimal amount of information that allows the observer to individuate the actual number (e.g. by saying that it is 2^17).
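The idea that a specific object is individuated by its shortest available description can be made concrete with a toy sketch. Kolmogorov complexity itself is uncomputable, so the code below only compares candidate descriptions of 131072 by their length in characters, a crude proxy chosen for illustration:

```python
# Toy proxy for description complexity: the length in characters of a
# Python expression that evaluates to the target number. This merely
# illustrates ranking candidate descriptions of one specific object;
# it is not Kolmogorov complexity, which is uncomputable.

def shortest_description(n, candidates):
    """Return the shortest candidate expression that evaluates to n."""
    valid = [expr for expr in candidates if eval(expr) == n]
    return min(valid, key=len)

candidates = ["131072", "2**17", "4**8*2", "int('131072')"]
best = shortest_description(131072, candidates)
print(best, len(best))  # '2**17' individuates the number in 5 characters
```

Note that "2**17" beats the literal "131072" itself: a regularity in the object (being a power of two) shortens its individuating description.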

The complexity of this window is not small, because we still need to individuate it among all other windows.

The fact that the object on the right is a window can be used as a starting point to compute both generation complexity Cw and description complexity C.

## Generation complexity Cw

Situations or events correspond to states of the world. As such, they cannot be grasped in every detail. This is not a problem, however, as we can focus on specific aspects of a given situation. Consider a feature f, like "being a window", that applies to situation s. Considering f as a logical predicate, this means that f(s) is regarded as true. Generation complexity may be computed through the computation sequence f*s:

Cw(s) < Cw(f(s)) + Cw(s|f(s))

When f(s) is already the case, Cw(f(s)) = 0. If f means ‘being broken’, some causal scenario may be required to make f(s) true, and then Cw(f(s)) > 0.
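The two cases above can be sketched numerically. All bit values below are invented for the illustration; Simplicity Theory does not prescribe them:

```python
# Hypothetical bit costs illustrating Cw(s) <= Cw(f(s)) + Cw(s|f(s)).
# The numbers are made up for the sketch.

def generation_bound(cw_f_s, cw_s_given_f):
    """Upper bound on Cw(s) via the computation sequence f*s."""
    return cw_f_s + cw_s_given_f

# Case 1: f = 'being a window' already holds, so Cw(f(s)) = 0.
print(generation_bound(0, 20))   # bound on Cw(s) = 20 bits

# Case 2: f = 'being broken' requires a causal scenario, Cw(f(s)) > 0.
print(generation_bound(12, 20))  # bound on Cw(s) = 32 bits
```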

## Description complexity C

The description complexity of s may go through the same computation sequence f*s. The computation is however slightly different.

C(s) < C(f) + C(s|f(s))

In contrast with generation, f(s) needs only to be described through a description of f and not to be generated as a fact.

The term C(f) may be called conceptual complexity of f for the observer in the current context.

Some concepts are more popular than others, and are thus simpler. For most people, for instance, the giant panda (Ailuropoda melanoleuca, here shown at the San Diego Zoo; photo Quentin Dessalles) is a simpler concept than the Arabian babbler (Turdoides squamiceps). Imagine that we store our concepts in lists, trees or graphs; the complexity of a given concept can then be assessed by the logarithm of its rank in a list, or by the length of the shortest path to the concept in a graph. An alternative (somewhat crude) way to get an idea of the complexity of concepts consists in ranking them according to the number of hits on a Web search engine (I got 69500 hits for Ailuropoda melanoleuca and 30900 for Turdoides squamiceps), and then taking the logarithm of the rank.
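The log-of-rank estimate can be sketched directly with the two hit counts quoted above (the figures are the ones given in the text; with only two concepts the ranks are trivially 1 and 2):

```python
import math

# Crude estimate of conceptual complexity from search-engine hit counts:
# rank concepts by popularity, then take log2(rank), as suggested in the
# text. Hit counts are those quoted in the article.

hits = {
    "Ailuropoda melanoleuca": 69500,   # giant panda
    "Turdoides squamiceps": 30900,     # Arabian babbler
}

# Rank 1 = most popular concept; complexity grows with the log of the rank.
ranked = sorted(hits, key=hits.get, reverse=True)
for rank, concept in enumerate(ranked, start=1):
    print(f"{concept}: rank {rank}, complexity = {math.log2(rank):.2f} bits")
```

On this crude measure the panda, ranked first, costs 0 bits to evoke, while the babbler costs 1 bit; with a realistic lexicon of thousands of concepts the spread would of course be larger.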

C(f) is the description cost one has to ‘pay’ when introducing feature f. Note that it may make C(s) > Cw(s) and thus unexpectedness U(s) negative.

Narratives accumulate features fi when describing unexpected situations:

C(f1*f2*...*s) = C(f1) + C(f2|f1) + C(f3|f1&f2) + ... + C(s|&fi)

sometimes until uniqueness is reached: C(s|&fi) = 0. For most situations, the complexity of defining s through features, C(f1) + C(f2|f1) + C(f3|f1&f2) + ... + C(s|&fi), is no less than Cw(s), which means that s cannot be described as unexpected using the fi: U(s) = 0. Interesting objects are unexpected. Relevant features are those which contribute to this unexpectedness by making C smaller than Cw. Conversely, irrelevant features contribute to a diminution of U. In narratives, irrelevant features provoke reactions of incomprehension.
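The role of relevant versus irrelevant features can be sketched numerically. The bit values below are invented for the illustration, and U(s) is clipped at zero as in the text:

```python
# Toy computation of unexpectedness U(s) = Cw(s) - C(s), where C(s) is
# bounded by the accumulated feature costs C(f1) + C(f2|f1) + ... + C(s|&fi).
# All bit values are invented for illustration.

def description_cost(feature_costs, residual):
    """Bound on C(s): cumulative conditional feature costs plus C(s|&fi)."""
    return sum(feature_costs) + residual

def unexpectedness(cw_s, c_s):
    """U(s) = Cw(s) - C(s), taken as 0 when the description is too costly."""
    return max(0, cw_s - c_s)

# Relevant features: cheap to state, yet they individuate s quickly.
c_s = description_cost([3, 2], residual=0)   # C(s) <= 5 bits
print(unexpectedness(20, c_s))               # U(s) = 15: s is unexpected

# Irrelevant features: costly to state, so C(s) >= Cw(s) and U(s) = 0.
c_s = description_cost([15, 10], residual=2) # C(s) <= 27 bits
print(unexpectedness(20, c_s))               # U(s) = 0: nothing to tell
```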
