SimplicityTheory
Simplicity, Complexity, Unexpectedness, Cognition, Probability, Information
by Jean-Louis Dessalles
(created 31 December 2008)
(updated July 2015)
Unexpectedness requires simple concepts
The complexity of a situation or event is the length of the minimal description that makes it unique. The window on the right may be felt as simple (it is a window, quite a simple thing, isn’t it?).
No, it is not!
Kolmogorov complexity in general, and in the particular context of Simplicity Theory, deals with minimal descriptions of specific objects or events, not of classes. For instance, the complexity of 131072 is not the complexity of the class of 6-digit numbers, nor of the class of even numbers. It is the minimal amount of information with which the observer can individuate that very number (e.g. by saying that it is 2^17).
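As a toy illustration (not from the original text), the following Python sketch compares the cost of two descriptions of 131072: spelling out its decimal digits versus naming the exponent in 2^17. The bit counts are crude stand-ins for true Kolmogorov complexity, which is uncomputable.

```python
import math

def bits_decimal(n: int) -> float:
    # Cost of spelling n out digit by digit: log2(10) bits per digit.
    return len(str(n)) * math.log2(10)

def bits_power_of_two(n: int) -> float:
    # Cost of the description '2**k', charging only for the exponent k
    # (the concept 'power of two' is assumed to be already available).
    k = math.log2(n)
    assert k.is_integer(), "n must be an exact power of two"
    return math.log2(k)

n = 131072
print(f"{bits_decimal(n):.1f} bits")       # ~19.9 bits for '131072'
print(f"{bits_power_of_two(n):.1f} bits")  # ~4.1 bits for '2**17'
```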
The complexity of this window is not small, because we still need to individuate it among all other windows.
The fact that the object on the right is a window can be used as a starting point to compute both the generation complexity Cw and the description complexity C. In the bounds below, s stands for the situation and f for a feature that applies to it (here, being a window):
Cw(s) ≤ Cw(f(s)) + Cw(s|f(s))
C(s) ≤ C(f) + C(s|f(s))
The term C(f) may be called the conceptual complexity of f for the observer in the current context.
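As a minimal numeric sketch (all bit values hypothetical), the two bounds above amount to simple additions: the cost of the feature plus the residual cost of individuating s once the feature is known.

```python
def cw_bound(cw_f: float, cw_s_given_f: float) -> float:
    # Generation complexity: Cw(s) <= Cw(f(s)) + Cw(s|f(s)).
    return cw_f + cw_s_given_f

def c_bound(c_f: float, c_s_given_f: float) -> float:
    # Description complexity: C(s) <= C(f) + C(s|f(s)).
    return c_f + c_s_given_f

# Hypothetical values for the window example: naming the concept
# 'window' is cheap, but singling out this particular window is not.
print(c_bound(c_f=5.0, c_s_given_f=25.0))      # bound on C(s): 30.0 bits
print(cw_bound(cw_f=12.0, cw_s_given_f=20.0))  # bound on Cw(s): 32.0 bits
```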
Some concepts are more popular than others, and are thus simpler. For most people, for instance, the giant panda (Ailuropoda melanoleuca, here shown at the San Diego zoo) is a simpler concept than the Arabian babbler (Turdoides squamiceps). Imagine that we store our concepts in lists, trees or graphs; the complexity of a given concept can then be assessed as the logarithm of its rank in a list, or as the length of the shortest path leading to it in a graph. An alternative (somewhat crude) way to get an idea of the complexity of concepts consists in ranking them according to the number of hits returned by a Web search engine (I got 69500 hits for Ailuropoda melanoleuca and 30900 for Turdoides squamiceps), and then taking the logarithm of the rank, as in the sketch below.
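The following Python sketch implements this crude log-of-rank estimate on the two hit counts quoted above. With only two concepts the ranks are trivially 1 and 2; in a realistic lexicon the ranks, and hence the complexities, would span thousands of concepts.

```python
import math

# Hit counts quoted in the text (Web search engine, circa 2015).
hits = {
    "Ailuropoda melanoleuca": 69500,  # giant panda
    "Turdoides squamiceps": 30900,    # Arabian babbler
}

# Rank concepts from most to least popular; more popular concepts
# get smaller ranks and therefore smaller (simpler) codes.
ranked = sorted(hits, key=hits.get, reverse=True)
for rank, concept in enumerate(ranked, start=1):
    print(f"C({concept}) ~ log2({rank}) = {math.log2(rank):.2f} bits")
```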
(photo Quentin Dessalles)
C(f) is the description cost one has to ‘pay’ when introducing the feature f. Note that it may make C(s) larger than Cw(s), and thus the unexpectedness U(s) = Cw(s) - C(s) negative.
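Continuing with hypothetical bit values, this sketch shows how a costly feature can drive U(s) below zero:

```python
def unexpectedness(cw_s: float, c_f: float, c_s_given_f: float) -> float:
    # U(s) = Cw(s) - C(s), with C(s) replaced by its bound C(f) + C(s|f(s)).
    return cw_s - (c_f + c_s_given_f)

# With a cheap feature, the situation stays unexpected...
print(unexpectedness(cw_s=30.0, c_f=3.0, c_s_given_f=10.0))   # 17.0
# ...but a costly feature makes U(s) negative: nothing worth telling.
print(unexpectedness(cw_s=30.0, c_f=28.0, c_s_given_f=10.0))  # -8.0
```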
Narratives accumulate features fi when describing unexpected situations:
C(f1 & f2 & ... & fn & s) = C(f1) + C(f2|f1) + C(f3|f1&f2) + ... + C(s|f1&...&fn)
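Under the same assumptions (hypothetical bit costs), the accumulation is just a running sum of conditional complexities plus a residual term:

```python
def narrative_complexity(conditional_costs: list[float],
                         residual: float) -> float:
    # Sum of the C(fi | f1 & ... & fi-1) terms, plus the residual
    # cost C(s | f1 & ... & fn) of individuating s given all features.
    return sum(conditional_costs) + residual

# Hypothetical costs: each new feature gets cheaper given the earlier ones.
print(narrative_complexity([12.0, 6.0, 3.0], residual=9.0))  # 30.0 bits
```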