
Algorithmic Information Theory and Randomness of Individual Objects

The concept of entropy was introduced in the middle of the twentieth century by Claude Shannon. Intuitively, it is "the average number of bits of information in one value of a random variable." But it cannot be applied to individual objects (say, to the text of a novel or to DNA): where there is no ensemble of many homogeneous objects, there are no random variables either.
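For reference, the standard formula behind this description (not quoted from the lecture, but what it refers to): the Shannon entropy of a discrete random variable X taking n values with probabilities p_1, ..., p_n is

```latex
% Shannon entropy of a discrete random variable X
% with outcome probabilities p_1, ..., p_n:
H(X) = -\sum_{i=1}^{n} p_i \log_2 p_i
% Example: a fair coin (p_1 = p_2 = 1/2) gives
% H(X) = 1, i.e., one bit per flip.
```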

In the mid-1960s, it became clear to several people (Kolmogorov, Solomonoff, Levin, Chaitin) that the amount of information in (the complexity of) an individual object can be defined as the minimum length of a program that generates this object (with natural restrictions on the programming language). The resulting algorithmic information theory turned out to be connected with quite different areas: from philosophical questions in the foundations of probability theory (when do we reject statistical hypotheses?) to combinatorics (inequalities connecting the sizes of sets and their projections) and computability theory.
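To make the definition concrete: the complexity of an object x is the length of the shortest program that prints x, and this quantity is uncomputable in general. A standard computable stand-in is the length of a compressed encoding, which (for a fixed universal machine) upper-bounds the complexity up to an additive constant. Below is a minimal Python sketch of that idea; the choice of zlib as the "description language" is our illustrative assumption, not something from the lecture.

```python
import os
import zlib

def complexity_upper_bound(data: bytes) -> int:
    """Length of a zlib-compressed encoding of `data`.

    True Kolmogorov complexity is uncomputable; the compressed
    length merely upper-bounds it, up to an additive constant
    depending on the fixed universal machine (here: zlib's
    decompressor).
    """
    return len(zlib.compress(data, level=9))

# A highly regular string has a short description...
regular = b"ab" * 5000                 # 10000 bytes, simple pattern
# ...while typical random bytes admit no shorter description.
random_like = os.urandom(10000)        # 10000 bytes, incompressible

print(complexity_upper_bound(regular))      # small (tens of bytes)
print(complexity_upper_bound(random_like))  # roughly 10000 bytes
```

This is precisely the intuition behind randomness of individual objects: a string is "random" when it admits no description significantly shorter than the string itself.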
The lecture we have chosen for you today was given by the well-known mathematician Alexander Shen at the HSE Faculty of Computer Science. He defended his thesis, "Algorithmic Variants of the Concept of Entropy," under the supervision of Vladimir Uspensky, a student of Kolmogorov.

The lecture presents the basic definitions and some fundamental results. Alexander Shen graduated from Moscow State University. He is now a researcher at the A.A. Kharkevich Institute for Information Transmission Problems of the Russian Academy of Sciences and at a laboratory of the French National Center for Scientific Research (CNRS). His research interests include algorithms, Kolmogorov complexity, logic, and information theory. Almost all of the books Alexander Shen has written on mathematics and programming are freely available.

Source: https://habr.com/ru/post/261761/

