
DDR for the head, or how our memory works.


Several years ago, based on analysis of the results of instrumental brain studies, researchers abroad created a new model of human memory, which most experts have accepted. The language barrier, however, has hindered the spread of this information, and there are practically no translations of it or references to it in Russian.
From this model it follows that the term “short-term memory” is merely a convenient scientific abstraction with no physiological equivalent. It is considered obsolete, and Miller's theory (1956) of its capacity of 7 ± 2 elements is regarded as just an abstract theoretical model for conveniently explaining the results of his experiments.
New research produced results that made it possible to build a model around the previously introduced term “working memory”, by analogy with a computer's RAM. Cowan (2001) found that the number of information elements on which the brain can simultaneously perform logical operations is four, and that this corresponds to processes actually recorded by instruments in the brain. Oberauer (2002) developed the theory further, establishing that only one of these four elements can be active at any given moment (incidentally confirming Ukhtomsky's theory of the dominant (1926)).
But how does this square with Miller's experimental data, repeatedly re-checked and confirmed by other experimenters? The answer is given by the theory of “long-term working memory” of Ericsson and Kintsch (1995). When working memory is fully loaded, its four elements are combined into a single structure and unloaded into that memory, a kind of “swap file”, leaving only one slot occupied by the address of the structure; three additional elements are then loaded into the freed slots, and we get the seven magic elements of “short-term memory”!
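As a purely illustrative sketch (the four-slot count follows the article, but the code, the names and the way a chunk is represented are my own assumptions), the mechanism can be caricatured in a few lines of Python: when all four slots fill up, their contents are merged into one chunk, swapped out, and only a reference to that chunk stays behind, freeing three slots for new elements.

```python
# Toy caricature of the four-slot working-memory model with chunking.
# Purely illustrative; only the slot count and the chunk-and-swap idea
# come from the article, everything else is an assumption.

WORKING_MEMORY_SLOTS = 4

def load(elements):
    """Load elements into working memory, chunking whenever all slots fill up."""
    slots = []                      # the four "active" slots
    long_term_working_memory = []   # the "swap file" of Ericsson and Kintsch

    for element in elements:
        if len(slots) == WORKING_MEMORY_SLOTS:
            # All slots are full: merge them into one chunk, swap it out,
            # and keep only a reference to it in a single slot.
            chunk = tuple(slots)
            long_term_working_memory.append(chunk)
            slots = [f"chunk #{len(long_term_working_memory)}"]
        slots.append(element)

    return slots, long_term_working_memory

active, swapped = load(["A", "B", "C", "D", "E", "F", "G"])
print(active)    # ['chunk #1', 'E', 'F', 'G']
print(swapped)   # [('A', 'B', 'C', 'D')]
```

Loading seven elements this way ends with four occupied slots that nevertheless give access to all seven items, which is roughly how the model reconciles Cowan's four with Miller's seven.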
Thus, our working memory resembles the simplest arithmetic calculator: two cells for the source data, one for the operation and one for the result. This is supported by Sweller's cognitive load theory (1998), which gained great popularity and demonstrated experimentally that the ability to perform logical operations on the elements of working memory drops sharply when it is full.
For example, try to analyze this phrase:
“800 MHz memory from TRANSCEND is cheaper than 667 MHz memory from Kingston, but 512 MB memory from Hynix is more expensive than 1 GB memory from TRANSCEND.”
There are only seven elements in this phrase, but since logical operations on them must be performed simultaneously, it causes noticeable cognitive discomfort. Similarly, consider the simplest problem for primary-school students:
“One and a half fishermen caught one and a half zander in one and a half days. How many zander will six fishermen catch in six days?”

Solving it in one's head causes insurmountable difficulty for 98% of the educated adult population, because the amount of data that must be held in memory exceeds the capacity of working memory, making logical inference impossible. On paper, though, the solution is elementary, since it allows the operations to be performed sequentially, without overloading working memory.
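A minimal sketch of that sequential, on-paper solution (the decomposition into steps and the use of exact fractions are my own illustration, not part of the article): each step needs only two or three values at a time.

```python
from fractions import Fraction

# Sequential, "on paper" solution of the fishermen problem:
# at every step only two or three values are in play.

fishermen = Fraction(3, 2)   # 1.5 fishermen
days = Fraction(3, 2)        # 1.5 days
fish = Fraction(3, 2)        # 1.5 zander caught

# Step 1: how many zander does one fisherman catch in one day?
rate = fish / (fishermen * days)   # 2/3 of a zander per fisherman per day

# Step 2: scale up to 6 fishermen fishing for 6 days.
answer = rate * 6 * 6              # 2/3 * 36 = 24

print(answer)  # 24
```

Held entirely in the head, the same solution requires keeping the rates, the intermediate results and the question itself in memory at once, which is precisely what overflows the four slots.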
This model makes it possible to take a completely different look at basic teaching methods and techniques, and to finally understand why most of them are ineffective and why others, completely non-obvious and paradoxical, are unexpectedly effective.
More information about these models can be found on Wikipedia or in an attempted translation. It would be useful to try applying them to the analysis of foreign-language teaching methods.


Source: https://habr.com/ru/post/11116/

