
How to Spot a Madman 2: The Splendor and Misery of Pathopsychological Diagnostics

Hi, Habr!

My name is Christina, and I am a clinical psychologist. About two years ago, I published a post on Geektimes about pathopsychological diagnostics, describing how the psyche of patients in a psychiatric hospital is examined to determine the presence or absence of disturbances of thinking, memory, and other mental functions.

[Image] Everything you need to know about our pathopsychological diagnostics. Source: [43, pp. 133-134].
In the comments, I was asked quite a few questions about how reliable the methods used are, where the limits of their applicability lie, and so on. In my answers, I consistently defended the tools in question and psychiatry/psychology in general.

Two years have passed, and I am no longer the naive specialist freshly emerged from the state system of psychiatric care: I have worked in private practice, collaborating closely with a psychiatrist, and have seen those aspects of psychiatry that were previously hidden from me. My views on the mental health industry have changed somewhat.

Today's post will not be so enthusiastic and will perhaps be somewhat more emotional. It contains a certain amount of insider information for which I, unfortunately, cannot provide proof, but I will try to substantiate my arguments with references at the key points. Let's just say that things in the industry are not as rosy as I would like, and I believe it is necessary to speak openly about its problems.

This post was written in collaboration with the user hdablin. The text turned out quite long, with few pictures and many lengthy arguments, but if you are interested in the problems of diagnostics in psychiatry and related disciplines, welcome under the cut.

An integrated approach to the diagnosis of mental illness


A comprehensive approach is important in the diagnosis of mental illness [1, p. 22], and much is said about this. Indeed, an assessment of the patient's condition should not be made on the basis of individual signs of a mental disorder taken in isolation. One must perceive the person as a whole, in all the diversity of his mental activity, building a progressively refined, internally consistent model of the patient from the clinical interview and/or the pathopsychological experiment, supplementing and modifying it as new diagnostic data become available.

Simply put, it is unacceptable to label a person with a schizophrenic (or any other) pathopsychological symptom complex solely on the basis of, say, the common feature he finds between the concepts "cat" and "apple" (a wonderful answer from one patient: "bones inside").

But, unfortunately, many specialists do just that: there is no guarantee that, once you are in the system, you will not receive the label "schizophrenic" merely because you gave an answer not provided for in the manuals of the 1950s-70s, or in the head of the specialist.

For example, I once came across a report in which a psychologist noted a schizophrenic-type thinking disorder, referring to the fact that, in the "exclude the odd one out" technique, of the four images (a hot-air balloon, an airplane, a car, and a steamship) the subject excluded the balloon, on the grounds that this vehicle is outdated.

Common Thesaurus Problem


Comprehensive diagnostics is an important and even somewhat fashionable topic, but what is said about it sometimes puzzles me. One of the domestic authorities in the field of diagnostics openly promotes the idea that the methods used in the diagnostic process should "have a common thesaurus". And, surprise-surprise, this man builds his scientific work on creating a set of techniques that share a single terminological apparatus.

The simple idea that the civilized world long ago learned to build systems of "translation", of mutual mapping between different models of personality (for example), precisely so that techniques based on different models can be compared with one another and combined into that very comprehensive assessment, is completely ignored.

And why? Simply because, if you take this into account, it becomes very difficult to advertise your own "unparalleled" system of methods. After all, its value, according to the author, lies precisely in that single thesaurus, which in fact nobody really needs.

Take, for example, the problem of divergent models of personality description, pointed out by this very authority: there are many ways to describe personality (true enough), and different assessment methods are built on different approaches to describing it: some speak of five factors, some of accentuations, and some use the Myers-Briggs typology.

But back in the late 1980s a study [2] was published showing that the Five-Factor Model outperforms the Myers-Briggs typology. It also shows exactly which scales of the Five-Factor Model correlate with the Myers-Briggs dimensions, which makes it possible (not perfectly accurately, but still) to map data described within one model onto the other.
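For illustration: once the scale correlations are known, such a mapping amounts to a simple linear projection. Here is a minimal sketch in Python; the correlation coefficients are made up for the example (roughly the magnitude such studies report) and should not be read as the actual values from [2].

```python
# Illustrative sketch: projecting z-scored scores from one personality model
# onto another via known scale correlations. Coefficients below are invented.

MBTI_TO_FFM = {
    # FFM dimension -> {MBTI continuous scale: hypothetical correlation r}
    "Extraversion":      {"E-I": -0.74},  # higher E-I score = more introverted
    "Openness":          {"S-N":  0.72},
    "Agreeableness":     {"T-F":  0.44},
    "Conscientiousness": {"J-P": -0.49},
}

def estimate_ffm(mbti_z):
    """Rough linear projection of z-scored MBTI scales onto FFM dimensions."""
    return {
        ffm_dim: sum(r * mbti_z[scale] for scale, r in weights.items())
        for ffm_dim, weights in MBTI_TO_FFM.items()
    }

# A hypothetical subject: strongly extraverted, mildly intuitive, etc.
profile = {"E-I": -1.0, "S-N": 0.5, "T-F": 0.2, "J-P": -0.8}
print(estimate_ffm(profile))
```

The point is not the numbers but the mechanism: with published correlations between two models' scales, a clinician can translate a profile from one vocabulary into another, imperfectly but usefully.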

Accentuation models popular in Russia, such as Leonhard's classification and Lichko's classification that builds on it (itself largely based on Gannushkin's classification of psychopathies), are not widely used in the West.

Instead, somewhat similar classifications are used there, such as the psychoanalytic diagnostic model of McWilliams, which has no unambiguous mapping onto the Five-Factor Model (they differ in nature [3]), or the clinical scales of the Shedler-Westen questionnaire [4], which relate quite well to the Five-Factor Model [5].

But the most interesting thing is that the scales of the Shedler-Westen questionnaire correlate well with the psychoanalytic diagnostic model proposed by McWilliams [6]. And, of course, they correlate with the classification used in the DSM.

Why am I telling you all this? Because, firstly, the problem of divergent thesauruses is partially solved (sufficiently so that clinicians can combine techniques based on different models of personality for a comprehensive analysis), and, secondly, because Western scientific and clinical practice takes a more robust approach: instead of developing ever more unique methods and description models, it explores the relationships and the possibilities of mutual mapping between the models that already exist (the psychoanalytic one, the clinical one from the DSM, the Five-Factor Model).

Duplicated diagnostics that is useless in practice


Initially, it was assumed that experimental psychological research would be used in addition to the general clinical examination of patients [11, p. 22] performed by a psychiatrist, in order to solve the problems of differential diagnosis, early detection of pathological changes in the psyche, assessment of the effectiveness of therapy, and so on.

In theory, this should mean that a person is checked by two different specialists using two different sets of diagnostic tools, which seems to promise greater accuracy and a lower likelihood of errors. Say one of the specialists makes a serious mistake: his results will not match his colleague's, they will discuss the patient and find the error, and it will not go unnoticed.

In practice (insider information, no proof), the situation is not so rosy: the psychologist is a "downstream" specialist relative to the psychiatrist, and in some cases he simply adjusts his results to the diagnosis the psychiatrist has already made. There are even worse situations, when a psychiatrist simply comes to a psychologist and asks what diagnosis to give a particular patient (the psychologist has no formal right to make diagnoses and was never trained to, but it happens).

In general, in the industry, at least in state institutions, there is a phenomenon I would call "the curse of the first diagnosis". Its essence is very simple: nobody wants to get involved in changing a patient's diagnosis. If someone somewhere once found something schizophrenic in him, subsequent specialists (psychiatrists and psychologists) will most likely see that "schizo-component" too, regardless of whether the patient actually has anything of the sort, simply because stating its absence means either questioning the qualifications of the previous specialist (or one's own), or condemning oneself to a pile of paperwork. Therefore, in practice, diagnostic results are often simply "adjusted" to those obtained earlier.

Psychiatrists are not much better off: changing a diagnosis is a painful bureaucratic procedure, and they try to avoid it by making sure the results of all the patient's examinations coincide. The error-protection mechanism does not work.

Ignoring environmental conditions during testing


One of the most significant violations of the ideology of the comprehensive approach in diagnostics is the neglect, by many psychologists, of the parameters of the environment in which the experimental psychological study is carried out.

The easiest way to explain this is with the example of ADHD, a condition characterized, among other things, by an inability to focus on a task and high distractibility by extraneous stimuli. They take such a person, lead him to a psychologist in a quiet room, the psychologist conducts the study, the person passes it successfully, and the psychologist writes that everything is fine.

But just quietly turn on a radio, and the results of the study would change radically: in such patients, performance on tests drops very sharply in the presence of even the smallest distracting stimulus [7]. In practice, however, few psychologists bother with this; they usually run studies on a conveyor belt in the silence of the office, and then note positive dynamics, aha.

And even if someone does conduct such a study in noisy conditions, he does not reflect this fact in his report: the accepted format for writing up the results of a pathopsychological experiment does not provide for a description of the environment in which the testing took place, although noise can radically affect its results [7].

Of course, this problem manifests itself not only in the diagnosis of ADHD; it is a general problem that applies to all results of neuro-/pathopsychological research.

Modern sources pay close attention to standardizing both the testing procedure and the parameters of the environment in which it is carried out [8, par. 16.316]. In practice, these parameters are not just left unstandardized; they are not even mentioned in the text of the report. But we will talk more about standardization below.

Ignoring the parameters of pharmacotherapy in pathopsychological testing
Many psychologists do not understand psychopharmacology, and this is terrible. Worse, no one indicates in his report what pharmacological treatment the patient was receiving at the time of the study. And a lot depends on it.

How can I know what caused the emotional flatness a colleague noted in his report: the disease, or the medications being taken? (The trick is that a person far from psychopharmacology, even a clinical psychologist, will find it hard to tell whether he is seeing the illness itself or the attempts to treat it.)

Even though world practice has long recognized the influence of drugs on the results of neuro-/pathopsychological diagnostics [8, par. 16.271], our psychologists may not even ask what medications the patient is taking. As a result, for example, anxious patients who have been prescribed high doses of neuroleptics (a practice that is not uncommon in the Russian Federation) become "schizophrenics" simply because the psychologist does not understand why the person in front of him is sluggish, with flattened facial expressions, and so on.

In an ideal world, neuro-/pathopsychological diagnostics should be performed "dry", without drugs [29, p. 28] (or at least without neuroleptics). Where this is not possible, the drugs and dosages under which the study took place must be indicated, and a psychiatrist should be involved (if the psychologist himself does not understand how a given drug affects the parameters being assessed) in order to identify the effects of the patient's medications and separate the flies (drug effects) from the cutlets (his actual speech, thinking, and so on).

Ignoring the parameters of the physiological state of the patient


So, many psychologists do not understand pharmacology, or do not sufficiently account for its influence on experimental data (although clinical psychology programs at medical universities include a corresponding course, and the same NEI readily accepts psychologists for training). I find this hard to accept, but formally, it seems, they are not required to know it (although the principle "if you can't do it yourself, ask someone who can" ought to apply here). This much I can, with a stretch, understand.

But testing also often ignores other characteristics of the person being examined that can affect the diagnostic results: pain [8, par. 16.281], fatigue (not clinical asthenia, but ordinary human tiredness, as when a patient is sent for diagnostics right after work duties on the ward) [8, par. 16.264], psycho-emotional state [8, par. 16.257], and the stress of the testing procedure itself [8, par. 16.292].

The lack of a systematic approach to diagnosis


In my previous article, I devoted many fine words to the importance of a systematic approach in pathopsychological diagnostics: one cannot judge the patient as a whole on the basis of one or two signs.

What do we see in practice? Often, in one and the same psychologist's report, you can find, for example, both significant schizophrenic-type thinking disorders and their absence, both a well-formed motivation to undergo the study and its evident insufficiency, and other contradictions.

For some reason, some of my colleagues do not even try to form a single view, a single model of the patient; if the techniques give conflicting results, they do not rush to investigate and retest, but simply dump the different results into different parts of the report. As in: here is what I got, and you figure out for yourselves what it means. Infuriating.

Arbitrary interpretation of method results


The rather low validity and reliability of many of the techniques used in practice (we will look at specific examples, with proof, in the next section) mean that the interpretation of test results is often highly subjective.

And this applies not only to projective tests [9, p. 8], but also to seemingly standardized questionnaires such as the SMIL. Even if you read Sobchik herself [10, p. 5], you will see a huge bug of this test: "the quantitative indicators of the method are not absolute: they should be considered within a generalized set of data about the person being studied".

The phrase is good and correct, but it demonstrates, among other things, that in practice even the same SMIL profile (test result) can be interpreted differently by different specialists. Again, the problem would be partially solved if colleagues included the raw results of the technique in their reports (a special short form was even developed for recording them), but often they do not: only the interpretation.

And the SMIL is far from the worst case here. All those "pictograms" and other "comparisons of concepts" allow far greater freedom of interpretation, to say nothing of the drawing tests. This, of course, also bears on validity and reliability, which we will discuss a little later, but here I want to say to colleagues: please record not only your conclusions (which, surprise-surprise, may be wrong), but also the data on the basis of which you made them. A bit more writing, but the result is worth it.

Lack of standardization in the testing methods used


Not only is every psychologist free to use his own set of test techniques (which, perhaps, is not so bad, since it lets the specialist choose the best tools for each particular case, though sometimes completely unsuitable ones are chosen), but a unified naming of techniques simply does not exist in the industry.

Take, for example, the "Pictogram" technique (a modified Luria test of mediated memorization). I have come across at least four different sets of stimulus concepts, and when I see data obtained with this technique in someone else's report, I have no idea which particular set was used.

In my colleagues' defense, I will say that a psychologist who wants to specify the exact version of the technique faces real difficulties, simply because no standard reference catalog of techniques exists; to identify a technique precisely, he would have to write out a long reference to the source it was taken from. This is simply inconvenient and long, and the doctors may not understand it.

Uselessness of pathopsychological diagnostics


All the factors listed above, along with many others, mean that in a number of cases neuro-/pathopsychological diagnostics becomes a completely useless procedure.

Don't get me wrong, I love diagnostics, including pathopsychological diagnostics, and I think it can be very useful when applied properly and appropriately (I will talk about that too), but the fact remains that it often gives nothing to the doctor, the patient, or the psychologist.

This, in fact, was the reason I left the psychiatric hospital. At the time, I thought the problem lay only in how the process was organized at our particular hospital, but it turned out that the problem area is much wider and extends far beyond my previous job.

Diagnostics under medication


I have already touched on this, but here I want to unpack my subjective understanding of the problem a little. The point is that in practical work a psychologist cannot avoid diagnosing people who are taking psychiatric drugs: if a patient is in an acute state, that acuteness usually has to be relieved first, and only then can one figure out where it came from. This is normal and justified; it is even good that nobody tries to run a complicated, lengthy diagnostic procedure first.

But there is one "but": once a person has been started on drugs (especially neuroleptics, but not only), the list of questions a psychologist can answer with his study narrows significantly. Yes, I can see how stable the patient's attention is right now, but I cannot make any intelligible assumptions about why it is that way: is it a characteristic of the person himself, or an effect of the pills?

To be precise, it is even worse: I can make assumptions, but they will be my subjective opinion; the techniques mass-used in domestic diagnostics provide no coherent procedure for differentiating the influence of drugs. And what use are the validity and reliability of a technique if, in the end, I decide at my own discretion which data to count as properties of the subject's psyche and which to discard, writing them off to the pills?

And while in private practice I can decline such diagnostics, explaining their uselessness to the client, in the hospital the psychologist has to churn out these knowingly unreliable diagnostics on a conveyor belt.

Again, of course, an experienced psychologist will not go far wrong here and will manage to account for the influence of the drugs more or less correctly, but that will be his subjective opinion; it has very little to do with the instrumental diagnostics one would like to boast of.

The formality of repeat diagnostics


Conducting a proper pathopsychological study takes me one and a half to four hours, sometimes more. That includes taking the anamnesis, the clinical conversation, the pathopsychological experiment itself, and explaining the result to the client in language a non-specialist can understand.

In the hospital, a specialist sometimes spends about fifteen minutes on a repeat diagnostic study (which, in theory, is carried out to track the dynamics of the patient's condition).

Not because he is so much more experienced than I am, but because some psychologists (and even psychiatrists sin with this) treat repeat diagnostics as a formality. "Schizophrenia is incurable; if there is an F2x, there is no point in watching him, just write him up a schizophrenic symptom complex and be done with it!" Such is the logic behind a significant share of repeat examinations.

And I am not even talking about the fact that these repeat examinations are often carried out not according to clinical indications, but according to managerial considerations beyond comprehension (there are purely bureaucratic requirements on the frequency of psychological examination).

Uselessness for the patient


Having now worked for several years within a properly organized process, I can say with confidence that the diagnostics we performed in the hospital were useless to the patient.

The patient learned nothing about himself; no one explained to him what results were obtained; they were not even read out to him, since that would count as interference in the treatment, and the doctors would not appreciate such a step.

To say that this procedure improves compliance would also be wrong. It is here, in private practice, when I explain to the client what I saw, why I believe I saw exactly that, how it relates to the diagnosis that will be made and the treatment the psychiatrist will prescribe, that I see the client's willingness to cooperate grow.

And there, in the hospital, many patients were quite cool about the procedure of pathopsychological diagnostics: at best, skeptical, at worst - sharply negative. And I understand them.

Uselessness for the treatment process


But perhaps this diagnostics was useful at least for the doctor (and, through him, indirectly for the patient)?

Not in the least. Our results went no further. The only thing required of them was that they coincide with what the psychiatrist needed. Some doctors insisted that we tweak our conclusions to fit their vision, while others simply asked us what diagnosis to make (yes, domestic psychiatry is in... well, in a deplorable state).

No one sat down to adjust pharmacological regimens based on our results, no one prescribed or canceled psychotherapy, no one changed sleep and rest schedules. In short, once the bureaucratic stage of ensuring "uniformity of diagnoses" was over, the impact of our work on the patient's treatment was near zero.

Lack of system education in the field of psychiatry and psychopharmacology for psychologists



Yes, you can study at the same NEI, or read Stahl, or Kaplan and Sadock (which I recommend to those colleagues who have not yet), but this does not replace a high-quality system of university or postgraduate education in these areas. Yes, there are a number of psychiatry courses for psychologists, but what I have seen of them is sheer horror (except for the NEI, of course).

And I do not understand how a person who knows neither psychiatry nor psychopharmacology (a psychologist, even one called "clinical" ten times over) can work with psychiatric patients. Yes, he always has the opportunity to learn it all himself, to learn from psychiatrists, and so on, but how many bother?

Unfortunately, few do.

Catastrophic obsolescence of stimulus material


Even setting aside the validity and reliability of the techniques used (we will certainly get to nitpicking those a little later), the blatant obsolescence of the stimulus material forces me to facepalm.

Take, for example, Leontiev's mediated memorization technique (the details do not matter here): it uses cards depicting various objects. There is an ink pen [11, p. 85], but there is no smartphone. That is all you need to know about how well the techniques in use match modern realities.

Yes, you can say: "don't use Rubinstein, use more modern techniques". But where are they? Oh, in Lezak's textbook? But they have not been adapted and validated on a Russian-speaking population (at least, not all of them).

No, everything is really sad.

Techniques


OK, we have discussed some of the problems of neuro-/pathopsychological diagnostics in general; now let's talk a little about the techniques themselves. To begin, let's define the concepts of validity and reliability.

Validity is a characteristic showing how well the data obtained correspond to what we intended to study [12]. In other words, it is the property that lets us be sure we measured exactly what we wanted to measure.

Reliability is the degree of a technique's resistance to measurement error, and it is related to the repeatability of results. With a reliable measurement tool, we can expect its results for the same object under the same conditions to remain constant [12] (unless, of course, the object's characteristics have changed).
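To make the repeatability idea concrete: test-retest reliability is commonly estimated as the Pearson correlation between two administrations of the same test to the same subjects. A minimal sketch, with invented scores:

```python
# Test-retest reliability as the Pearson correlation between two
# administrations of the same test. The scores below are invented.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

first_run  = [12, 15, 9, 20, 17, 11]   # same six subjects, time 1
second_run = [13, 14, 10, 19, 18, 10]  # same six subjects, time 2
print(round(pearson_r(first_run, second_run), 3))  # close to 1.0 = reliable
```

A coefficient near 1.0 means the test ranks the same people the same way on both occasions; a technique whose repeat results scatter widely cannot claim much reliability, whatever its manual says.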

Let's go over the main classes of methods used in pathopsychological diagnostics and see how they possess these qualities.

Projective tests


Let's start with projective tests, the least standardized and structured ones. Techniques such as the TAT, the PAT, "Non-existent Animal", "House-Tree-Person" and others are projective. What they have in common is that the subject must himself complete, interpret, or develop the stimulus provided by the experimenter [1, p. 37].

These tests are based on the mechanism that Freud and Jung called "projection". It is believed that by activating this mechanism one can "pull out" contents repressed into the subject's unconscious: attitudes, experiences, negative emotions, and so on [1, p. 37].

An interesting fact: even sources devoted to projective tests themselves speak of the "relatively low reliability of the results obtained, related to the subjectivity of interpretation", and note that "it is difficult to confirm the reliability and validity of drawing techniques by scientific methods" [9, p. 8]. From myself I will add that this applies not only to drawing tests; it is a general characteristic of the entire class of techniques.

A meta-analysis conducted back in 2000 showed [13] that the Rorschach test, the Thematic Apperception Test, and the test known in Russian literature as "Figure of a Person" do not have sufficiently high validity. The authors recommend refraining from using these techniques in forensic and clinical practice, or at least limiting oneself to the small number of interpretations that have at least some empirical support.

A more recent meta-analysis, from 2013 [14], also reports insufficient validity of the Rorschach test. But this does not prevent some domestic authors from including it in their list of "main projective methods of pathopsychological diagnostics" [1, p. 38], lol.

Regarding the "Drawing of a Non-existent Animal", there is even a domestic study [15] in which
it was found that a number of interpretations of the "non-existent animal" drawing described in the literature were not confirmed. In particular, the presence of teeth, horns, and claws is not associated with the integral aggressiveness indicator as measured by the Buss-Durkee test.

In fairness, I want to note that some domestic sources do give an adequate assessment of the validity and reliability of projective techniques; Lubovsky, for example, openly acknowledges their limitations [16].

With the "Figure of a Person" test, things are not much better. A 2013 study showed that it should not be used to assess children's intellectual development [17]. It has not justified itself as a tool for detecting sexual abuse in a child's history [18]; as a test of a child's cognitive development, social adaptation, or personality traits it is also not very good [19]; and it is unsuitable as a tool for diagnosing or screening behavioral disorders in children [20].

The Luscher test, popular in the domestic psychological community, likewise has no evidence of validity. A study from 1984 [21] found no significant correlation between its results and the MMPI (the standard benchmark for tests claiming to reveal personality traits), and the Luscher test's popularity in psychological circles is perfectly well explained by the Barnum effect.

I think that is enough proof. It is obvious to any psychologist that projective tests cannot have high reliability, simply because they allow very high variability in the evaluation of the same drawings or stories. Indeed, in some cases the results (in the form of conclusions and inferences, rather than the drawings themselves) say much more about the specialist interpreting them than about the subject himself.

Questionnaires


Let's start with the "gold standard" of personality diagnostics in the Russian Federation: the Standardized Multifactorial Method of Personality Research (SMIL). This test is an adaptation of the world-famous MMPI (Minnesota Multiphasic Personality Inventory), created during World War II for the professional selection of military pilots [10, p. 3].

One can talk at length about the validity and reliability of the MMPI itself, but the trick is that the SMIL is not the MMPI, and transferring data obtained for one test to the other is incorrect. The SMIL training manual says that "statistical processing and comparative analysis of psychodiagnostic results against data from objective observation (sometimes spanning many years) confirmed the reliability of the methodology" [10, p. 11]. Cool, huh.

Only, you see, the training manual [10] has no bibliography section at all; no references are given, and it is unclear where one could read about those "sometimes long-term" studies.
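For context, actually demonstrating a questionnaire's reliability is not exotic: internal consistency, for instance, is routinely reported as Cronbach's alpha, computed from the item-level scores. A minimal sketch with invented data:

```python
# Internal-consistency reliability (Cronbach's alpha) for a k-item scale.
# The item scores below are invented; a real validation study would report
# this kind of statistic together with a description of the sample.

def variance(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: one inner list of subject scores per questionnaire item."""
    k = len(items)
    item_var = sum(variance(it) for it in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-subject totals
    return k / (k - 1) * (1 - item_var / variance(totals))

# 4 items answered by 5 subjects (rows = items, columns = subjects)
items = [
    [3, 4, 2, 5, 4],
    [2, 4, 3, 5, 3],
    [3, 5, 2, 4, 4],
    [2, 3, 2, 5, 3],
]
print(round(cronbach_alpha(items), 3))
```

The statistic itself is a few lines of arithmetic; what a manual owes its readers is the sample description and the resulting numbers, which is exactly what is missing here.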

The closest thing to a description of standardization in the SMIL manual is this passage [10, p. 12]:
The questionnaire text was translated with the help of qualified philologists well versed in the subtleties of word usage and phrase construction. The translation was refined 9 (!) times after successive trials of the test on various contingents of the domestic population. The frequency of Americans' normative answers was compared with the response results of a representative group of 940 Russians.

There are no references to the relevant work; the names of the "qualified philologists" are not given, the "contingents of the domestic population" are not described, and the result of comparing the Americans' and Russians' answers is not reported.

In another of her books, Sobchik is remarkably self-critical [22, sect. "5.1. Individual Typological Questionnaire (ITO)"]:
It so happens that theories are created on the basis of a rich imagination and are allegedly confirmed by a few observations described by their authors. <...> Many psychological tests are created without any definite theoretical basis.

The funny thing is that for the Google Scholar query "Sobchik SMIL validity reliability" I could not find not only Sobchik's own research, but even any mention that it exists at all. And it seems to me the problem is not with Google's search algorithms.

One could, of course, argue that the MMPI is valid and reliable, and that therefore the SMIL based on it is too. Like hell it is. We open Sobchik again [10, p. 12] and read that "some statements were changed <...> 26 statements that turned out to be ballast were removed from the questionnaire". No, after such mangling, data on validity and reliability cannot be extrapolated to the SMIL.

I can say that finding evidence of validity and reliability (other than Sobchik's own unsupported statements) proved impossible not only for me, but also for the author of the paper “Techniques that kill science” [23, p. 263]:
The above data indicate the lack of a theoretical approach in the development and adaptation of Lyudmila Nikolaevna Sobchik's methods. At present, these diagnostic tools are unsuitable for constructing a psychological prognosis or for application in practical psychology. They need refinement and proper adaptation.

I especially liked the author's conclusion [23, p. 263]:
The popularity of the modified Sobchik tests, which are used in schools and universities and in professional selection, points to the lack of professionalism of the editors who publish diagnostic books for psychologists and teachers recommending these techniques to students, and to the extremely poor state of psychology as a diagnostic tool in scientific research.

Harsh, but to the point.

But maybe SMIL is the only such problematic questionnaire popular in Russian diagnostics? Let's check. Take, for example, the SMOL test (“Abbreviated Multifactor Questionnaire for Personality Research”), developed not by Sobchik but by Zaitsev. It is an adapted version of the Mini-Mult test, which in turn is an abbreviated version of the same MMPI.

In the paper presenting the SMOL, the author writes [24]:
[The quoted passage was lost in translation: the Cyrillic text did not survive conversion, leaving only the year 2000 and scattered initials of the names and methods it listed.]

Well, that's just great, damn it! Compared with what results, exactly? Obtained using which methods? Perhaps with the invalid and unreliable SMIL? No answer ©.

My attempts to find at least some data on the validity and reliability of the SMOL in the same Google Scholar, with the queries “Mini-Mult test validity reliability” and “SMOL test validity reliability”, again brought no results. And no, I found nothing in CyberLeninka, eLibrary, or anywhere else either. A lone 2014 study [25], concluding that the SMOL has at best limited applicability in schizophrenia, is everything I managed to dig up on the topic.

“What about the ITO (Individual Typological Questionnaire)?” a curious reader will ask. “Nothing,” I will answer. Again, all the data on its validity and reliability that I managed to find comes down to two quotes from the method's author [22, sect. “5.1. Individual Typological Questionnaire (ITO)”]:
In the period when my research in the field of personality psychology began, all work proceeded under heavy critical fire aimed at tests in general, and especially at attempts to derive a person's personality traits from his innate individual properties. In some ways this situation proved beneficial. One had to hold oneself to the strictest requirements, to work <...>

And:
Over the past ten years the ITO methodology has been widely used for a variety of research purposes: in the clinic of borderline mental disorders, in studying the processes of personality deformation under adverse conditions or emotional “burnout” in various types of professional activity, and for personnel selection and vocational guidance. The results obtained confirm in practice the concept of the theory of leading tendencies.

I could take the “theory of leading tendencies” itself for a ride as well, and only the fact that it has nothing to do with the stated topic of this post keeps me from doing so. In short, our diagnostic methods rest on their author's word of honor. And then we wonder: “Why do people dislike psychologists so much?”

But maybe it is only the MMPI-derived tools we use in vain? Okay, let's look at the “Prognoz-2” (“Prediction-2”) methodology, also known as “NPU-2”. Rybnikov's original article presenting it is not freely available (or at least I have not found it). And... there are no other data on its validity and reliability. None.

I think that is enough questionnaires (there are many more); let's move on to the next section.

Study of thinking


My favorite section. Quite often in my practice I have to examine clients' thinking for the presence or absence of schizophrenic-type disturbances. This is a rather important part of the diagnostic process, on the basis of which the psychiatrist may make decisions about the patient's treatment strategy. The requirements for the validity and reliability of the methods used here are therefore extremely high.

Most of the domestic methods used in this field were developed between the 1920s and 1970s [26]. These techniques migrate from textbook to textbook, often changing little over time. Let's consider them.

The first thing many of my colleagues will recall when the diagnosis of thinking comes up is the “Pictogram”. Originally proposed by A.R. Luria for studying mediated memorization in the 1960s (according to other sources it was proposed by Vygotsky [29, p. 105]), it was substantially refined by B.G. Khersonsky in the 1980s [27, p. 5].

In my subjective opinion, Khersonsky's “Pictogram” is the most fully developed version of this technique in domestic pathopsychology, so that is the one we will consider. Khersonsky himself, in the work presenting the technique, says that “in individual psychodiagnostics the revealed changes are sometimes so obvious that they need no measurement or refinement” [27, p. 6]. A strange thing to hear from the person proposing to standardize one of the most popular techniques, but oh well.

In the same article Khersonsky states that “the practice of applying the pictogram has shown its particular validity in the diagnosis of schizophrenia” [27, p. 14], referring to works by Rubinstein, Longinova, and Bleicher from the 1970s.

This is the only reference to a validity study of the “Pictogram” in the article. Beyond that, the author compares the “Pictogram” with... the Rorschach test [27, p. 83] and drawing tests [27, p. 91]. Great benchmarks to compare against, sure.

Let us now look at the works the author cites. It troubles me somewhat that I could not find any more recent articles on the validity and reliability of the “Pictogram”, but fine: perhaps the evidence there was so convincing that no rechecking was ever needed (a ridiculous thought, especially given that [28] contains not a word about the validity of the methodology for studying the thinking of patients with schizophrenia).

The modern reprint of Rubinstein's work that Khersonsky refers to does not contain a word about the validity of the methods described [11] (I could not find the original 1972 monograph).

The reprint of Bleicher's work does address the issue of validity [29, p. 26]; moreover, it says that “validity in general” is an ill-posed concept, and that validity should be assessed relative to a specific task (a very sensible idea!).

However, this manual contains neither references to studies demonstrating the validity of the “Pictogram” even for particular purposes, nor descriptions of such studies. It does say that Khersonsky “in interpreting pictograms adopted criteria close to those used in the Rorschach test (the two techniques were used in parallel)” [29, p. 107]. And we already dealt with the Rorschach test a little earlier.

Finally, let's take Khersonsky's own guide to the clinical pathopsychological diagnosis of thinking and see what he himself says about validity and reliability: “The traditional characteristics of standardized tests, in particular the various types of reliability and validity, are not applicable to NIIM” [30, p. 44] (NIIM: non-standardized methods for the investigation of thinking). Instead of these characteristics he proposes the “range of the methodology”, the “formalizability of the answer”, and the “diagnostic value”.

We will not analyze here what these characteristics are or how close they come to the generally accepted concepts of validity and reliability; let us simply note the author's progress: in the 1980s he argued (without proper references) that the “Pictogram” is valid, at least for diagnosing schizophrenia, and after 2000 he says it cannot be valid in principle. Progress worthy of respect (no sarcasm at all: the ability to change one's mind and admit it is a rather rare thing in our academic community).

In general, Khersonsky is a great fellow who has done much that is useful for pathopsychology, and I will certainly get to his merits below. But that does not make the “Pictogram” valid and reliable.

Okay, the “Pictogram” is settled. Let's see what else we are offered: “Classification of objects”, “Exclusion of objects”, “Comparison and definition of concepts”, “Interpretation of proverbs, metaphors and phrases”, “Filling in missing words in a text”, and other methods [1, p. 35; 11, sect. 7].

Khersonsky assigns all these techniques to the NIIM class [30, pp. 42-43], and I agree with him. And for NIIMs there is no point even looking for evidence of validity and reliability. A literature review on the diagnosis of thinking [26] states explicitly that “the Russian-language techniques were created mainly in the 1950s-60s; they are effective in revealing disturbances of thinking, their stimulus material is easy to use, but they have not undergone the procedure of scientific verification of their psychometric properties.”

Indeed, what validity and reliability can we even speak of in the case of techniques most of which never underwent a standardization procedure at all, while those that did still permit subjectivity and a high degree of arbitrariness in interpreting the results?

Intelligence tests


In domestic psychodiagnostics, the work of K.M. Gurevich [31] stands out: written back in 1980, it has not lost its relevance to this day. While noting that modern versions of the intelligence tests (Stanford-Binet, Wechsler, etc.) are distinguished by high reliability, the author openly says that
The situation that has developed in intelligence testing cannot be considered satisfactory. After years of research the very concept of intelligence remains obscure and theoretically confused. There are no definitive explanations for the constantly reproduced finding of significant differences between samples differing in nationality, education, cultural and economic status. Some psychologists argue that these differences are caused by differences in intelligence itself between representatives of these groups. Others believe that the true reason lies not in differences of intelligence but in the nature of the tests and of testing. The testers themselves admit that “something is rotten in the state of Denmark.”

I will allow myself another long quotation from the same work:
In developing the concept of tests of thinking development, the system of criterion assessments (standardization, reliability, validity) will have to be revised. In particular, it is necessary to revisit the traditional view that the results of psychological testing of large samples should be distributed along the Gaussian curve, that is, normally. This view evidently has no serious grounds, and one cannot but agree with the criticism to which Hofmann subjects it. Without discussing the issue in full, note that a normal distribution arises when a large number of diverse factors act on a random variable and the contribution of each is equally small relative to their number. In intelligence testing a completely different picture emerges: alongside many small factors, the distribution is shaped by one powerful factor, the factor of culture. The distribution of test results will then depend on the proportions in which individuals with different degrees of familiarity with the culture reflected in the test are represented in the sample; since the composition of subjects cannot be predicted in advance, nothing can be said in advance about the shape of the distribution. It may happen that a normal distribution is obtained now and then, but that will be the exception, not the rule.

It is clear that a distribution differing from the normal one confronts the psychologist with a number of difficulties, the main one being that there are no grounds for using parametric statistical methods. One may have to abandon comparisons with criteria based on groupings by an immanent criterion, for example by standard deviation. Evidently it is necessary to switch to other methods of comparison, which, incidentally, seem both more adequate and more modern (see, for example, [Popham W.J., 1978]).

Equally unsatisfactory is the prevailing understanding of the reliability criterion, according to which a test is the better, the greater the coincidence between the first and second administration (test-retest). This criterion carries the idea of a metaphysical immutability of the content of the psyche and leaves no room for its development. A new understanding of testing proceeds from the fact that verbal-logical acquisitions imply the development of thinking. A high reliability coefficient on retesting should sooner be taken as a warning signal: either the test does not register changes in the psyche, or no changes actually occurred, which indicates a pause in development and cannot but alarm the psychologist.
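Both of Gurevich's quantitative points are easy to check numerically. A Python sketch with entirely invented numbers: first, a sample mixing two groups with different exposure to the test's culture is not normally distributed; second, the test-retest coefficient is blind to development.

```python
import random
random.seed(0)

# Point 1 (invented parameters): 70% of subjects are "at home" in the
# test's culture, 30% are not; within each group scores are normal,
# but the total sample is a mixture of two Gaussians.
scores = ([random.gauss(105, 10) for _ in range(7000)] +
          [random.gauss(80, 10) for _ in range(3000)])

def skewness(xs):
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    return sum((x - m) ** 3 for x in xs) / n / var ** 1.5

# A normal distribution has skewness 0; this mixture is clearly
# left-skewed, undermining sigma-based groupings and parametric tests.
print(round(skewness(scores), 2))

# Point 2: test-retest reliability is just the Pearson correlation
# between two administrations of the same test.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

first = [95, 102, 88, 110, 130, 74, 99, 105]   # invented scores
second = [x + 10 for x in first]               # everyone grew by 10

# r = 1.0 although every single score changed: the coefficient registers
# only the stability of the ranking, not the change itself, which is
# exactly Gurevich's complaint about retest reliability and development.
print(round(pearson(first, second), 3))        # prints 1.0
```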

There is another interesting fact: the last version of the Wechsler test adapted in Russia was released in 1992 [32], which raises the question of the Flynn effect (the steady rise of measured IQ scores over time). Indirect evidence that this effect is at work here can be found, for example, in the work of L. Baranskaya, who concludes that the normative sample needs to be renewed.
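For a sense of scale, a back-of-the-envelope estimate (my assumptions, not the article's: the commonly cited Flynn drift of roughly 3 IQ points per decade, and a present day of about 2019):

```python
# Rough arithmetic: how much score inflation would norms frozen in 1992
# accumulate by roughly the time of writing? (All figures approximate.)
norm_year, exam_year = 1992, 2019
drift_per_year = 3 / 10        # ~3 IQ points per decade (Flynn effect)
inflation = (exam_year - norm_year) * drift_per_year
print(round(inflation, 1))     # ~8 points, i.e. over half an SD of 15
```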

Fans of the Eysenck test should get acquainted with the work of V.A. Vasiliev [34], in which the author demonstrates gross errors in this test. I rechecked the example he cites that struck me as the most blatant: indeed, in the answer key to question 8, which reads “Underline the odd word out: Spain, Denmark, Germany, France, Italy, Finland” [35, p. 146], the correct answer is given as Denmark, with a note that it is the only kingdom among the listed countries [35, p. 185]. But Spain is also a kingdom [36]. And these people are going to measure our IQ!

Testing procedure


There will be no proofs in this section, since I do not know where to look for them: I have never come across research on how pathopsychological testing is actually carried out in domestic hospitals. But I have had the chance to work in one of them and to talk with colleagues and patients from other institutions.

So: even these imperfect methods are often not administered, or not administered in full, for the banal lack of time (when you are given only an hour per patient, you will not get through the Wechsler or the SMIL). And if you see a long list of techniques in a pathopsychological report, ask yourself whether they could really all have been administered.

Yes, it happens that the diagnostics are carried out by a sufficiently meticulous specialist (with plenty of time) who really does run the subject through all these tests. But it also happens that the techniques are administered incompletely (or not at all), and the examiner describes their results from his subjective idea of how the patient would have performed.

The main problems of domestic pathopsychological diagnostics


Of course, I may be overly subjective in my assessments here, and for many statements I certainly cannot provide references to authoritative sources, but I would nevertheless like to draw some conclusions from the foregoing.

It looks as if our pathopsychology, embarrassed to admit the subjective and presumptive nature of its conclusions, is trying with all its might to put on an appearance of scientific rigor. And the impression this leaves is quite repulsive.

Yes, most of our methods either let us evaluate the patient subjectively, or (which is no better at all) hide that subjectivity behind plausible-looking numbers incomprehensible to the uninitiated.

Whom are we deceiving? Not only patients and fellow psychiatrists. No, we systematically, over decades, from generation to generation, deceive ourselves. A beginning specialist working with the same SMIL he was surely told about at university will read a manual stating that it is valid and reliable. And most likely he will not dig any further, but will sincerely believe the author, a distinguished and respected specialist favored by fame and honored with awards.

Worse: years later, never having found the time to check the original sources, he will carry this opinion further, to the next generations of psychologists. And now, himself an experienced specialist (experienced, that is, in the systematic repetition of the same clinical errors), he lectures from the department about the validity, reliability, and other virtues of these techniques. Isn't that how mythologies are formed?

And new generations of young specialists graduate, trained on his lecture notes, and not all of them go off to work at McDonald's; some end up in psychiatric hospitals and clinics and, without realizing it (and this is the frightening part!), project their own unprocessed complexes, in the form of such attractive, such serious- and significant-looking numbers, onto their patients.

And through them lives get broken: in ITU evaluations, professional selection, forensic examinations, and at the hands of unscrupulous psychiatrists who do not want to think and who rely on pathopsychological data uncritically. Truly, the root “patho-” here means exactly what the Greeks put into it: suffering.

Psychological diagnostics that brings suffering is what happens when we forget that all our judgments are no more than opinions; when, putting on the white coat, we begin to believe in our own infallibility; when we divide the world into “us” and “them”: specialists, who are always right and normal (whatever that means), and patients, the crazy ones, in whom over time we risk unlearning how to see people.

And that becomes the beginning of the end, not only for “them” but also for “us”. For there is no “us” and “them” here, and everyone working in the mental health industry must remember this.

And while psychiatry, especially Western psychiatry, acknowledges the presence and inevitability of a subjective component in the diagnostic process [37] and tries consistently to rid itself of it, acknowledges the limitations (and in places the weaknesses) of attempts to objectify diagnosis [38], and fights the eternal subjectivity of psychiatric diagnosis as best it can (not without success, by the way), our pathopsychology often simply denies that the problem exists. This is sad.

What are we doing, after all? On the basis of poorly standardized methods we produce far-reaching conclusions (the most cautious among us say “assumptions”) about the patient. And our fellow psychiatrists do the same. Yes, one can talk at length about SCIDs and other wonderful things, but who among us, or among them, uses them in practice?

You will say this is merely the level of our provincial psychiatric care? I do not believe it: I have had patients from the capitals, and I have read their case histories. It is all the same, even in the leading clinics and institutes. No, of course I cannot speak for the whole industry, and in this section I am more emotional than objective, but such is my impression as an insider.

What to do?


Of course, criticism that does not carry within it at least some germ of a way to solve the problems raised is worthless. Realizing this simple fact, I will try to offer some thoughts on what can be done. To my great regret, this section will be much shorter than all the previous ones, because I cannot solve this problem alone. But I want to call on my colleagues, if not to solve it, then at least to acknowledge it.

From the academic community


Here, of course, one should speak of the need to [re]standardize the tools in use, to [re]check the validity and reliability data for at least those tools to which these concepts apply, to [re]adapt the best foreign techniques, and so on.

But much more important is the rejection of the false illusion of objectivity created by many tools widely used in clinical practice. No, seriously: if you look closely, all the books, monographs, textbooks, and other materials on pathopsychological diagnostics refer to a limited set (fewer than a dozen) of articles written in the last century.

Most of these works are not available to the broad community of specialists. It is possible that university libraries hold some limited-edition work by the same Rubinstein in which she convincingly proves the validity and reliability of the pathopsychological methods; maybe Sobchik published somewhere the actual data on how, on whom, and with what tools she rechecked her SMIL; maybe in secret bunkers there lies a version of the Eysenck test free of gross factual errors...

And maybe dinosaurs are not extinct, poltergeists live in the attic, and behind the mirror is a portal to the land of pink ponies. Sure.

But what is the point of any of this if the ordinary clinical psychologist or psychiatrist has no access to it, if he is fed outdated, or even outright incorrect, data?

I have no answer, only an aching sadness, longing, and hopelessness in my soul.

From the specialist


At the level of the individual specialist, the most important thing, in my opinion, is recognition of the problem. Yes, we assign our symptom complexes based, ultimately, on subjective and arbitrary conclusions. This is true. Let's not forget it.

Let's use the best of what is available to us. Yes, Rubinstein's methods are presented in a completely unusable form: we are told that “the second criterion on which the assessment of performance on this task is based is the criterion of adequacy of associations” [11, p. 143], without any real explanation of which images count as “adequate”; but we have long had Khersonsky's monograph, which not only explains the essence of such concepts as “adequacy” and “standardness” but also provides a catalog of images pre-classified according to the proposed criteria [30, Appendix 2].

And although how these data were obtained remains unclear to me personally (I could not find a detailed description of the process), the attempts to standardize the same “Pictogram” proposed in that work are a huge step forward. If we all use a unified approach to scoring poorly formalized methods, if we use a single terminology, that alone will already solve some of the problems.

Let's use techniques with good validity and reliability indicators, for example Raven's Standard Progressive Matrices [39, pp. 34-48], instead of techniques without them, at least wherever possible.

Let's abandon the use of obviously incorrect and outdated techniques.

Let's not be afraid to take responsibility for our subjective impressions, and where our diagnosis has been formed primarily on their basis (and what is a free-form clinical conversation if not a subjective impression?), let's say so openly.

Let's use SCIDs (SCID: Structured Clinical Interview for DSM) carefully, understanding that most of them have not been validated on a Russian-speaking audience, while recognizing their great practical value.

Let's admit that the subjective component occupies a very significant place in our work and that the accuracy of our diagnostics depends on who we ourselves are; and, realizing this, let's improve our professional knowledge, including in psychiatry and psychopharmacology.

And let's stop hiding behind numbers. We do it badly anyway.

From the patient


Patients, current and prospective, I want to warn you: yes, this is the mess we have. And to say that in practice it matters far less which methods are used to diagnose you than who applies them.

I can well imagine a good pathopsychologist or psychiatrist producing a more reliable diagnosis with a deck of playing cards than a bad one armed with the most advanced pathopsychological methods available. Look not for the method, but for the person.

Foreign experience


Unfortunately, I have no personal experience of the psychiatric care systems of developed countries, so my knowledge here is purely book knowledge. If there are readers familiar with the real state of affairs, I will be glad to hear their comments, clarifications, and refutations.

As far as I know, the West has no such rigid division into neuro- and pathopsychology as we do. For example, the well-known psychiatry manual by Kaplan and Sadock never uses the term “pathopsychology” [40], nor does the Lezak textbook [8]. A search for the term in PubMed yields 34 results, a significant share of which are links to abstracts from domestic journals.

So I suggest not fussing over what the diagnostic methods are called or which discipline they belong to, and simply giving a brief overview of them.

It is worth noting that the same Kaplan and Sadock manual pays attention to matching the qualification of the specialist conducting the assessment to the instrument used; it stresses that the less structured the technique, the more qualified its administrator should be [40, p. 2726].

First of all, it should be noted that Western colleagues love the various clinical “psychiatric” rating scales: SANS (Scale for the Assessment of Negative Symptoms), SAPS (Scale for the Assessment of Positive Symptoms), PANSS (Positive and Negative Syndrome Scale), BDI-II (Beck Depression Inventory), and so on.

Western colleagues use such familiar methods for diagnosing thinking as interpretation of proverbs [8, par. 27.13] (note that they have a formalized and standardized version of this test [8, par. 27.17]), generalization of concepts [8, par. 27.29], establishing logical relations between concepts (in the spirit of Luria's work) [8, par. 27.43], the Stanford-Binet scale [8, par. 27.44], Raven's Coloured Progressive Matrices [8, par. 27.106], Vygotsky's object-grouping test [8, par. 27.136], and some others.

Of course, they also have other, less familiar tools: the Halstead Category Test [8, par. 27.51]; the Brixton Spatial Anticipation Test [8, par. 27.74]; the “Twenty Questions” task, in which the examiner thinks of a word and the subject may ask up to 20 yes/no questions to guess it [8, par. 27.78], and its more formalized version, Identification of Common Objects [8, par. 27.82]; a test of inferring a word's meaning from context (Word Context Test, D-KEFS) [8, par. 27.228]; and so on.

An interesting idea is the “re-sorting” of objects: the patient first groups the objects and is then asked to group them again by a different attribute [8, par. 27.138] (we usually limit ourselves to a single grouping, although the same Khersonsky recommends something similar).

They use projective tests too [8, par. 27.152], but with one important difference: their guidelines for such tests pay much more attention to the importance of placing the results in the overall context of the study, the inadmissibility of categorical conclusions from the data obtained, the sequential construction and refinement of a hypothetical model of the patient's psyche, and other “ideological” matters.

Personally, I would find it most interesting to work with a tool like the MATRICS Consensus Cognitive Battery (MCCB) [8, par. 29.295], a battery for assessing cognitive functions in schizophrenia, especially since it has a Russian-language version [42].

Another tool very interesting to me is the SWAP-200 (Shedler-Westen Assessment Procedure), an excellent instrument for personality assessment whose results are keyed not only to the DSM but also to concepts understandable to clinicians regardless of their theoretical orientation [4].

Overall, I have the impression that, with the exception of a few very narrow areas (such as Khersonsky's standardization of the “Pictogram”), we are 30 years behind them, if not more. They have better tools than those available to us, they re-standardize much more often, many excellent instruments like the Wechsler are available to them more fully (simply because not everything has been translated here), and on top of that they have superb tools that are practically unknown to us.

It is sad. Forgive us, Bluma Vulfovna; we blew it...

Why are we doing this at all?


This section, too, will be extremely subjective, because I do not know what objective data could be offered in answer to the question in its title.

Yes, our tools are imperfect. But in the mental health industry in general everything is rather imperfect. My personal opinion is that psychiatry (together with related disciplines such as pathopsychology) is a proto-science, something like alchemy or natural philosophy, with the sole difference that its formative period fell in an era when the scientific method had already taken shape and come into general use.

Such common and, I would say, obligatory diagnostic methods as the clinical conversation (the ordinary kind, not the SCID kind), observation, and history-taking from relatives can also hardly boast high validity and reliability, for obvious reasons. But no colleague in his right mind would claim they are useless.

The same goes for pathopsychological methods and patho-/neuropsychology in general. If you do not overreach, if you remember that the techniques can “lie”, if you build the examination so that the same characteristics of the patient's psyche are probed by different methods, and if you do not forget the same clinical conversation and observation, then the data of a pathopsychological study can be useful to both the doctor and the patient.

When does it make sense to conduct pathopsychological diagnostics, so that it is not a mere waste of time and resources?

Personally, I see several scenarios. First, when some regulatory documents prescribe it. Despite the senselessness of such regulations in a number of cases, the procedure often cannot be avoided. I mean, first of all, the various expert examinations: disability (ITU), military, forensic, and so on.

Sometimes during outpatient treatment the doctor orders a pathopsychological examination. Sometimes the patient completes it and then wants the results double-checked by another specialist. There is nothing much to comment on here: there are external requirements, there is a need to comply with them, and so welcome to the diagnostic room.

The second scenario for a meaningful referral to a pathopsychologist is differential diagnosis of various kinds. Sometimes it is hard to figure out with “psychiatric” methods alone what is happening with the patient: for example, when you need to distinguish schizophrenia from an organic schizophrenia-like disorder.

I have no hard proof of the usefulness of pathopsychological diagnostics in this case (one can hardly keep citing Rubinstein!), but all my experience, and that of fellow psychiatrists, suggests that it can help here. Even in these invalid and unreliable techniques an experienced clinician will see quite obvious differences and will be able to tell one condition from the other.

Yes, much of it will be subjective. But in some cases this is the only way to do anything at all, and the clinical result will be better than if nothing had been done.

The third option is the fine-tuned selection of psychopharmacotherapy and the assessment of treatment quality. There is a huge difference between the tasks "cure the patient" and "solve the person's problems". The huge number of articles confirming the effectiveness of various drugs usually shows that they solve the first task. But I have yet to see a truly high-functioning and happy schizophrenic on first-generation antipsychotics, for example.

How can a pathopsychologist help? No, he has no right to prescribe medication. But together with a psychiatrist he can work out what to change in the regimen so that, for example, the person can again solve intellectual problems effectively. Pathopsychological diagnostics makes it possible to establish exactly what problems the patient has with thinking, memory and attention, and on the basis of this information the psychiatrist can select drugs more precisely, or assess the effect of those already chosen.

The fourth option is when a person simply "likes taking tests". And indeed, why not.

Some methods, while neither valid nor reliable, help to establish contact with the patient: the same "Drawing of a Non-Existent Animal" or the TAT is perfect for this purpose. And then, within the framework of the established contact, you can collect a lot of valuable information through clinical conversation.

The main thing is to remember that we build our models on data that are, with high probability, inaccurate, and to apply the principles, familiar to every engineer, of building complex systems out of known-unreliable elements: redundancy, cross-checks, reconciliation, and so on.
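The same redundancy logic can be made concrete with a toy model: if several independent checks are each individually unreliable, a majority vote across them is noticeably more reliable than any single one. This is only an illustration, under the strong (and clinically unrealistic) assumption that the methods err independently:

```python
from itertools import product

def majority_vote_accuracy(p: float, n: int) -> float:
    """Probability that a majority of n independent binary 'methods',
    each correct with probability p, gives the right answer (n odd)."""
    total = 0.0
    for outcome in product([True, False], repeat=n):
        correct = sum(outcome)  # how many methods answered correctly
        if correct > n // 2:
            total += (p ** correct) * ((1 - p) ** (n - correct))
    return total

single = 0.70  # one mediocre method
combined = majority_vote_accuracy(single, 3)
print(f"one method: {single:.3f}, three cross-checked: {combined:.3f}")
# -> one method: 0.700, three cross-checked: 0.784
```

Real diagnostic methods share sources of error (the same patient state, the same examiner), so the gain in practice is smaller, but the direction of the effect is the same: duplication and cross-checking buy reliability that no single technique has.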

Yes, there is a lot of subjectivity in this. But a good specialist can cope with it and squeeze something useful for the patient out of all of it.

UPD 04/12/19: FAQ


First of all, I want to thank the Habr audience for the questions asked and an interesting discussion. I have decided to append this section to the main article and collect the answers to some of the questions in it, to save readers a long search through the comments.

Why is it on Habré?


Habr was and, as far as I know, remains a self-regulating community, which means that material the audience does not need quickly sinks into the minuses and takes the author's karma with it. Empirically, that did not happen here (see the article's rating), which means someone needs this material.

Moreover, the number of comments (even excluding those that ask why I published it here) is quite large, which also hints at a certain interest from the audience. The UFO did not remove the article either, so I believe it has every right to hang here for a while.

Why Habr, and not any portal for psychologists?


For several reasons. Firstly, because the first article, the one claiming that our diagnostics were in quite good shape, was published here, and I believe I should publish the retraction on the same resource, simply out of respect for its readers.

Secondly, I would like to bring information about the real state of affairs in the industry to the widest possible circle of readers, so that people seeking pathopsychological diagnostics can make an informed decision about whether they really need it, with all its shortcomings in mind.

Thirdly, I want my colleagues to be confronted as often as possible with their clients' and patients' questions about how justified the conclusions drawn from the tools they use really are. I have a naive hope that this will push them at least a little toward greater scientific rigor in their work.

Fourthly, I believe the discussion here will be more substantive than on "psychological" resources (a value judgment).

Are you afraid of being criticized on a professional resource, and is that why you came to a portal for IT specialists, where nobody understands the subject?


By no means. In fairly short order I will try to publish this text on all the near-psychological sites available to me. I simply do not want to post it everywhere at once, because I might not cope with the flow of comments.

Moreover, I will try to replicate this post on resources not directly related to psychology as well, simply to bring the ideas stated here to the widest possible audience.

As for possible criticism from the professional community, I will be very glad to receive it in a correct form, with proofs and/or rationale. Comments along the lines of "all your psychology is garbage", "I'm a good specialist, you're lying" and "everybody already knows about this" do not interest me much, owing to their triviality.

Why are you airing dirty laundry in public? This should be discussed in a narrow circle of specialists!


Because most of these discussions among specialists lead nowhere: the information never leaves that circle, the general audience does not learn about the problems and keeps bringing us money, believing that we use relevant, valid and reliable tools, which is not the case. I believe my clients have the right to know about the limitations and problems of my work and to decide consciously whether they need this service at all.

All your psychology is garbage, do you know that? Now psychiatry / psychopharmacology / animal husbandry, that's the real deal!


Yes, psychology has plenty of problems, and we try not to hide them. hdablin recently wrote a rather emotional post (careful: obscene language!) on psychological counseling (the link goes to Google Docs so that it is not counted as advertising).

Why do you waste time reinventing the wheel? Everything has long been said...


This question was prompted by a comment from Empatolog, for which I am very grateful. As reading material she recommends several articles by N. Baturin [44, 45, 46, 47], as well as his other works on CyberLeninka. Well, I will try to answer in detail. I agree with many theses of the first proposed article [44]. Indeed, how can one disagree with statements such as "psychodiagnostics, in the opinion of many authoritative domestic psychologists, is going through a serious crisis", or with the conclusion that "many of the problems of psychodiagnostics are systemic in nature and must be solved by psychology as a whole so as not to lose its title of a science". The author speaks of "the almost complete absence in Russia of professional developers of psychodiagnostic methods". I would add that those who do exist rarely provide any convincing evidence of the validity and reliability of their developments (see the section on SMIL).

The second large-scale problem the author names is "the very small number of domestic psychodiagnostic methods, methods that could compete on an equal footing with well-known modern foreign ones".

While agreeing with the content of this thesis, I do not consider the fact a problem per se: for me, as a practicing psychologist, it makes no difference whether a valid, reliable and properly adapted method was developed in the Russian Federation or abroad. It seems to me that in this section the author drifts into politics, a topic that not only holds no personal interest for me but also has no direct bearing on pathopsychological diagnostics.

Further, the author notes that the foreign methods used in domestic practice are "not always well translated, and sometimes it is not known by whom". This agrees excellently with what this article says, for example, about the adaptation of the Wechsler test. So far our opinions on the main points coincide. The author then draws a genuinely wonderful (no sarcasm) conclusion: "In principle, it is high time for us to abandon these techniques. What they measure is unknown, and besides, they are long outdated." Wonderful! I fully support it.

Moving on. The author says that "apart from a few tests, we know nothing about many others, let alone about the latest developments". And again this coincides perfectly with what I wanted to convey in my article.

There is just one small difference: far more people read Habr than the "Bulletin of the South Ural State University", which means that posting the article here is justified if only because a greater number of potential clients and patients will learn about the problem.

As the third cause of the crisis the author names "the low psychodiagnostic and especially psychometric culture of our psychologist test-users". So much for the claim that "everyone in the professional community already knows everything about these problems". Not everyone, and not everything.

The fourth cause of the crisis, according to the author, is "the small number of high-quality domestic textbooks on psychodiagnostics". The lead picture of this post illustrates that thesis perfectly, and I fully agree with it. I have only one remark: these "quality" textbooks do not have to be domestic: a translation of Lezak [8] would solve many problems in the training of psychologist-diagnosticians. But apparently it is easier for every interested specialist to learn the language on their own than to organize the translation of current textbooks, let alone their development in the Russian Federation.

The author notes that "the content of the textbooks repeats one another; the material offered for study remains somewhere at the level of the 1970s-80s" (the author gives no references, but in my review I showed that we may well be talking about even earlier times).

Thus, with the exception of certain "political aspects" and nuances insignificant in the context of the topic at hand, I agree with the author on the whole. But let us look at the details again. The author singles out the textbook by L.F. Burlachuk [48], arguing that it is "a full-fledged scientific monograph of good quality". What does the author of this work offer us?

On page 79 of this "good quality monograph" the author speaks approvingly of projective techniques. We have already considered projective tests above, and I do not want to repeat myself. What does the author offer us as tools for studying thinking? Nothing. There are indeed many commendable points in the work (the fact that not a word is said about the SMIL test, for example, commands respect), but they do not concern the narrow topic of pathopsychological diagnostics: the work, call it a "textbook" or a "monograph", does not give the practicing pathopsychologist a valid and reliable toolkit for conducting pathopsychological diagnostics.

The second proposed article [45] is not relevant to the subject under discussion, since it deals with the development of tests rather than their use in psychodiagnostic practice. But in it the author openly states that "only 25% of domestic methods have at least a mention of checking validity, reliability and standardization".

The third article offered for consideration [46], according to its author, "highlights the signs of an emerging recovery from the crisis". Let us try to look at these signs and find their manifestations in the real work of practitioners.

Note, for example, the fact that, according to the author, "only 7% of the methods have been tested for reliability and validity". Some recovery from the crisis, I must say. Further, the author points out that "the problem remains unresolved <...> dozens of outdated foreign tests continue to circulate in Russia".

Among the arguments about the negative influence that the representative offices of Western publishers and diagnostics companies exert on the domestic culture and practice of psychological testing, in the spirit of the 1930s, the author cites concrete plans: "in the near future a special information site will be created, www.info.psytest.ru, which will contain basic information about domestic and adapted methods". Let me remind you that these were still only plans at the time the article was published, in 2010. Let's see what came of them.

When you try to follow the "View the Compendium of Methods" link, you are greeted with excellent, highly informative (for a practicing psychologist) information about a DBMS error: "Microsoft JET Database Engine error '80004005'". Well, that settles it: with knowledge like that, the problems of validity and reliability are surely solved.

And no, I tried several times to open the address from different devices and IP addresses: the result is the same. The article claims that "the search for the necessary information will be made much easier". Unfortunately, the author's forecast did not come true.

This is followed by arguments about the need to certify everything and everyone: tests, psychologists, and so on. Personally, this undertaking fills me with deep skepticism: it is unlikely to produce anything beyond replenishing the budgets of the certifying organizations (though this is just my opinion, and I would very much like to be wrong here).

The author speaks out against the open publication of tests, which, in my opinion, is a very bad tendency: in essence it is security through obscurity, and that cannot lead to anything good.

On the whole, the article is more "political" than "pathopsychological" in character, and it certainly contains no indication that the problems identified in my post had been overcome, at least as of the moment of its publication.

The last proposed work [47] is devoted to the "innovative potential of the organization" and has no bearing on the problems of pathopsychological diagnostics in the Russian Federation.

However, the author of the comment that gave rise to this section invites us to look through Baturin's other works available on CyberLeninka. Let's do that.

The first to attract interest is the article "On the second volume of the Yearbook of professional reviews of psychodiagnostic methods" [49], which reports on reviews of fifteen methods. Of these, the following are relevant to the pathopsychological diagnostics of adults (and that is what my post is about):

1. J. Mayer's Test of Emotional Intelligence: "gives the impression of an original and promising, but as yet raw, instrument in need of improvement";

2. The Standardized Test of Intellectual Potential: "the reviewers give a number of recommendations for improving the psychometric characteristics of the methodology: the need to strengthen the data on criterion validity, to add data on discriminativeness, construct validity, the relationship of the individual components of the test and their contribution to the overall indicator. In addition, it is necessary to standardize the sample, substantiate the test interpretations and provide a detailed description of the construct underlying the test."

Two methods on topic, neither of which can be used in real clinical practice.

In the description of the compendium of psychodiagnostic methods of Russia [50], the author says that "in the majority of publications supplied with a description of the methods there are no data on their psychometric testing, nor any information even about attempts to check the methods for validity and reliability".

I could not find any of his other works related to pathopsychological diagnostics.

Conclusion: no, of course I am not the first to speak about the problems of domestic pathopsychological diagnostics, and I make no claim to be. I simply want to convey to the widest possible circle of interested parties (including current and potential patients) that things in this area are still quite bad, even now, in 2019.

Literature


1. Fundamentals of Pathopsychology: a textbook / ed. Prof. S.L. Solovyov. Moscow: World of Science, 2018. ISBN 978-5-9500229-1-3

2. McCrae, R. R., & Costa, P. T. (1989). Reinterpreting the Myers-Briggs Type Indicator from the perspective of the five-factor model of personality. Journal of Personality, 57, 17-40. doi: 10.1111/j.1467-6494.1989.tb00759.x

3. McWilliams, N. (2012). Beyond traits: Personality as intersubjective themes. Journal of Personality Assessment, 94(6), 563-570. doi: 10.1080/00223891.2012.711790

4. Shedler, J., & Westen, D. (2007). The Shedler-Westen Assessment Procedure (SWAP): Making personality diagnosis clinically meaningful. Journal of Personality Assessment, 89(1), 41-55. doi: 10.1080/00223890701357092

5. Mullins-Sweatt, S. N., & Widiger, T. A. (2007). The Shedler-Westen Assessment Procedure from the perspective of general personality structure. Journal of Abnormal Psychology, 116(3), 618-623. doi: 10.1037/0021-843x.116.3.618

6. Shedler, J. (2002). A new language for psychoanalytic diagnosis. Journal of the American Psychoanalytic Association, 50(2), 429-456. doi: 10.1177/00030651020500022201

7. Goodman, D. W. Diagnosis and Treatment of ADHD: Focus on the Evidence. Presentation at the NEI Congress, 2015.

8. Lezak, Muriel D. Neuropsychological assessment. Oxford New York: Oxford University Press, 2012.

9. Wenger A.L. Psychological Drawing Tests: An Illustrated Manual. Moscow: Vlados-Press, 2003. 160 p., ill.

10. Sobchik L.N. "The Standardized Multifactor Method of Personality Research"

11. Rubinstein S.Ya. Experimental Methods of Pathopsychology. Moscow: EKSMO-Press, 1999. 448 p. (World of Psychology series).

12. Taherdoost, H. (2016). Validity and reliability of the research instrument; how to test the validation of a questionnaire/survey in a research. SSRN Electronic Journal. doi: 10.2139/ssrn.3205040

13. Lilienfeld, S. O., Wood, J. M., & Garb, H. N. (2000). The scientific status of projective techniques. Psychological Science in the Public Interest, 1(2), 27-66. doi: 10.1111/1529-1006.002

14. Mihura, J. L., Meyer, G. J., Dumitrascu, N., & Bombel, G. (2013). The validity of individual Rorschach variables: Systematic reviews and meta-analyses of the comprehensive system. Psychological Bulletin, 139(3), 548-605. doi: 10.1037/a0029406

15. Scherbatykh Yu.V., & Ermolenko P.I. (2016). Assessment of the validity of the projective test "Drawing of a non-existent animal." Journal of Pedagogy and Psychology of Southern Siberia, (4), 118-125.

16. Lubovsky V. I. Methodological issues in the diagnosis of mental developmental disorders // Interuniversity collection of scientific articles: “Actual problems of psychodiagnostics of persons with disabilities”. M .: 2011. P. 4–7.

17. Imuta K, Scarf D, Pharo H, Hayne H (2013) Projective Measure of Intelligence. PLOS ONE 8 (3): e58991.

18. Simon D. Williams, Harriet MacMillan, Child Abuse & Neglect, Volume 29, Issue 6, 2005, Pages 701-713, ISSN 0145-2134

19. Ter Laak, J., De Goede, M., Aleva, A., & Van Rijswijk, P. (2005). The Journal of Genetic Psychology, 166(1), 77-93. doi: 10.3200/GNTP.166.1.77-93

20. Chollat, C., Joly, A., Houivet, E., Bénichou, J., & Marret, S. (2019). School-age human-body drawing test to detect behavioral and cognitive disorders. Archives de Pédiatrie. doi: 10.1016 / j.arcped.2019.02.015

21. Holmes CB, Wurtz PJ, Waln RF, Dungan DS, Joseph CA. Relationship between the Luscher Color Test and the MMPI. J Clin Psychol. 1984 Jan;40(1):126-8. PubMed PMID: 6746918.

22. Sobchik L.N. Psychodiagnostics in Medicine. Moscow: BORGES, 2007. 416 p. ISBN 978-5-91482-001-2

23. Evolution of Modern Science: collection of articles of the International scientific-practical conference (February 18, 2017, Ufa). In 2 parts, part 1. Ufa: MCII Omega Science, 2017. 291 p. ISBN 978-5-906924-48-3

24. Zaitsev V.P. A variant of the psychological test Mini-Mult // Psychological Journal. 1981. No. 3. pp. 118-123.

25. Khudyakova, Yu. Yu. (2014). The problem of the validity of standardized questionnaires in the study of individual psychological characteristics of patients with schizophrenia. Bulletin of the Kostroma State University. Series: Pedagogy. Psychology. Sociokinetics, 20 (1), 99-101.

26. Demyanova, L.V. (2014). Methodological problems in assessing impaired thinking in schizophrenia (literature review). Journal of Grodno State Medical University, (4 (48)), 16-20.

27. Khersonsky B.G. The Pictogram Method in the Psychodiagnostics of Mental Diseases. Kiev: Zdorovya, 1988. 104 p., ill. ISBN 5-311-00071-6

28. Longinova, S.V. The study of thinking of patients with schizophrenia using the pictogram method / S.V. Longinova. - Moscow // Pathopsychology: anthology / comp. N.L. Belopolskaya. - Moscow: URAO Publishing House, 1998. - p. 96-108.

29. Bleikher V.M., Kruk I.V., Bokov S.N. Clinical Pathopsychology: A Guide for Physicians and Clinical Psychologists. Moscow: Moscow Psychological and Social Institute; Voronezh: NPO MODEK, 2002. 512 p. (Library of the Psychologist series).

30. Khersonsky B.G. Clinical Psychodiagnostics of Thinking. Moscow: Smysl, 2014.

31. Gurevich K.M. Intelligence tests in psychology // Voprosy Psikhologii. 1980. No. 2. pp. 53-64.

32. Vladimirova Svetlana Gennadievna (2016). The scale of David Wechsler: the present and the future in solving the problem of measuring intelligence. Yaroslavl Pedagogical Gazette, (2), 122-126.

33. Baranskaya L.T. Peculiarities of psychodiagnostics of intelligence using the D. Wechsler scale in various age groups of secondary school students / L.T. Baranskaya, O.S. Chalikova // Psychological Bulletin of the Ural State University. Issue 2. - Ekaterinburg: Publishing house "Bank of cultural information", 2001. - p. 92-98.

34. V.A. Vasiliev. The best IQ test. www.psychologos.ru/articles/view/samyy-luchshiy-IQ-test

35. Eysenck H. Classical IQ Tests / Hans Eysenck; trans. from English by K. Savelyev. Moscow: Eksmo, 2011. 192 p.

36. www.britannica.com/place/Spain

37. Pies R. (2007). How "objective" are psychiatric diagnoses? (Guess again). Psychiatry (Edgmont), 4(10), 18-22.

38. Yakeley, J., Hale, R., Johnston, J., Kirtchuk, G., & Shoenberg, P. (2014). Psychiatry, subjectivity and emotion - deepening the medical model. The Psychiatric Bulletin, 38 (3), 97-101. doi: 10.1192 / pb.bp.113.045260

39. Raven J., Raven J.C., Court J.H. Manual for Raven's Progressive Matrices and Vocabulary Scales. Section 3: The Standard Progressive Matrices (including the Parallel and Plus versions). Trans. from English. Moscow: Kogito-Center, 2012. 144 p. ISBN 978-5-89353-355-2

40. Sadock, Benjamin J., Virginia A. Sadock, and Pedro Ruiz. Kaplan & Sadock's comprehensive textbook of psychiatry. Philadelphia: Wolters Kluwer, 2017. Print.

41. www.columbiapsychiatry.org/research/research-labs/diagnostic-and-assessment-lab/structured-clinical-interview-dsm-disorders-11

42. www.parinc.com/Products/Pkey/225

43. Kolesnik N.T., Orlova E.A. Pathopsychological Diagnostics: a textbook for academic bachelor's programs / ed. G.I. Efremova. Moscow: Yurait, 2017. 240 p. (Bachelor. Academic Course. Module series). ISBN 978-5-9916-9643-2

44. Baturin, N. A. (2008). Modern psychodiagnostics of Russia. Bulletin of the South Ural State University. Series: Psychology, (32 (132)), 4-9.

45. Baturin, N. A., & Melnikova, N. N. (2009). Test development technology: part I. Bulletin of the South Ural State University. Series: Psychology, (30 (163)), 4-14.

46. Baturin, N. A. (2010). Modern psychodiagnostics of Russia: overcoming the crisis and solving new problems. Bulletin of the South Ural State University. Series: Psychology, (40 (216)), 4-12.

47. Baturin, N. A., Kim, T. D., & Naumenko, A. S. (2011). Psychological aspects of the organization's innovative potential: determining factors and diagnostic tools. Bulletin of the South Ural State University. Series: Psychology, (18 (235)), 38-47.

48. Burlachuk L.F. Psychodiagnostics: Textbook for universities. St. Petersburg: Piter, 2006. 351 p., ill. (Textbook of the New Century series). ISBN 5-94723-045-3

49. Baturin, N. A., & Yusupova, Yu. L. (2014). About the second volume of the Yearbook of professional reviews and psychodiagnostic methods reviews. Bulletin of the South Ural State University. Series: Psychology, 7 (3), 116-121.

50. Baturin, N. A., & Pichugova, A. V. (2008). Compendium of psychodiagnostic methods of Russia: description and primary analysis. Bulletin of the South Ural State University. Series: Psychology, (31 (131)), 63-68.

Source: https://habr.com/ru/post/447056/

