When conducting a study, great attention is paid to data collection, so once respondents' answers have been gathered they tend to be accepted as correct a priori, and the report based on them is assumed to be objective. In practice, however, a closer look at individual answers often reveals that respondents clearly misunderstood the wording of the questions or the instructions that accompany them.
1. Misunderstanding of professional terms or particular words. When drafting a survey, consider which groups of respondents it is intended for: their age and status, whether they live in large cities or remote villages, and so on. Avoid specialised terms and slang: not every respondent will understand them, or understand them in the same way. Such a misunderstanding often does not make the respondent abandon the survey (which would, of course, be undesirable); instead they answer at random, which is even worse because it distorts the data.
2. Misunderstanding of the question. Many researchers are convinced that every respondent has an unambiguous, clearly formed opinion on every question. This is not true. Sometimes survey participants find a question hard to answer because they have never thought about the subject at all, or never from this angle. This difficulty may cause a respondent to abandon the survey or to give a completely uninformative answer. Help participants by formulating the question more precisely and offering a variety of answer options.
3. Misunderstanding of the instructions for the survey or for individual questions. Like the rest of the questionnaire, the wording of instructions should be adapted to all groups of prospective respondents. Try to avoid a large number of questions that require marking a specific number of answers ("Mark the three most important..."), or at least require the same number of answers in all such questions. It is also worth reducing the number of complex question types (matrices, ranking, etc.) by replacing them with simpler ones. If you expect respondents to answer the questionnaire from a mobile phone, simplify the survey structure even further.
4. Misunderstanding of the rating scale. When you use a rating scale in a questionnaire, explain its meaning to respondents even if it seems obvious to you. For example, the usual scale from 1 to 5 is generally understood by analogy with school grades, yet some respondents mark "1" because they read it as first place. In verbal scales, it is better to avoid subjective wording: the "never — rarely — sometimes — often" scale is highly subjective, so offer specific values instead ("once a month", and so on).
5. Uniformly positive and average ratings. Respondents' tendency toward generally positive assessments often gets in the way, for example, in surveys of software users and similar studies. If a user is satisfied with your product overall, it is hard for them to break it into parts and separately evaluate the personal account, a new feature, and so on. Most likely, they will give a high score everywhere. The survey report will look very positive, but the results will not allow you to assess the situation realistically.
Average ratings cause similar problems, for example, in 360-degree personnel evaluations. Employees tend to give roughly the same score for all competencies: if their attitude toward a colleague is positive, the questionnaire results will be inflated; if relations with a colleague are strained, even that colleague's clearly strong leadership qualities will be underrated.
In both cases, it is wise to work out the answer options carefully, replacing standard scales with detailed verbal answers tailored to each individual question.
6. Manipulation of opinions. This point differs from the previous ones in that researchers deliberately push respondents toward the answers that suit them in order to produce a "successful" report. Common manipulation techniques are the illusion of choice and an emphasis on positive characteristics. Managers reviewing positive survey results usually do not question how the data were interpreted. It is worth looking at the questionnaire itself objectively: what its logic is, whether it steers respondents toward a particular line of answers, and whether positive and negative answer options are evenly balanced. Another frequent trick for "improving" the data is substitution of concepts. For example, if the majority of employees rated a rewards program as "satisfactory", the report may state that "the majority of the company's employees are satisfied with the new rewards program".